Applied Behavior Analysis
Cooper, Heron, Heward
Second Edition
Pearson Education Limited
Edinburgh Gate
Harlow
Essex CM20 2JE
England and Associated Companies throughout the world

Visit us on the World Wide Web at: www.pearsoned.co.uk

© Pearson Education Limited 2014

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without either the prior written permission of the publisher or a licence permitting restricted copying in the United Kingdom issued by the Copyright Licensing Agency Ltd, Saffron House, 6–10 Kirby Street, London EC1N 8TS.

All trademarks used herein are the property of their respective owners. The use of any trademark in this text does not vest in the author or publisher any trademark ownership rights in such trademarks, nor does the use of such trademarks imply any affiliation with or endorsement of this book by such owners.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

Printed in the United States of America

ISBN 10: 1-292-02321-X
ISBN 13: 978-1-292-02321-2

Table of Contents

Pearson Custom Library
All selections by John O. Cooper, Timothy E. Heron, William L. Heward.

I

Glossary ... 1
1. Definition and Characteristics of Applied Behavior Analysis ... 22
2. Basic Concepts ... 44
3. Selecting and Defining Target Behaviors ... 68
4. Measuring Behavior ... 92
5. Improving and Assessing the Quality of Behavioral Measurement ... 122
6. Constructing and Interpreting Graphic Displays of Behavioral Data ... 146
7. Analyzing Behavior Change: Basic Assumptions and Strategies ... 178
8. Reversal and Alternating Treatments Designs ... 196
9. Multiple Baseline and Changing Criterion Designs ... 220
10. Planning and Evaluating Applied Behavior Analysis Research ... 245
11. Positive Reinforcement ... 276
12. Negative Reinforcement ... 311

II

13. Schedules of Reinforcement ... 324
14. Punishment by Stimulus Presentation ... 344
15. Punishment by Removal of a Stimulus ... 374
16. Motivating Operations ... 390
17. Stimulus Control ... 408
18. Imitation ... 426
19. Chaining ... 434
20. Shaping ... 454
21. Extinction ... 468
22. Differential Reinforcement ... 481
23. Antecedent Interventions ... 498
24. Functional Behavior Assessment ... 510
25. Verbal Behavior ... 536
26. Contingency Contracting, Token Economy, and Group Contingencies ... 558
27. Self-Management ... 583
28. Generalization and Maintenance of Behavior Change ... 622
29. Ethical Considerations for Applied Behavior Analysts ... 664
Bibliography ... 685

III

Index ... 729

Glossary

A-B design A two-phase experimental design consisting of a pre-treatment baseline condition (A) followed by a treatment condition (B).

A-B-A design A three-phase experimental design consisting of an initial baseline phase (A) until steady state responding (or countertherapeutic trend) is obtained, an intervention phase in which the treatment condition (B) is implemented until the behavior has changed and steady state responding is obtained, and a return to baseline conditions (A) by withdrawing the independent variable to see whether responding “reverses” to levels observed in the initial baseline phase. (See A-B-A-B design, reversal design, withdrawal design.)

A-B-A-B design An experimental design consisting of (1) an initial baseline phase (A) until steady state responding (or countertherapeutic trend) is obtained, (2) an initial intervention phase in which the treatment variable (B) is implemented until the behavior has changed and steady state responding is obtained, (3) a return to baseline conditions (A) by withdrawing the independent variable to see whether responding “reverses” to levels observed in the initial baseline phase, and (4) a second intervention phase (B) to see whether initial treatment effects are replicated (also called reversal design, withdrawal design).

abative effect (of a motivating operation) A decrease in the current frequency of behavior that has been reinforced by the stimulus that is increased in reinforcing effectiveness by the same motivating operation. For example, food ingestion abates (decreases the current frequency of) behavior that has been reinforced by food.

ABC recording See anecdotal observation.

abolishing operation (AO) A motivating operation that decreases the reinforcing effectiveness of a stimulus, object, or event. For example, the reinforcing effectiveness of food is abolished as a result of food ingestion.

accuracy (of measurement) The extent to which observed values, the data produced by measuring an event, match the true state, or true values, of the event as it exists in nature. (See observed value and true value.)

adjunctive behavior Behavior that occurs as a collateral effect of a schedule of periodic reinforcement for other behavior; time-filling or interim activities (e.g., doodling, idle talking, smoking, drinking) that are induced by schedules of reinforcement during times when reinforcement is unlikely to be delivered. Also called schedule-induced behavior.

affirmation of the consequent A three-step form of reasoning that begins with a true antecedent–consequent (if-A-then-B) statement and proceeds as follows: (1) If A is true, then B is true; (2) B is found to be true; (3) therefore, A is true. Although other factors could be responsible for the truthfulness of A, a sound experiment affirms several if-A-then-B possibilities, each one reducing the likelihood of factors other than the independent variable being responsible for the observed changes in behavior.

alternating treatments design An experimental design in which two or more conditions (one of which may be a no-treatment control condition) are presented in rapidly alternating succession (e.g., on alternating sessions or days) independent of the level of responding; differences in responding between or among conditions are attributed to the effects of the conditions (also called concurrent schedule design, multielement design, multiple schedule design).

alternative schedule Provides reinforcement whenever the requirement of either a ratio schedule or an interval schedule—the basic schedules that make up the alternative schedule—is met, regardless of which of the component schedule’s requirements is met first.

anecdotal observation A form of direct, continuous observation in which the observer records a descriptive, temporally sequenced account of all behavior(s) of interest and the antecedent conditions and consequences for those behaviors as those events occur in the client’s natural environment (also called ABC recording).

antecedent An environmental condition or stimulus change existing or occurring prior to a behavior of interest.

From Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.

antecedent intervention A behavior change strategy that manipulates contingency-independent antecedent stimuli (motivating operations). (See noncontingent reinforcement, high-probability request sequence, and functional communication training. Contrast with antecedent control, a behavior change intervention that manipulates contingency-dependent consequence events to affect stimulus control.)

antecedent stimulus class A set of stimuli that share a common relationship. All stimuli in an antecedent stimulus class evoke the same operant behavior, or elicit the same respondent behavior. (See arbitrary stimulus class, feature stimulus class.)

applied behavior analysis (ABA) The science in which tactics derived from the principles of behavior are applied to improve socially significant behavior and experimentation is used to identify the variables responsible for the improvement in behavior.

arbitrary stimulus class Antecedent stimuli that evoke the same response but do not resemble each other in physical form or share a relational aspect such as bigger or under (e.g., peanuts, cheese, coconut milk, and chicken breasts are members of an arbitrary stimulus class if they evoke the response “sources of protein”). (Compare to feature stimulus class.)

artifact An outcome or result that appears to exist because of the way it is measured but in fact does not correspond to what actually occurred.

ascending baseline A data path that shows an increasing trend in the response measure over time. (Compare with descending baseline.)

audience Anyone who functions as a discriminative stimulus evoking verbal behavior. Different audiences may control different verbal behavior about the same topic because of a differential reinforcement history. Teens may describe the same event in different ways when talking to peers versus parents.

autoclitic A secondary verbal operant in which some aspect of a speaker’s own verbal behavior functions as an SD or an MO for additional speaker verbal behavior. The autoclitic relation can be thought of as verbal behavior about verbal behavior.

automatic punishment Punishment that occurs independent of the social mediation by others (i.e., a response product serves as a punisher independent of the social environment).

automatic reinforcement Reinforcement that occurs independent of the social mediation of others (e.g., scratching an insect bite relieves the itch).

automaticity (of reinforcement) Refers to the fact that behavior is modified by its consequences irrespective of the person’s awareness; a person does not have to recognize or verbalize the relation between her behavior and a reinforcing consequence, or even know that a consequence has occurred, for reinforcement to “work.” (Contrast with automatic reinforcement.)

aversive stimulus In general, an unpleasant or noxious stimulus; more technically, a stimulus change or condition that functions (a) to evoke a behavior that has terminated it in the past; (b) as a punisher when presented following behavior, and/or (c) as a reinforcer when withdrawn following behavior.

avoidance contingency A contingency in which a response prevents or postpones the presentation of a stimulus. (Compare with escape contingency.)

B-A-B design A three-phase experimental design that begins with the treatment condition. After steady state responding has been obtained during the initial treatment phase (B), the treatment variable is withdrawn (A) to see whether responding changes in the absence of the independent variable. The treatment variable is then reintroduced (B) in an attempt to recapture the level of responding obtained during the first treatment phase.

backup reinforcers Tangible objects, activities, or privileges that serve as reinforcers and that can be purchased with tokens.

backward chaining A teaching procedure in which a trainer completes all but the last behavior in a chain, which is performed by the learner, who then receives reinforcement for completing the chain. When the learner shows competence in performing the final step in the chain, the trainer performs all but the last two behaviors in the chain, the learner emits the final two steps to complete the chain, and reinforcement is delivered. This sequence is continued until the learner completes the entire chain independently.

backward chaining with leaps ahead A backward chaining procedure in which some steps in the task analysis are skipped; used to increase the efficiency of teaching long behavior chains when there is evidence that the skipped steps are in the learner’s repertoire.

bar graph A simple and versatile graphic format for summarizing behavioral data; shares most of the line graph’s features except that it does not have distinct data points representing successive response measures through time. Also called a histogram.

baseline A condition of an experiment in which the independent variable is not present; data obtained during baseline are the basis for determining the effects of the independent variable; a control condition that does not necessarily mean the absence of instruction or treatment, only the absence of a specific independent variable of experimental interest.

baseline logic A term sometimes used to refer to the experimental reasoning inherent in single-subject experimental designs; entails three elements: prediction, verification, and replication. (See steady state strategy.)

behavior The activity of living organisms; human behavior includes everything that people do. A technical definition: “that portion of an organism’s interaction with its environment that is characterized by detectable displacement in space through time of some part of the organism and that results in a measurable change in at least one aspect of the environment” (Johnston & Pennypacker, 1993a, p. 23). (See operant behavior, respondent behavior, response, response class.)

behavior-altering effect (of a motivating operation) An alteration in the current frequency of behavior that has been reinforced by the stimulus that is altered in effectiveness by the same motivating operation. For example, the frequency of behavior that has been reinforced with food is increased or decreased by food deprivation or food ingestion.

behavior chain A sequence of responses in which each response produces a stimulus change that functions as conditioned reinforcement for that response and as a discriminative stimulus for the next response in the chain; reinforcement for the last response in a chain maintains the reinforcing effectiveness of the stimulus changes produced by all previous responses in the chain.

behavior chain interruption strategy An intervention that relies on the participant’s skill in performing the critical elements of a chain independently; the chain is interrupted occasionally so that another behavior can be emitted.

behavior chain with a limited hold A contingency that specifies a time interval by which a behavior chain must be completed for reinforcement to be delivered.

behavior change tactic A technologically consistent method for changing behavior derived from one or more principles of behavior (e.g., differential reinforcement of other behavior, response cost); possesses sufficient generality across subjects, settings, and/or behaviors to warrant its codification and dissemination.

behavior checklist A checklist that provides descriptions of specific skills (usually in hierarchical order) and the conditions under which each skill should be observed. Some checklists are designed to assess one particular behavior or skill area. Others address multiple behaviors or skill areas. Most use a Likert scale to rate responses.

behavior trap An interrelated community of contingencies of reinforcement that can be especially powerful, producing substantial and long-lasting behavior changes. Effective behavior traps share four essential features: (a) They are “baited” with virtually irresistible reinforcers that “lure” the student to the trap; (b) only a low-effort response already in the student’s repertoire is necessary to enter the trap; (c) once inside the trap, interrelated contingencies of reinforcement motivate the student to acquire, extend, and maintain targeted academic and/or social skills; and (d) they can remain effective for a long time because students show few, if any, satiation effects.

behavioral assessment A form of assessment that involves a full range of inquiry methods (observation, interview, testing, and the systematic manipulation of antecedent or consequence variables) to identify probable antecedent and consequent controlling variables. Behavioral assessment is designed to discover resources, assets, significant others, competing contingencies, maintenance and generality factors, and possible reinforcers and/or punishers that surround the potential target behavior.

behavioral contract See contingency contract.

behavioral contrast The phenomenon in which a change in one component of a multiple schedule that increases or decreases the rate of responding on that component is accompanied by a change in the response rate in the opposite direction on the other, unaltered component of the schedule.

behavioral cusp A behavior that has sudden and dramatic consequences that extend well beyond the idiosyncratic change itself because it exposes the person to new environments, reinforcers, contingencies, responses, and stimulus controls. (See pivotal behavior.)

behavioral momentum A metaphor to describe a rate of responding and its resistance to change following an alteration in reinforcement conditions. The momentum metaphor has also been used to describe the effects produced by the high-probability (high-p) request sequence.

behaviorism The philosophy of a science of behavior; there are various forms of behaviorism. (See methodological behaviorism, radical behaviorism.)

believability The extent to which the researcher convinces herself and others that the data are trustworthy and deserve interpretation. Measures of interobserver agreement (IOA) are the most often used index of believability in applied behavior analysis. (See interobserver agreement (IOA).)

bonus response cost A procedure for implementing response cost in which the person is provided a reservoir of reinforcers that are removed in predetermined amounts contingent on the occurrence of the target behavior.

calibration Any procedure used to evaluate the accuracy of a measurement system and, when sources of error are found, to use that information to correct or improve the measurement system.

celeration The change (acceleration or deceleration) in rate of responding over time; based on count per unit of time (rate); expressed as a factor by which responding is accelerating or decelerating (multiplying or dividing); displayed with a trend line on a Standard Celeration Chart. Celeration is a generic term without specific reference to accelerating or decelerating rates of response. (See Standard Celeration Chart.)

celeration time period A unit of time (e.g., per week, per month) in which celeration is plotted on a Standard Celeration Chart. (See celeration and celeration trend line.)

celeration trend line The celeration trend line is measured as a factor by which rate multiplies or divides across the celeration time periods (e.g., rate per week, rate per month, rate per year, and rate per decade). (See celeration.)
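The multiply/divide arithmetic behind a celeration value can be sketched in a few lines. This is a minimal two-point illustration (the function name and data are mine, not from the text); in practice a celeration trend line is fit across many charted data points on a Standard Celeration Chart, not computed from a single pair of rates.

```python
# Illustrative sketch: celeration expressed as the factor by which
# response rate multiplies ("x", accelerating) or divides ("/",
# decelerating) across one celeration time period.
def celeration_factor(rate_start, rate_end):
    """Compare two rates one celeration period apart.

    Returns (factor, direction), where direction is "x" for an
    accelerating rate and "/" for a decelerating rate.
    """
    if rate_start <= 0 or rate_end <= 0:
        raise ValueError("rates must be positive counts per unit time")
    if rate_end >= rate_start:
        return rate_end / rate_start, "x"   # rate multiplied
    return rate_start / rate_end, "/"       # rate divided

# A learner reads 20 words/min one week and 40 words/min the next:
factor, sign = celeration_factor(20, 40)
print(f"{sign}{factor:.1f} per week")  # prints "x2.0 per week"
```

Expressing deceleration as a division factor (rather than a fraction below 1.0) matches the chart convention that "x2" and "/2" describe changes of equal magnitude in opposite directions.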

chained schedule A schedule of reinforcement in which the response requirements of two or more basic schedules must be met in a specific sequence before reinforcement is delivered; a discriminative stimulus is correlated with each component of the schedule.

chaining Various procedures for teaching behavior chains. (See backward chaining, backward chaining with leaps ahead, behavior chain, forward chaining.)

changing criterion design An experimental design in which an initial baseline phase is followed by a series of treatment phases consisting of successive and gradually changing criteria for reinforcement or punishment. Experimental control is evidenced by the extent the level of responding changes to conform to each new criterion.

clicker training A term popularized by Pryor (1999) for shaping behavior using conditioned reinforcement in the form of an auditory stimulus. A handheld device produces a click sound when pressed. The trainer pairs other forms of reinforcement (e.g., edible treats) with the click sound so that the sound becomes a conditioned reinforcer.

component analysis Any experiment designed to identify the active elements of a treatment condition, the relative contributions of different variables in a treatment package, and/or the necessary and sufficient components of an intervention. Component analyses take many forms, but the basic strategy is to compare levels of responding across successive phases in which the intervention is implemented with one or more components left out.

compound schedule A schedule of reinforcement consisting of two or more elements of continuous reinforcement (CRF), the four intermittent schedules of reinforcement (FR, VR, FI, VI), differential reinforcement of various rates of responding (DRH, DRL), and extinction. The elements from these basic schedules can occur successively or simultaneously and with or without discriminative stimuli; reinforcement may be contingent on meeting the requirements of each element of the schedule independently or in combination with all elements.

concept formation A complex example of stimulus control that requires stimulus generalization within a class of stimuli and discrimination between classes of stimuli.

concurrent schedule (conc) A schedule of reinforcement in which two or more contingencies of reinforcement (elements) operate independently and simultaneously for two or more behaviors.

conditional probability The likelihood that a target behavior will occur in a given circumstance; computed by calculating (a) the proportion of occurrences of behavior that were preceded by a specific antecedent variable and (b) the proportion of occurrences of problem behavior that were followed by a specific consequence. Conditional probabilities range from 0.0 to 1.0; the closer the conditional probability is to 1.0, the stronger the relationship is between the target behavior and the antecedent/consequence variable.
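The proportion calculation in this entry can be sketched directly: given an ABC-style log with one record per occurrence of the target behavior, the conditional probability for a given antecedent (or consequence) is the proportion of records that match it. The function name, log structure, and example values below are illustrative, not from the text.

```python
# Illustrative sketch: conditional probability from an ABC log in which
# each record describes one occurrence of the target behavior.
def conditional_probability(events, key, value):
    """Proportion of logged behavior occurrences whose `key` field
    ("antecedent" or "consequence") equals `value`.
    Returns a value in [0.0, 1.0]; 0.0 for an empty log."""
    if not events:
        return 0.0
    matches = sum(1 for e in events if e[key] == value)
    return matches / len(events)

# Hypothetical log: 4 occurrences of the problem behavior.
abc_log = [
    {"antecedent": "demand", "consequence": "escape"},
    {"antecedent": "demand", "consequence": "attention"},
    {"antecedent": "alone",  "consequence": "escape"},
    {"antecedent": "demand", "consequence": "escape"},
]
print(conditional_probability(abc_log, "antecedent", "demand"))     # 0.75
print(conditional_probability(abc_log, "consequence", "escape"))    # 0.75
```

In this invented log, 3 of 4 occurrences were preceded by a demand and 3 of 4 were followed by escape, so both conditional probabilities are 0.75, suggesting a fairly strong relationship on both sides of the behavior.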

conditioned motivating operation (CMO) A motivating operation whose value-altering effect depends on a learning history. For example, because of the relation between locked doors and keys, having to open a locked door is a CMO that makes keys more effective as reinforcers, and evokes behavior that has obtained such keys.

conditioned negative reinforcer A previously neutral stimulus change that functions as a negative reinforcer because of prior pairing with one or more negative reinforcers. (See negative reinforcer; compare with unconditioned negative reinforcer.)

conditioned punisher A previously neutral stimulus change that functions as a punisher because of prior pairing with one or more other punishers; sometimes called secondary or learned punisher. (Compare with unconditioned punisher.)

conditioned reflex A learned stimulus–response functional relation consisting of an antecedent stimulus (e.g., sound of refrigerator door opening) and the response it elicits (e.g., salivation); each person’s repertoire of conditioned reflexes is the product of his or her history of interactions with the environment (ontogeny). (See respondent conditioning, unconditioned reflex.)

conditioned reinforcer A stimulus change that functions as a reinforcer because of prior pairing with one or more other reinforcers; sometimes called secondary or learned reinforcer.

conditioned stimulus (CS) The stimulus component of a conditioned reflex; a formerly neutral stimulus change that elicits respondent behavior only after it has been paired with an unconditioned stimulus (US) or another CS.

confidentiality Describes a situation of trust insofar as any information regarding a person receiving or having received services may not be discussed with or otherwise made available to another person or group, unless that person has provided explicit authorization for release of such information.

conflict of interest A situation in which a person in a position of responsibility or trust has competing professional or personal interests that make it difficult to fulfill his or her duties impartially.

confounding variable An uncontrolled factor known or suspected to exert influence on the dependent variable.

consequence A stimulus change that follows a behavior of interest. Some consequences, especially those that are immediate and relevant to current motivational states, have significant influence on future behavior; others have little effect. (See punisher, reinforcer.)

contingency Refers to dependent and/or temporal relations between operant behavior and its controlling variables. (See contingent, three-term contingency.)

contingency contract A mutually agreed upon document between parties (e.g., parent and child) that specifies a contingent relationship between the completion of specified behavior(s) and access to specified reinforcer(s).

contingency reversal Exchanging the reinforcement contingencies for two topographically different responses. For example, if Behavior A results in reinforcement on an FR 1 schedule of reinforcement and Behavior B results in reinforcement being withheld (extinction), a contingency reversal consists of changing the contingencies such that Behavior A now results in extinction and Behavior B results in reinforcement on an FR 1 schedule.

contingent Describes reinforcement (or punishment) that is delivered only after the target behavior has occurred.

contingent observation A procedure for implementing time-out in which the person is repositioned within an existing setting such that observation of ongoing activities remains, but access to reinforcement is lost.

continuous measurement Measurement conducted in a manner such that all instances of the response class(es) of interest are detected during the observation period.

continuous reinforcement (CRF) A schedule of reinforcement that provides reinforcement for each occurrence of the target behavior.

contrived contingency Any contingency of reinforcement (or punishment) designed and implemented by a behavior analyst or practitioner to achieve the acquisition, maintenance, and/or generalization of a targeted behavior change. (Contrast with naturally existing contingency.)

contrived mediating stimulus Any stimulus made functional for the target behavior in the instructional setting that later prompts or aids the learner in performing the target behavior in a generalization setting.

copying a text An elementary verbal operant that is evoked by a nonvocal verbal discriminative stimulus that has point-to-point correspondence and formal similarity with the controlling response.

count A simple tally of the number of occurrences of a behavior. The observation period, or counting time, should always be noted when reporting count measures.

counting time The period of time in which a count of the number of responses emitted was recorded.

cumulative record A type of graph on which the cumulative number of responses emitted is represented on the vertical axis; the steeper the slope of the data path, the greater the response rate.

cumulative recorder A device that automatically draws cumulative records (graphs) that show the rate of response in real time; each time a response is emitted, a pen moves upward across paper that continuously moves at a constant speed.
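The y-values of a cumulative record are just a running sum of response counts, which is why a steep segment means a high rate and a flat segment means no responding. A minimal sketch with invented per-session counts:

```python
# Illustrative sketch: y-values for a cumulative record built from
# per-session response counts. Each value is the total emitted so far.
from itertools import accumulate

session_counts = [3, 5, 0, 7, 2]        # responses observed in each session
cumulative = list(accumulate(session_counts))
print(cumulative)  # prints [3, 8, 8, 15, 17]  (the repeated 8 = no responding)
```

Plotting `cumulative` against session number reproduces the defining property of the graph: the curve never decreases, and its slope at any point is the local response rate.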

data The results of measurement, usually in quantifiableform; in applied behavior analysis, it refers to measuresof some quantifiable dimension of a behavior.

data path The level and trend of behavior between succes-sive data points; created by drawing a straight line from thecenter of each data point in a given data set to the centerof the next data point in the same set.

delayed multiple baseline design A variation of the multi-ple baseline design in which an initial baseline, and per-haps intervention, are begun for one behavior (or setting,or subject), and subsequent baselines for additional be-haviors are begun in a staggered or delayed fashion.

dependent group contingency A contingency in which re-inforcement for all members of a group is dependent on thebehavior of one member of the group or the behavior of aselect group of members within the larger group.

dependent variable The variable in an experiment measured todetermine if it changes as a result of manipulations of theindependent variable; in applied behavior analysis, it repre-sents some measure of a socially significant behavior. (Seetarget behavior; compare with independent variable.)

deprivation The state of an organism with respect to howmuch time has elapsed since it has consumed or contacteda particular type of reinforcer; also refers to a procedurefor increasing the effectiveness of a reinforcer (e.g., with-holding a person’s access to a reinforcer for a specifiedperiod of time prior to a session). (See motivating oper-ation; contrast with satiation.)

descending baseline A data path that shows a decreasingtrend in the response measure over time. (Compare withascending baseline.)

descriptive functional behavior assessment Direct obser-vation of problem behavior and the antecedent and con-sequent events under naturally occurring conditions.

determinism The assumption that the universe is a lawfuland orderly place in which phenomena occur in relation toother events and not in a willy-nilly, accidental fashion.

differential reinforcement Reinforcing only those responseswithin a response class that meet a specific criterion alongsome dimension(s) (i.e., frequency, topography, duration,latency, or magnitude) and placing all other responses inthe class on extinction. (See differential reinforcement ofalternative behavior, differential reinforcement of in-compatible behavior, differential reinforcement ofother behavior, discrimination training, shaping.)

differential reinforcement of alternative behavior (DRA)A procedure for decreasing problem behavior in which re-inforcement is delivered for a behavior that serves as a de-sirable alternative to the behavior targeted for reductionand withheld following instances of the problem behavior(e.g., reinforcing completion of academic worksheet itemswhen the behavior targeted for reduction is talk-outs).

differential reinforcement of diminishing rates (DRD) Aschedule of reinforcement in which reinforcement is pro-vided at the end of a predetermined interval contingent onthe number of responses emitted during the interval beingfewer than a gradually decreasing criterion based on the in-dividual’s performance in previous intervals (e.g., fewerthan five responses per 5 minutes, fewer than four responses per 5 minutes, fewer than three responses per 5 minutes).

differential reinforcement of high rates (DRH) A schedule of reinforcement in which reinforcement is provided at the end of a predetermined interval contingent on the number of responses emitted during the interval being greater than a gradually increasing criterion based on the individual's performance in previous intervals (e.g., more than three responses per 5 minutes, more than five responses per 5 minutes, more than eight responses per 5 minutes).

differential reinforcement of incompatible behavior (DRI) A procedure for decreasing problem behavior in which reinforcement is delivered for a behavior that is topographically incompatible with the behavior targeted for reduction and withheld following instances of the problem behavior (e.g., sitting in seat is incompatible with walking around the room).

Glossary

differential reinforcement of low rates (DRL) A schedule of reinforcement in which reinforcement (a) follows each occurrence of the target behavior that is separated from the previous response by a minimum interresponse time (IRT), or (b) is contingent on the number of responses within a period of time not exceeding a predetermined criterion. Practitioners use DRL schedules to decrease the rate of behaviors that occur too frequently but should be maintained in the learner's repertoire. (See full-session DRL, interval DRL, and spaced-responding DRL.)

differential reinforcement of other behavior (DRO) A procedure for decreasing problem behavior in which reinforcement is contingent on the absence of the problem behavior during or at specific times (i.e., momentary DRO); sometimes called differential reinforcement of zero rates of responding or omission training. (See fixed-interval DRO, fixed-momentary DRO, variable-interval DRO, and variable-momentary DRO.)

direct measurement Occurs when the behavior that is measured is the same as the behavior that is the focus of the investigation. (Contrast with indirect measurement.)

direct replication An experiment in which the researcher attempts to duplicate exactly the conditions of an earlier experiment.

discontinuous measurement Measurement conducted in a manner such that some instances of the response class(es) of interest may not be detected.

discrete trial Any operant whose response rate is controlled by a given opportunity to emit the response. Each discrete response occurs when an opportunity to respond exists. Discrete trial, restricted operant, and controlled operant are synonymous technical terms. (Contrast with free operant.)

discriminated avoidance A contingency in which responding in the presence of a signal prevents the onset of a stimulus from which escape is a reinforcer. (See also discriminative stimulus, discriminated operant, free-operant avoidance, and stimulus control.)

discriminated operant An operant that occurs more frequently under some antecedent conditions than under others. (See discriminative stimulus [SD], stimulus control.)

discriminative stimulus (SD) A stimulus in the presence of which responses of some type have been reinforced and in the absence of which the same type of responses have occurred and not been reinforced; this history of differential reinforcement is the reason an SD increases the momentary frequency of the behavior. (See differential reinforcement, stimulus control, stimulus discrimination training, and stimulus delta [SΔ].)

double-blind control A procedure that prevents the subject and the observer(s) from detecting the presence or absence of the treatment variable; used to eliminate confounding of results by subject expectations, parent and teacher expectations, differential treatment by others, and observer bias. (See placebo control.)

DRI/DRA reversal technique An experimental technique that demonstrates the effects of reinforcement; it uses differential reinforcement of an incompatible or alternative behavior (DRI/DRA) as a control condition instead of a no-reinforcement (baseline) condition. During the DRI/DRA condition, the stimulus change used as reinforcement in the reinforcement condition is presented contingent on occurrences of a specified behavior that is either incompatible with the target behavior or an alternative to the target behavior. A higher level of responding during the reinforcement condition than during the DRI/DRA condition demonstrates that the changes in behavior are the result of contingent reinforcement, not simply the presentation of or contact with the stimulus event. (Compare with DRO reversal technique and noncontingent reinforcement (NCR) reversal technique.)

DRO reversal technique An experimental technique for demonstrating the effects of reinforcement by using differential reinforcement of other behavior (DRO) as a control condition instead of a no-reinforcement (baseline) condition. During the DRO condition, the stimulus change used as reinforcement in the reinforcement condition is presented contingent on the absence of the target behavior for a specified time period. A higher level of responding during the reinforcement condition than during the DRO condition demonstrates that the changes in behavior are the result of contingent reinforcement, not simply the presentation of or contact with the stimulus event. (Compare with DRI/DRA reversal technique and noncontingent reinforcement (NCR) reversal technique.)

duration A measure of the total extent of time in which a behavior occurs.

echoic An elementary verbal operant involving a response that is evoked by a verbal discriminative stimulus that has point-to-point correspondence and formal similarity with the response.

ecological assessment An assessment protocol that acknowledges complex interrelationships between environment and behavior. An ecological assessment is a method for obtaining data across multiple settings and persons.

empiricism The objective observation of the phenomena of interest; objective observations are "independent of the individual prejudices, tastes, and private opinions of the scientist. . . . Results of empirical methods are objective in that they are open to anyone's observation and do not depend on the subjective belief of the individual scientist" (Zuriff, 1985, p. 9).

environment The conglomerate of real circumstances in which the organism or referenced part of the organism exists; behavior cannot occur in the absence of environment.

escape contingency A contingency in which a response terminates (produces escape from) an ongoing stimulus. (Compare with avoidance contingency.)

escape extinction Behaviors maintained with negative reinforcement are placed on escape extinction when those behaviors are not followed by termination of the aversive stimulus; emitting the target behavior does not enable the person to escape the aversive situation.

establishing operation (EO) A motivating operation that establishes (increases) the effectiveness of some stimulus, object, or event as a reinforcer. For example, food deprivation establishes food as an effective reinforcer.

ethical codes of behavior Statements that provide guidelines for members of professional associations when deciding a course of action or conducting professional duties; standards by which graduated sanctions (e.g., reprimand, censure, expulsion) can be imposed for deviating from the code.

ethics Behaviors, practices, and decisions that address such basic and fundamental questions as: What is the right thing to do? What's worth doing? What does it mean to be a good behavior analytic practitioner?

event recording Measurement procedure for obtaining a tally or count of the number of times a behavior occurs.

evocative effect (of a motivating operation) An increase in the current frequency of behavior that has been reinforced by the stimulus that is increased in reinforcing effectiveness by the same motivating operation. For example, food deprivation evokes (increases the current frequency of) behavior that has been reinforced by food.

exact count-per-interval IOA The percentage of total intervals in which two observers recorded the same count; the most stringent description of IOA for most data sets obtained by event recording.
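
As an illustrative sketch of the calculation just described (the function name and data are hypothetical, not from the text), the index is simply the share of intervals with identical counts:

```python
def exact_count_per_interval_ioa(obs1, obs2):
    """Percentage of intervals in which two observers recorded
    the exact same count (illustrative helper)."""
    if len(obs1) != len(obs2):
        raise ValueError("observers must score the same number of intervals")
    # An interval counts as an agreement only when both counts are identical.
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return 100 * agreements / len(obs1)

# Two observers' counts across six hypothetical 10-second intervals;
# they agree in 5 of 6 intervals.
print(exact_count_per_interval_ioa([3, 0, 2, 1, 4, 0],
                                   [3, 1, 2, 1, 4, 0]))
```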

exclusion time-out A procedure for implementing time-out in which, contingent on the occurrence of a target behavior, the person is removed physically from the current environment for a specified period.

experiment A carefully controlled comparison of some measure of the phenomenon of interest (the dependent variable) under two or more different conditions in which only one factor at a time (the independent variable) differs from one condition to another.

experimental analysis of behavior (EAB) A natural science approach to the study of behavior as a subject matter in its own right founded by B. F. Skinner; methodological features include rate of response as a basic dependent variable, repeated or continuous measurement of clearly defined response classes, within-subject experimental comparisons instead of group design, visual analysis of graphed data instead of statistical inference, and an emphasis on describing functional relations between behavior and controlling variables in the environment over formal theory testing.

experimental control Two meanings: (a) the outcome of an experiment that demonstrates convincingly a functional relation, meaning that experimental control is achieved when a predictable change in behavior (the dependent variable) can be reliably produced by manipulating a specific aspect of the environment (the independent variable); and (b) the extent to which a researcher maintains precise control of the independent variable by presenting it, withdrawing it, and/or varying its value, and also by eliminating or holding constant all confounding and extraneous variables. (See confounding variable, extraneous variable, and independent variable.)

experimental design The particular type and sequence of conditions in a study so that meaningful comparisons of the effects of the presence and absence (or different values) of the independent variable can be made.

experimental question A statement of what the researcher seeks to learn by conducting the experiment; may be presented in question form and is most often found in a published account as a statement of the experiment's purpose. All aspects of an experiment's design should follow from the experimental question (also called the research question).

explanatory fiction A fictitious or hypothetical variable that often takes the form of another name for the observed phenomenon it claims to explain and contributes nothing to a functional account or understanding of the phenomenon, such as "intelligence" or "cognitive awareness" as explanations for why an organism pushes the lever when the light is on and food is available but does not push the lever when the light is off and no food is available.

external validity The degree to which a study's findings have generality to other subjects, settings, and/or behaviors. (Compare to internal validity.)

extinction (operant) The discontinuing of reinforcement of a previously reinforced behavior (i.e., responses no longer produce reinforcement); the primary effect is a decrease in the frequency of the behavior until it reaches a prereinforced level or ultimately ceases to occur. (See extinction burst, spontaneous recovery; compare with respondent extinction.)

extinction burst An increase in the frequency of responding when an extinction procedure is initially implemented.

extraneous variable Any aspect of the experimental setting (e.g., lighting, temperature) that must be held constant to prevent unplanned environmental variation.

fading A procedure for transferring stimulus control in which features of an antecedent stimulus (e.g., shape, size, position, color) controlling a behavior are gradually changed to a new stimulus while maintaining the current behavior; stimulus features can be faded in (enhanced) or faded out (reduced).

feature stimulus class Stimuli that share common physical forms or structures (e.g., made from wood, four legs, round, blue) or common relative relationships (e.g., bigger than, hotter than, higher than, next to). (Compare to arbitrary stimulus class.)

fixed interval (FI) A schedule of reinforcement in which reinforcement is delivered for the first response emitted following the passage of a fixed duration of time since the last response was reinforced (e.g., on an FI 3-minute schedule, the first response following the passage of 3 minutes is reinforced).

fixed-interval DRO (FI-DRO) A DRO procedure in which reinforcement is available at the end of intervals of fixed duration and delivered contingent on the absence of the problem behavior during each interval. (See differential reinforcement of other behavior (DRO).)

fixed-momentary DRO (FM-DRO) A DRO procedure in which reinforcement is available at specific moments of time, which are separated by a fixed amount of time, and delivered contingent on the problem behavior not occurring at those moments. (See differential reinforcement of other behavior (DRO).)

fixed ratio (FR) A schedule of reinforcement requiring a fixed number of responses for reinforcement (e.g., on an FR 4 schedule, reinforcement follows every fourth response).

fixed-time schedule (FT) A schedule for the delivery of noncontingent stimuli in which a time interval remains the same from one delivery to the next.

formal similarity A situation that occurs when the controlling antecedent stimulus and the response or response product (a) share the same sense mode (e.g., both stimulus and response are visual, auditory, or tactile) and (b) physically resemble each other. The verbal relations with formal similarity are echoic, copying a text, and imitation as it relates to sign language.

forward chaining A method for teaching behavior chains that begins with the learner being prompted and taught to perform the first behavior in the task analysis; the trainer completes the remaining steps in the chain. When the learner shows competence in performing the first step in the chain, he is then taught to perform the first two behaviors in the chain, with the trainer completing the chain. This process is continued until the learner completes the entire chain independently.

free operant Any operant behavior that results in minimal displacement of the participant in time and space. A free operant can be emitted at nearly any time; it is discrete, it requires minimal time for completion, and it can produce a wide range of response rates. Examples in ABA include (a) the number of words read during a 1-minute counting period, (b) the number of hand slaps per 6 seconds, and (c) the number of letter strokes written in 3 minutes. (Contrast with discrete trial.)

free-operant avoidance A contingency in which responses at any time during an interval prior to the scheduled onset of an aversive stimulus delay the presentation of the aversive stimulus. (Contrast with discriminated avoidance.)

frequency A ratio of count per observation time; often expressed as count per standard unit of time (e.g., per minute, per hour, per day) and calculated by dividing the number of responses recorded by the number of standard units of time in which observations were conducted; used interchangeably with rate.
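
A minimal sketch of the calculation described above (the function name and the numbers are hypothetical illustrations):

```python
def frequency(count, observation_time_units):
    """Rate of responding: responses recorded divided by the number
    of standard time units observed (e.g., minutes)."""
    return count / observation_time_units

# 24 responses recorded during a 30-minute observation:
print(frequency(24, 30))  # 0.8 responses per minute
```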

full-session DRL A procedure for implementing DRL in which reinforcement is delivered at the end of the session if the total number of responses emitted during the session does not exceed a criterion limit. (See differential reinforcement of low rates (DRL).)

function-altering effect (relevant to operant relations) A relatively permanent change in an organism's repertoire of MO, stimulus, and response relations, caused by reinforcement, punishment, an extinction procedure, or a recovery from punishment procedure. Respondent function-altering effects result from the pairing and unpairing of antecedent stimuli.

function-based definition Designates responses as members of the targeted response class solely in terms of their common effect on the environment.

functional analysis (as part of functional behavior assessment) An analysis of the purposes (functions) of problem behavior, wherein antecedents and consequences representing those in the person's natural routines are arranged within an experimental design so that their separate effects on problem behavior can be observed and measured; typically consists of four conditions: three test conditions—contingent attention, contingent escape, and alone—and a control condition in which problem behavior is expected to be low because reinforcement is freely available and no demands are placed on the person.

functional behavior assessment (FBA) A systematic method of assessment for obtaining information about the purposes (functions) a problem behavior serves for a person; results are used to guide the design of an intervention for decreasing the problem behavior and increasing appropriate behavior.

functional communication training (FCT) An antecedent intervention in which an appropriate communicative behavior is taught as a replacement behavior for problem behavior usually evoked by an establishing operation (EO); involves differential reinforcement of alternative behavior (DRA).

functional relation A verbal statement summarizing the results of an experiment (or group of related experiments) that describes the occurrence of the phenomena under study as a function of the operation of one or more specified and controlled variables in the experiment in which a specific change in one event (the dependent variable) can be produced by manipulating another event (the independent variable), and that the change in the dependent variable was unlikely the result of other factors (confounding variables); in behavior analysis expressed as b = f(x1, x2, . . .), where b is the behavior and x1, x2, etc., are environmental variables of which the behavior is a function.

functionally equivalent Serving the same function or purpose; different topographies of behavior are functionally equivalent if they produce the same consequences.

general case analysis A systematic process for identifying and selecting teaching examples that represent the full range of stimulus variations and response requirements in the generalization setting(s). (See also multiple exemplar training and teaching sufficient examples.)

generalization A generic term for a variety of behavioral processes and behavior change outcomes. (See generalization gradient, generalized behavior change, response generalization, response maintenance, setting/situation generalization, and stimulus generalization.)

generalization across subjects Changes in the behavior of people not directly treated by an intervention as a function of treatment contingencies applied to other people.

generalization probe Any measurement of a learner's performance of a target behavior in a setting and/or stimulus situation in which direct training has not been provided.

generalization setting Any place or stimulus situation that differs in some meaningful way from the instructional setting and in which performance of the target behavior is desired. (Contrast with instructional setting.)

generalized behavior change A behavior change that has not been taught directly. Generalized outcomes take one, or a combination of, three primary forms: response maintenance, stimulus/setting generalization, and response generalization. Sometimes called generalized outcome.

generalized conditioned punisher A stimulus change that, as a result of having been paired with many other punishers, functions as punishment under most conditions because it is free from the control of motivating conditions for specific types of punishment.

generalized conditioned reinforcer A conditioned reinforcer that as a result of having been paired with many other reinforcers does not depend on an establishing operation for any particular form of reinforcement for its effectiveness.

generic (tact) extension A tact evoked by a novel stimulus that shares all of the relevant or defining features associated with the original stimulus.

graph A visual format for displaying data; reveals relations among and between a series of measurements and relevant variables.

group contingency A contingency in which reinforcement for all members of a group is dependent on the behavior of (a) a person within the group, (b) a select group of members within the larger group, or (c) each member of the group meeting a performance criterion. (See dependent group contingency, independent group contingency, interdependent group contingency.)

habilitation Habilitation (adjustment) occurs when a person's repertoire has been changed such that short- and long-term reinforcers are maximized and short- and long-term punishers are minimized.

habit reversal A multiple-component treatment package for reducing unwanted habits such as fingernail biting and muscle tics; treatment typically includes self-awareness training involving response detection and procedures for identifying events that precede and trigger the response; competing response training; and motivation techniques including self-administered consequences, social support systems, and procedures for promoting the generalization and maintenance of treatment gains.

habituation A decrease in responsiveness to repeated presentations of a stimulus; most often used to describe a reduction of respondent behavior as a function of repeated presentation of the eliciting stimulus over a short span of time; some researchers suggest that the concept also applies to within-session changes in operant behavior.

hallway time-out A procedure for implementing time-out in which, contingent on the occurrence of an inappropriate behavior, the student is removed from the classroom to a hallway location near the room for a specified period of time.

hero procedure Another term for a dependent group contingency (i.e., a person earns a reward for the group).

high-probability (high-p) request sequence An antecedent intervention in which two to five easy tasks with a known history of learner compliance (the high-p requests) are presented in quick succession immediately before requesting the target task, the low-p request. Also called interspersed requests, pretask requests, or behavioral momentum.

higher order conditioning Development of a conditioned reflex by pairing of a neutral stimulus (NS) with a conditioned stimulus (CS). Also called secondary conditioning.

history of reinforcement An inclusive term referring in general to all of a person's learning experiences and more specifically to past conditioning with respect to particular response classes or aspects of a person's repertoire. (See ontogeny.)

hypothetical construct A presumed but unobserved process or entity (e.g., Freud's id, ego, and superego).

imitation A behavior controlled by any physical movement that serves as a novel model excluding vocal-verbal behavior, has formal similarity with the model, and immediately follows the occurrence of the model (e.g., within seconds of the model presentation). An imitative behavior is a new behavior emitted following a novel antecedent event (i.e., the model). (See formal similarity; contrast with echoic.)

impure tact A verbal operant involving a response that is evoked by both an MO and a nonverbal stimulus; thus, the response is part mand and part tact. (See mand and tact.)

independent group contingency A contingency in which reinforcement for each member of a group is dependent on that person's meeting a performance criterion that is in effect for all members of the group.

independent variable The variable that is systematically manipulated by the researcher in an experiment to see whether changes in the independent variable produce reliable changes in the dependent variable. In applied behavior analysis, it is usually an environmental event or condition antecedent or consequent to the dependent variable. Sometimes called the intervention or treatment variable. (Compare with dependent variable.)

indirect functional assessment Structured interviews, checklists, rating scales, or questionnaires used to obtain information from people who are familiar with the person exhibiting the problem behavior (e.g., teachers, parents, caregivers, and/or the individual him- or herself); used to identify conditions or events in the natural environment that correlate with the problem behavior.

indirect measurement Occurs when the behavior that is measured is in some way different from the behavior of interest; considered less valid than direct measurement because inferences about the relation between the data obtained and the actual behavior of interest are required. (Contrast with direct measurement.)

indiscriminable contingency A contingency that makes it difficult for the learner to discriminate whether the next response will produce reinforcement. Practitioners use indiscriminable contingencies in the form of intermittent schedules of reinforcement and delayed rewards to promote generalized behavior change.

informed consent When the potential recipient of services or participant in a research study gives his explicit permission before any assessment or treatment is provided. Full disclosure of effects and side effects must be provided. To give consent, the person must (a) demonstrate the capacity to decide, (b) do so voluntarily, and (c) have adequate knowledge of all salient aspects of the treatment.

instructional setting The environment where instruction occurs; includes all aspects of the environment, planned and unplanned, that may influence the learner's acquisition and generalization of the target behavior. (Contrast with generalization setting.)

interdependent group contingency A contingency in which reinforcement for all members of a group is dependent on each member of the group meeting a performance criterion that is in effect for all members of the group.

intermittent schedule of reinforcement (INT) A contingency of reinforcement in which some, but not all, occurrences of the behavior produce reinforcement.

internal validity The extent to which an experiment shows convincingly that changes in behavior are a function of the independent variable and not the result of uncontrolled or unknown variables. (Compare to external validity.)

interobserver agreement (IOA) The degree to which two or more independent observers report the same observed values after measuring the same events.

interresponse time (IRT) A measure of temporal locus; defined as the elapsed time between two successive responses.

interval-by-interval IOA An index of the agreement between observers for data obtained by interval recording or time sampling measurement; calculated for a given session or measurement period by comparing the two observers' recordings of the occurrence or nonoccurrence of the behavior in each observation interval and dividing the number of intervals of agreement by the total number of intervals and multiplying by 100. Also called the point-by-point or total interval IOA. (Compare to scored-interval IOA and unscored-interval IOA.)
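
The calculation just described can be sketched as follows (the function name and the observation records are hypothetical illustrations):

```python
def interval_by_interval_ioa(obs1, obs2):
    """Intervals in which both observers agree (both scored an
    occurrence, or both scored a nonoccurrence), divided by the
    total number of intervals, multiplied by 100."""
    if len(obs1) != len(obs2):
        raise ValueError("observers must score the same number of intervals")
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return 100 * agreements / len(obs1)

# O = occurrence scored, N = nonoccurrence scored, across ten intervals;
# the observers disagree only in the fifth interval.
observer_1 = "OONNONONNO"
observer_2 = "OONNNNONNO"
print(interval_by_interval_ioa(observer_1, observer_2))  # 90.0
```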

interval DRL A procedure for implementing DRL in which the total session is divided into equal intervals and reinforcement is provided at the end of each interval in which the number of responses during the interval is equal to or below a criterion limit. (See differential reinforcement of low rates (DRL).)

intraverbal An elementary verbal operant that is evoked by a verbal discriminative stimulus and that does not have point-to-point correspondence with that verbal stimulus.

irreversibility A situation that occurs when the level of responding observed in a previous phase cannot be reproduced even though the experimental conditions are the same as they were during the earlier phase.

lag reinforcement schedule A schedule of reinforcement in which reinforcement is contingent on a response being different in some specified way (e.g., different topography) from the previous response (e.g., Lag 1) or a specified number of previous responses (e.g., Lag 2 or more).

latency See response latency.

level The value on the vertical axis around which a series of behavioral measures converge.

level system A component of some token economy systems in which participants advance up (or down) through a succession of levels contingent on their behavior at the current level. The performance criteria and sophistication or difficulty of the behaviors required at each level are higher than those of preceding levels; as participants advance to higher levels, they gain access to more desirable reinforcers, increased privileges, and greater independence.

limited hold A situation in which reinforcement is available only during a finite time following the elapse of an FI or VI interval; if the target response does not occur within the time limit, reinforcement is withheld and a new interval begins (e.g., on an FI 5-minute schedule with a limited hold of 30 seconds, the first correct response following the elapse of 5 minutes is reinforced only if that response occurs within 30 seconds after the end of the 5-minute interval).

line graph Based on a Cartesian plane, a two-dimensional area formed by the intersection of two perpendicular lines. Any point within the plane represents a specific relation between the two dimensions described by the intersecting lines. It is the most common graphic format for displaying data in applied behavior analysis.

listener Someone who provides reinforcement for verbal behavior. A listener may also serve as an audience evoking verbal behavior. (Contrast with speaker.)

local response rate The average rate of response during a smaller period of time within a larger period for which an overall response rate has been given. (See overall response rate.)

magnitude The force or intensity with which a response is emitted; provides important quantitative parameters used in defining and verifying the occurrence of some response classes. Responses meeting those criteria are measured and reported by one or more fundamental or derivative measures such as frequency, duration, or latency. Sometimes called amplitude.

maintenance Two different meanings in applied behavior analysis: (a) the extent to which the learner continues to perform the target behavior after a portion or all of the intervention has been terminated (i.e., response maintenance), a dependent variable or characteristic of behavior; and (b) a condition in which treatment has been discontinued or partially withdrawn, an independent variable or experimental condition.

mand An elementary verbal operant that is evoked by an MO and followed by specific reinforcement.

massed practice A self-directed behavior change technique in which the person forces himself to perform an undesired behavior (e.g., a compulsive ritual) repeatedly, which sometimes decreases the future frequency of the behavior.

matching law The allocation of responses to choices available on concurrent schedules of reinforcement; rates of responding across choices are distributed in proportions that match the rates of reinforcement received from each choice alternative.
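
The proportional relation can be illustrated with a small sketch of the strict-matching form for two alternatives (the function name and the reinforcement rates are hypothetical, not from the text):

```python
def matching_prediction(r1, r2):
    """Predicted proportion of responses allocated to alternative 1
    when alternatives 1 and 2 deliver reinforcement at rates r1 and r2
    (strict matching: responses distributed in proportion to
    obtained reinforcement)."""
    return r1 / (r1 + r2)

# Alternative 1 delivers 40 reinforcers per hour, alternative 2 delivers 20;
# strict matching predicts about two-thirds of responses go to alternative 1.
print(matching_prediction(40, 20))
```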

matching-to-sample A procedure for investigating conditional relations and stimulus equivalence. A matching-to-sample trial begins with the participant making a response that presents or reveals the sample stimulus; next, the sample stimulus may or may not be removed, and two or more comparison stimuli are presented. The participant then selects one of the comparison stimuli. Responses that select a comparison stimulus that matches the sample stimulus are reinforced, and no reinforcement is provided for responses selecting the nonmatching comparison stimuli.

mean count-per-interval IOA The average percentage of agreement between the counts reported by two observers in a measurement period comprised of a series of smaller counting times; a more conservative measure of IOA than total count IOA.
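
A sketch of this calculation (the function name is hypothetical, and treating an interval in which both observers record zero as 100% agreement is an assumption of this sketch):

```python
def mean_count_per_interval_ioa(obs1, obs2):
    """Average, across counting intervals, of the per-interval agreement
    (smaller count / larger count * 100). An interval where both observers
    record zero is counted as 100% agreement here (an assumption)."""
    if len(obs1) != len(obs2):
        raise ValueError("observers must score the same number of intervals")
    ratios = []
    for a, b in zip(obs1, obs2):
        if a == b == 0:
            ratios.append(100.0)  # both observers recorded nothing
        else:
            ratios.append(100.0 * min(a, b) / max(a, b))
    return sum(ratios) / len(ratios)

# Counts from two observers across four counting times:
# per-interval agreement is 100, 80, 100, 80 -> mean 90.0
print(mean_count_per_interval_ioa([2, 4, 0, 5], [2, 5, 0, 4]))  # 90.0
```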

mean duration-per-occurrence IOA An IOA index for duration-per-occurrence data; also a more conservative and usually more meaningful assessment of IOA for total duration data; calculated for a given session or measurement period by computing the average percentage of agreement of the durations reported by two observers for each occurrence of the target behavior.

measurement bias Nonrandom measurement error; a form of inaccurate measurement in which the data consistently overestimate or underestimate the true value of an event.

measurement by permanent product A method of measuring behavior after it has occurred by recording the effects that the behavior produced on the environment.

mentalism An approach to explaining behavior that assumes that a mental, or “inner,” dimension exists that differs from a behavioral dimension and that phenomena in this dimension either directly cause or at least mediate some forms of behavior, if not all.

metaphorical (tact) extension A tact evoked by a novel stimulus that shares some, but not all, of the relevant features of the original stimulus.

methodological behaviorism A philosophical position that views behavioral events that cannot be publicly observed as outside the realm of science.

metonymical (tact) extension A tact evoked by a novel stimulus that shares none of the relevant features of the original stimulus configuration, but some irrelevant yet related feature has acquired stimulus control.

mixed schedule (mix) A compound schedule of reinforcement consisting of two or more basic schedules of reinforcement (elements) that occur in an alternating, usually random, sequence; no discriminative stimuli are correlated with the presence or absence of each element of the schedule, and reinforcement is delivered for meeting the response requirements of the element in effect at any time.

momentary time sampling A measurement method in which the presence or absence of behavior is recorded at precisely specified time intervals. (Contrast with interval recording.)

motivating operation (MO) An environmental variable that (a) alters (increases or decreases) the reinforcing or punishing effectiveness of some stimulus, object, or event; and (b) alters (increases or decreases) the current frequency of all behavior that has been reinforced or punished by that stimulus, object, or event. (See abative effect, abolishing operation (AO), behavior-altering effect, evocative effect, establishing operation (EO), value-altering effect.)

multielement design See alternating treatments design.

multiple baseline across behaviors design A multiple baseline design in which the treatment variable is applied to two or more different behaviors of the same subject in the same setting.

multiple baseline across settings design A multiple baseline design in which the treatment variable is applied to the same behavior of the same subject across two or more different settings, situations, or time periods.

multiple baseline across subjects design A multiple baseline design in which the treatment variable is applied to the same behavior of two or more subjects (or groups) in the same setting.

multiple baseline design An experimental design that begins with the concurrent measurement of two or more behaviors in a baseline condition, followed by the application of the treatment variable to one of the behaviors while baseline conditions remain in effect for the other behavior(s). After maximum change has been noted in the first behavior, the treatment variable is applied in sequential fashion to each of the other behaviors in the design. Experimental control is demonstrated if each behavior shows similar changes when, and only when, the treatment variable is introduced.

multiple control (of verbal behavior) There are two types of multiple control: (a) convergent multiple control occurs when a single verbal response is a function of more than one variable (i.e., what is said has more than one antecedent source of control); and (b) divergent multiple control occurs when a single antecedent variable affects the strength of more than one response.

multiple exemplar training Instruction that provides the learner with practice with a variety of stimulus conditions, response variations, and response topographies to ensure the acquisition of desired stimulus controls and response forms; used to promote both setting/situation generalization and response generalization. (See teaching sufficient examples.)

multiple probe design A variation of the multiple baseline design that features intermittent measures, or probes, during baseline. It is used to evaluate the effects of instruction on skill sequences in which it is unlikely that the subject can improve performance on later steps in the sequence before learning prior steps.

multiple schedule (mult) A compound schedule of reinforcement consisting of two or more basic schedules of reinforcement (elements) that occur in an alternating, usually random, sequence; a discriminative stimulus is correlated with the presence or absence of each element of the schedule, and reinforcement is delivered for meeting the response requirements of the element in effect at any time.

multiple treatment interference The effects of one treatment on a subject’s behavior being confounded by the influence of another treatment administered in the same study.

multiple treatment reversal design Any experimental design that uses the experimental methods and logic of the reversal tactic to compare the effects of two or more experimental conditions to baseline and/or to one another (e.g., A-B-A-B-C-B-C, A-B-A-C-A-D-A-C-A-D, A-B-A-B-B+C-B-B+C).

naive observer An observer who is unaware of the study’s purpose and/or the experimental conditions in effect during a given phase or observation period. Data obtained by a naive observer are less likely to be influenced by observers’ expectations.

naturally existing contingency Any contingency of reinforcement (or punishment) that operates independent of the behavior analyst’s or practitioner’s efforts; includes socially mediated contingencies contrived by other people and already in effect in the relevant setting. (Contrast with contrived contingency.)

negative punishment A response is followed immediately by the removal of a stimulus (or a decrease in the intensity of the stimulus) that decreases the future frequency of similar responses under similar conditions; sometimes called Type II punishment. (Contrast with positive punishment.)

negative reinforcer A stimulus whose termination (or reduction in intensity) functions as reinforcement. (Contrast with positive reinforcer.)

neutral stimulus (NS) A stimulus change that does not elicit respondent behavior. (Compare to conditioned stimulus (CS), unconditioned stimulus (US).)

noncontingent reinforcement (NCR) A procedure in which stimuli with known reinforcing properties are presented on fixed-time (FT) or variable-time (VT) schedules completely independent of behavior; often used as an antecedent intervention to reduce problem behavior. (See fixed-time schedule (FT), variable-time schedule (VT).)

noncontingent reinforcement (NCR) reversal technique An experimental control technique that demonstrates the effects of reinforcement by using noncontingent reinforcement (NCR) as a control condition instead of a no-reinforcement (baseline) condition. During the NCR condition, the stimulus change used as reinforcement in the reinforcement condition is presented on a fixed or variable time schedule independent of the subject’s behavior. A higher level of responding during the reinforcement condition than during the NCR condition demonstrates that the changes in behavior are the result of contingent reinforcement, not simply the presentation of or contact with the stimulus event. (Compare with DRI/DRA reversal technique, DRO reversal technique.)

nonexclusion time-out A procedure for implementing time-out in which, contingent on the occurrence of the target behavior, the person remains within the setting, but does not have access to reinforcement, for a specified period.

normalization As a philosophy and principle, the belief that people with disabilities should, to the maximum extent possible, be physically and socially integrated into the mainstream of society regardless of the degree or type of disability. As an approach to intervention, the use of progressively more typical settings and procedures “to establish and/or maintain personal behaviors which are as culturally normal as possible” (Wolfensberger, 1972, p. 28).

observed value A measure produced by an observation and measurement system. Observed values serve as the data that the researcher and others will interpret to form conclusions about an investigation. (Compare with true value.)

observer drift Any unintended change in the way an observer uses a measurement system over the course of an investigation that results in measurement error; often entails a shift in the observer’s interpretation of the original definitions of the target behavior subsequent to being trained. (See measurement bias, observer reactivity.)

observer reactivity Influence on the data reported by an observer that results from the observer’s awareness that others are evaluating the data he reports. (See also measurement bias and observer drift.)

ontogeny The history of the development of an individual organism during its lifetime. (See history of reinforcement; compare to phylogeny.)

operant behavior Behavior that is selected, maintained, and brought under stimulus control as a function of its consequences; each person’s repertoire of operant behavior is a product of his history of interactions with the environment (ontogeny).

operant conditioning The basic process by which operant learning occurs; consequences (stimulus changes immediately following responses) result in an increased (reinforcement) or decreased (punishment) frequency of the same type of behavior under similar motivational and environmental conditions in the future. (See motivating operation, punishment, reinforcement, response class, stimulus control.)

overall response rate The rate of response over a given time period. (See local response rate.)

overcorrection A behavior change tactic based on positive punishment in which, contingent on the problem behavior, the learner is required to engage in effortful behavior directly or logically related to fixing the damage caused by the behavior. Forms of overcorrection are restitutional overcorrection and positive practice overcorrection. (See positive practice overcorrection, restitutional overcorrection.)

parametric analysis An experiment designed to discover the differential effects of a range of values of an independent variable.

parsimony The practice of ruling out simple, logical explanations, experimentally or conceptually, before considering more complex or abstract explanations.

partial-interval recording A time sampling method for measuring behavior in which the observation period is divided into a series of brief time intervals (typically from 5 to 10 seconds). The observer records whether the target behavior occurred at any time during the interval. Partial-interval recording is not concerned with how many times the behavior occurred during the interval or how long the behavior was present, just that it occurred at some point during the interval; it tends to overestimate the proportion of the observation period in which the behavior actually occurred.
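
The scoring rule this entry describes can be sketched as follows; the event times, session length, and interval size are hypothetical:

```python
def partial_interval_record(event_times, session_len, interval_len):
    """Score each interval 1 if the behavior occurred at any point
    within it, else 0 (counts and durations are ignored)."""
    n_intervals = session_len // interval_len
    scored = [0] * n_intervals
    for t in event_times:
        idx = int(t // interval_len)
        if idx < n_intervals:
            scored[idx] = 1
    return scored

# Three responses in a 60-s session scored in 10-s intervals:
record = partial_interval_record([3, 7, 42], 60, 10)  # -> [1, 0, 0, 0, 1, 0]
```

Note how the two responses at 3 s and 7 s collapse into a single scored interval, while the proportion of scored intervals (2 of 6) can still exceed the actual proportion of time the behavior occupied, illustrating the overestimation described above.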

partition time-out An exclusion procedure for implementing time-out in which, contingent on the occurrence of the target behavior, the person remains within the time-in setting, but stays behind a wall, shield, or barrier that restricts the view.

percentage A ratio (i.e., a proportion) formed by combining the same dimensional quantities, such as count (number ÷ number) or time (duration ÷ duration; latency ÷ latency); expressed as a number of parts per 100; typically expressed as a ratio of the number of responses of a certain type per total number of responses (or opportunities or intervals in which such a response could have occurred). A percentage presents a proportional quantity per 100.

philosophic doubt An attitude that the truthfulness and validity of all scientific theory and knowledge should be continually questioned.

phylogeny The history of the natural evolution of a species. (Compare to ontogeny.)

pivotal behavior A behavior that, when learned, produces corresponding modifications or covariation in other untrained behaviors. (Compare to behavioral cusp.)

placebo control A procedure that prevents a subject from detecting the presence or absence of the treatment variable. To the subject the placebo condition appears the same as the treatment condition (e.g., a placebo pill contains an inert substance but looks, feels, and tastes exactly like a pill that contains the treatment drug). (See double-blind control.)

planned activity check (PLACHECK) A variation of momentary time sampling in which the observer records whether each person in a group is engaged in the target behavior at specific points in time; provides a measure of “group behavior.”

planned ignoring A procedure for implementing time-out in which social reinforcers—usually attention, physical contact, and verbal interaction—are withheld for a brief period contingent on the occurrence of the target behavior.

point-to-point correspondence A relation between the stimulus and response or response product that occurs when the beginning, middle, and end of the verbal stimulus matches the beginning, middle, and end of the verbal response. The verbal relations with point-to-point correspondence are echoic, copying a text, imitation as it relates to sign language, textual, and transcription.

positive practice overcorrection A form of overcorrection in which, contingent on an occurrence of the target behavior, the learner is required to repeat a correct form of the behavior, or a behavior incompatible with the problem behavior, a specified number of times; entails an educative component. (See overcorrection, restitutional overcorrection.)

positive punishment A behavior is followed immediately by the presentation of a stimulus that decreases the future frequency of the behavior; sometimes called Type I punishment. (Contrast with negative punishment.)

positive reinforcement Occurs when a behavior is followed immediately by the presentation of a stimulus that increases the future frequency of the behavior in similar conditions. (Contrast with negative reinforcement.)

positive reinforcer A stimulus whose presentation or onset functions as reinforcement. (Contrast with negative reinforcer.)

postreinforcement pause The absence of responding for a period of time following reinforcement; an effect commonly produced by fixed interval (FI) and fixed ratio (FR) schedules of reinforcement.

practice effects Improvements in performance resulting from opportunities to perform a behavior repeatedly so that baseline measures can be obtained.

prediction A statement of the anticipated outcome of a presently unknown or future measurement; one of three components of the experimental reasoning, or baseline logic, used in single-subject research designs. (See replication, verification.)

Premack principle A principle that states that making the opportunity to engage in a high-probability behavior contingent on the occurrence of a low-frequency behavior will function as reinforcement for the low-frequency behavior. (See also response-deprivation hypothesis.)

principle of behavior A statement describing a functional relation between behavior and one or more of its controlling variables with generality across organisms, species, settings, behaviors, and time (e.g., extinction, positive reinforcement); an empirical generalization inferred from many experiments demonstrating the same functional relation.

procedural fidelity See treatment integrity.

programming common stimuli A tactic for promoting setting/situation generalization by making the instructional setting similar to the generalization setting; the two-step process involves (1) identifying salient stimuli that characterize the generalization setting and (2) incorporating those stimuli into the instructional setting.

progressive schedule of reinforcement A schedule that systematically thins each successive reinforcement opportunity independent of the individual’s behavior; progressive ratio (PR) and progressive interval (PI) schedules are thinned using arithmetic or geometric progressions.

punisher A stimulus change that decreases the future frequency of behavior that immediately precedes it. (See aversive stimulus, conditioned punisher, unconditioned punisher.)

punishment Occurs when a stimulus change immediately follows a response and decreases the future frequency of that type of behavior in similar conditions. (See negative punishment, positive punishment.)

radical behaviorism A thoroughgoing form of behaviorism that attempts to understand all human behavior, including private events such as thoughts and feelings, in terms of controlling variables in the history of the person (ontogeny) and the species (phylogeny).

rate A ratio of count per observation time; often expressed as count per standard unit of time (e.g., per minute, per hour, per day) and calculated by dividing the number of responses recorded by the number of standard units of time in which observations were conducted; used interchangeably with frequency. The ratio is formed by combining the different dimensional quantities of count and time (i.e., count ÷ time). Ratios formed from different dimensional quantities retain their dimensional quantities. Rate and frequency in behavioral measurement are synonymous terms. (Contrast with percentage.)

ratio strain A behavioral effect associated with abrupt increases in ratio requirements when moving from denser to thinner reinforcement schedules; common effects include avoidance, aggression, and unpredictable pauses or cessation in responding.

reactivity Effects of an observation and measurement procedure on the behavior being measured. Reactivity is most likely when measurement procedures are obtrusive, especially if the person being observed is aware of the observer’s presence and purpose.

recovery from punishment procedure The occurrence of a previously punished type of response without its punishing consequence. This procedure is analogous to the extinction of previously reinforced behavior and has the effect of undoing the effect of the punishment.

reflex A stimulus–response relation consisting of an antecedent stimulus and the respondent behavior it elicits (e.g., bright light–pupil contraction). Unconditioned and conditioned reflexes protect against harmful stimuli, help regulate the internal balance and economy of the organism, and promote reproduction. (See conditioned reflex, respondent behavior, respondent conditioning, unconditioned reflex.)

reflexive conditioned motivating operation (CMO-R) A stimulus that acquires MO effectiveness by preceding some form of worsening or improvement. It is exemplified by the warning stimulus in a typical escape–avoidance procedure, which establishes its own offset as reinforcement and evokes all behavior that has accomplished that offset.

reflexivity A type of stimulus-to-stimulus relation in which the learner, without any prior training or reinforcement for doing so, selects a comparison stimulus that is the same as the sample stimulus (e.g., A = A). Reflexivity would be demonstrated in the following matching-to-sample procedure: The sample stimulus is a picture of a tree, and the three comparison stimuli are a picture of a mouse, a picture of a cookie, and a duplicate of the tree picture used as the sample stimulus. The learner selects the picture of the tree without specific reinforcement in the past for making the tree-picture-to-tree-picture match. (It is also called generalized identity matching.) (See stimulus equivalence; compare to transitivity, symmetry.)

reinforcement Occurs when a stimulus change immediately follows a response and increases the future frequency of that type of behavior in similar conditions. (See negative reinforcement, positive reinforcement.)

reinforcer A stimulus change that increases the future frequency of behavior that immediately precedes it. (See conditioned reinforcer, unconditioned reinforcer.)

reinforcer-abolishing effect (of a motivating operation) A decrease in the reinforcing effectiveness of a stimulus, object, or event caused by a motivating operation. For example, food ingestion abolishes (decreases) the reinforcing effectiveness of food.

reinforcer assessment Refers to a variety of direct, empirical methods for presenting one or more stimuli contingent on a target response and measuring their effectiveness as reinforcers.

reinforcer-establishing effect (of a motivating operation) An increase in the reinforcing effectiveness of a stimulus, object, or event caused by a motivating operation. For example, food deprivation establishes (increases) the reinforcing effectiveness of food.

relevance of behavior rule Holds that only behaviors likely to produce reinforcement in the person’s natural environment should be targeted for change.

reliability (of measurement) Refers to the consistency of measurement; specifically, the extent to which repeated measurement of the same event yields the same values.

repeatability Refers to the fact that a behavior can occur repeatedly through time (i.e., behavior can be counted); one of the three dimensional quantities of behavior from which all behavioral measurements are derived. (See count, frequency, rate, celeration, temporal extent, and temporal locus.)

repertoire All of the behaviors a person can do; or a set of behaviors relevant to a particular setting or task (e.g., gardening, mathematical problem solving).

replication (a) Repeating conditions within an experiment to determine the reliability of effects and increase internal validity. (See baseline logic, prediction, verification.) (b) Repeating whole experiments to determine the generality of findings of previous experiments to other subjects, settings, and/or behaviors. (See direct replication, external validity, systematic replication.)

resistance to extinction The relative frequency with which operant behavior is emitted during extinction.

respondent behavior The response component of a reflex; behavior that is elicited, or induced, by antecedent stimuli. (See reflex, respondent conditioning.)

respondent conditioning A stimulus–stimulus pairing procedure in which a neutral stimulus (NS) is presented with an unconditioned stimulus (US) until the neutral stimulus becomes a conditioned stimulus that elicits the conditioned response (also called classical or Pavlovian conditioning). (See conditioned reflex, higher order conditioning.)

respondent extinction The repeated presentation of a conditioned stimulus (CS) in the absence of the unconditioned stimulus (US); the CS gradually loses its ability to elicit the conditioned response until the conditioned reflex no longer appears in the individual’s repertoire.

response A single instance or occurrence of a specific class or type of behavior. Technical definition: an “action of an organism’s effector. An effector is an organ at the end of an efferent nerve fiber that is specialized for altering its environment mechanically, chemically, or in terms of other energy changes” (Michael, 2004, p. 8). (See response class.)

response blocking A procedure in which the therapist physically intervenes as soon as the learner begins to emit a problem behavior to prevent completion of the targeted behavior.

response class A group of responses of varying topography, all of which produce the same effect on the environment.

response cost The contingent loss of reinforcers (e.g., a fine), producing a decrease of the frequency of behavior; a form of negative punishment.

response-deprivation hypothesis A model for predicting whether contingent access to one behavior will function as reinforcement for engaging in another behavior based on whether access to the contingent behavior represents a restriction of the activity compared to the baseline level of engagement. (See Premack principle.)

response differentiation A behavior change produced by differential reinforcement: Reinforced members of the current response class occur with greater frequency, and unreinforced members occur less frequently (undergo extinction); the overall result is the emergence of a new response class.

response generalization The extent to which a learner emits untrained responses that are functionally equivalent to the trained target behavior. (Compare to response maintenance and setting/situation generalization.)

response latency A measure of temporal locus; the elapsed time from the onset of a stimulus (e.g., task direction, cue) to the initiation of a response.

response maintenance The extent to which a learner continues to perform the target behavior after a portion or all of the intervention responsible for the behavior’s initial appearance in the learner’s repertoire has been terminated. Often called maintenance, durability, behavioral persistence, and (incorrectly) resistance to extinction. (Compare to response generalization and setting/situation generalization.)

restitutional overcorrection A form of overcorrection in which, contingent on the problem behavior, the learner is required to repair the damage or return the environment to its original state and then to engage in additional behavior to bring the environment to a condition vastly better than it was in prior to the misbehavior. (See overcorrection and positive practice overcorrection.)

reversal design Any experimental design in which the researcher attempts to verify the effect of the independent variable by “reversing” responding to a level obtained in a previous condition; encompasses experimental designs in which the independent variable is withdrawn (A-B-A-B) or reversed in its focus (e.g., DRI/DRA). (See A-B-A design, A-B-A-B design, B-A-B, DRI/DRA reversal technique, DRO reversal technique, noncontingent reinforcement (NCR) reversal technique.)

rule-governed behavior Behavior controlled by a rule (i.e., a verbal statement of an antecedent-behavior-consequence contingency); enables human behavior (e.g., fastening a seatbelt) to come under the indirect control of temporally remote or improbable but potentially significant consequences (e.g., avoiding injury in an auto accident). Often used in contrast to contingency-shaped behavior, a term used to indicate behavior selected and maintained by controlled, temporally close consequences.

satiation A decrease in the frequency of operant behavior presumed to be the result of continued contact with or consumption of a reinforcer that has followed the behavior; also refers to a procedure for reducing the effectiveness of a reinforcer (e.g., presenting a person with copious amounts of a reinforcing stimulus prior to a session). (See motivating operation; contrast with deprivation.)

scatterplot A two-dimensional graph that shows the relative distribution of individual measures in a data set with respect to the variables depicted by the x and y axes. Data points on a scatterplot are not connected.

schedule of reinforcement A rule specifying the environmental arrangements and response requirements for reinforcement; a description of a contingency of reinforcement.

schedule thinning Changing a contingency of reinforcement by gradually increasing the response ratio or the extent of the time interval; it results in a lower rate of reinforcement per responses, time, or both.

science A systematic approach to the understanding of natural phenomena (as evidenced by description, prediction, and control) that relies on determinism as its fundamental assumption, empiricism as its primary rule, experimentation as its basic strategy, replication as a requirement for believability, parsimony as a value, and philosophic doubt as its guiding conscience.

scored-interval IOA An interobserver agreement index based only on the intervals in which either observer recorded the occurrence of the behavior; calculated by dividing the number of intervals in which the two observers agreed that the behavior occurred by the number of intervals in which either or both observers recorded the occurrence of the behavior and multiplying by 100. Scored-interval IOA is recommended as a measure of agreement for behaviors that occur at low rates because it ignores the intervals in which agreement by chance is highly likely. (Compare to interval-by-interval IOA and unscored-interval IOA.)
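
The formula in this entry can be expressed directly in code; the interval records below are hypothetical:

```python
def scored_interval_ioa(obs1, obs2):
    """Agreements divided by the number of intervals in which either
    observer scored the behavior, times 100; intervals scored by
    neither observer are ignored."""
    scored_by_either = 0
    agreements = 0
    for a, b in zip(obs1, obs2):
        if a or b:
            scored_by_either += 1
            if a and b:
                agreements += 1
    return 100 * agreements / scored_by_either

# Interval-by-interval records (1 = behavior scored) from two observers:
ioa = scored_interval_ioa([1, 0, 1, 0, 0, 1], [1, 0, 0, 0, 0, 1])
```

In this example the observers agree in two of the three intervals scored by either observer, so the index is about 66.7%; the three intervals neither observer scored do not inflate the agreement, which is why the index suits low-rate behavior.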

selection by consequences The fundamental principle underlying operant conditioning; the basic tenet is that all forms of (operant) behavior, from simple to complex, are selected, shaped, and maintained by their consequences during an individual’s lifetime; Skinner’s concept of selection by consequences is parallel to Darwin’s concept of natural selection of genetic structures in the evolution of species.

self-contract Contingency contract that a person makes with himself, incorporating a self-selected task and reward as well as personal monitoring of task completions and self-delivery of the reward.

self-control Two meanings: (a) A person’s ability to “delay gratification” by emitting a response that will produce a larger (or higher quality) delayed reward over a response that produces a smaller but immediate reward (sometimes considered impulse control); (b) A person’s behaving in a certain way so as to change a subsequent behavior (i.e., to self-manage her own behavior). Skinner (1953) conceptualized self-control as a two-response phenomenon: The controlling response affects variables in such a way as to change the probability of the controlled response. (See self-management.)

self-evaluation A procedure in which a person compares his performance of a target behavior with a predetermined goal or standard; often a component of self-management. Sometimes called self-assessment.

self-instruction Self-generated verbal responses, covert or overt, that function as rules or response prompts for a desired behavior; as a self-management tactic, self-instruction can guide a person through a behavior chain or sequence of tasks.

self-management The personal application of behavior change tactics that produces a desired change in behavior.

self-monitoring A procedure whereby a person systematically observes his behavior and records the occurrence or nonoccurrence of a target behavior. (Also called self-recording or self-observation.)

semilogarithmic chart A two-dimensional graph with a logarithmically scaled y axis so that equal distances on the vertical axis represent changes in behavior that are of equal proportion. (See Standard Celeration Chart.)

sensory extinction The process by which behaviors maintained by automatic reinforcement are placed on extinction by masking or removing the sensory consequence.

sequence effects The effects on a subject’s behavior in a given condition that are the result of the subject’s experience with a prior condition.

setting/situation generalization The extent to which a learner emits the target behavior in a setting or stimulus situation that is different from the instructional setting.

shaping Using differential reinforcement to produce a series of gradually changing response classes; each response class is a successive approximation toward a terminal behavior. Members of an existing response class are selected for differential reinforcement because they more closely resemble the terminal behavior. (See differential reinforcement, response class, response differentiation, successive approximations.)

single-subject designs A wide variety of research designs that use a form of experimental reasoning called baseline logic to demonstrate the effects of the independent variable on the behavior of individual subjects. (Also called single-case, within-subject, and intra-subject designs.) (See also alternating treatments design, baseline logic, changing criterion design, multiple baseline design, reversal design, steady state strategy.)

social validity Refers to the extent to which target behaviors are appropriate, intervention procedures are acceptable, and important and significant changes in target and collateral behaviors are produced.

solistic (tact) extension A verbal response evoked by a stimulus property that is only indirectly related to the proper tact relation (e.g., Yogi Berra’s classic malapropism: “Baseball is ninety percent mental; the other half is physical.”)

spaced-responding DRL A procedure for implementing DRL in which reinforcement follows each occurrence of the target behavior that is separated from the previous response by a minimum interresponse time (IRT). (See differential reinforcement of low rates (DRL).)

speaker Someone who engages in verbal behavior by emitting mands, tacts, intraverbals, autoclitics, and so on. A speaker is also someone who uses sign language, gestures, signals, written words, codes, pictures, or any form of verbal behavior. (Contrast with listener.)

split-middle line of progress A line drawn through a series of graphed data points that shows the overall trend in the data; drawn through the intersections of the vertical and horizontal middles of each half of the charted data and then adjusted up or down so that half of all the data points fall on or above and half fall on or below the line.
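The split-middle procedure described above is mechanical enough to express in code. The sketch below is illustrative only; the function name, the handling of odd-length series (the middle point is dropped), and the use of session indices 0, 1, 2, ... as x values are assumptions of this sketch, not conventions from the text:

```python
from statistics import median

def split_middle_line(values):
    """Quarter-intersect / split-middle trend line for a series of
    session-by-session measures (sessions are 0, 1, 2, ...).
    Returns (slope, intercept) of the adjusted line."""
    n = len(values)
    first, second = values[: n // 2], values[(n + 1) // 2 :]
    # Mid-session (x) and mid-rate (y) of each half of the data.
    x1, y1 = median(range(len(first))), median(first)
    x2, y2 = median(range((n + 1) // 2, n)), median(second)
    slope = (y2 - y1) / (x2 - x1)
    intercept = y1 - slope * x1
    # Split-middle adjustment: shift the line vertically until half of
    # all data points fall on or above it and half on or below it.
    resid = sorted(y - (slope * x + intercept) for x, y in enumerate(values))
    intercept += median(resid)
    return slope, intercept
```

For the perfectly linear series 1, 2, 3, 4, 5, 6 the procedure recovers the line y = x + 1; on real, variable data the median-based halves make the line resistant to outliers, which is why the split-middle method is preferred over eyeballing a trend.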


Glossary

spontaneous recovery A behavioral effect associated with extinction in which the behavior suddenly begins to occur after its frequency has decreased to its prereinforcement level or stopped entirely.

stable baseline Data that show no evidence of an upward or downward trend; all of the measures fall within a relatively small range of values. (See steady state responding.)

Standard Celeration Chart A multiply–divide chart with six base-10 (× 10, ÷ 10) cycles on the vertical axis that can accommodate response rates as low as 1 per 24 hours (0.000695 per minute) to as high as 1,000 per minute. It enables the standardized charting of celeration, a factor by which rate of behavior multiplies or divides per unit of time. (See semilogarithmic chart.)
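Celeration as defined above can be computed directly from two rates and the number of days between them. A minimal sketch, normalized per week as is standard in celeration charting (the function name is hypothetical):

```python
from math import log10

def weekly_celeration(rate_start, rate_end, days):
    """Celeration: the factor by which response rate multiplies (or
    divides) per week. Values > 1 indicate an accelerating rate;
    values < 1, a decelerating rate."""
    return 10 ** (7 * (log10(rate_end) - log10(rate_start)) / days)
```

For example, a rate that doubles over one week has a celeration of ×2.0; an unchanged rate has a celeration of 1.0.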

steady state responding A pattern of responding that exhibits relatively little variation in its measured dimensional quantities over a period of time.

steady state strategy Repeatedly exposing a subject to a given condition while trying to eliminate or control extraneous influences on the behavior and obtaining a stable pattern of responding before introducing the next condition.

stimulus “An energy change that affects an organism through its receptor cells” (Michael, 2004, p. 7).

stimulus class A group of stimuli that share specified common elements along formal (e.g., size, color), temporal (e.g., antecedent or consequent), and/or functional (e.g., discriminative stimulus) dimensions.

stimulus control A situation in which the frequency, latency, duration, or amplitude of a behavior is altered by the presence or absence of an antecedent stimulus. (See discrimination, discriminative stimulus.)

stimulus delta (SΔ) A stimulus in the presence of which a given behavior has not produced reinforcement in the past. (Contrast with discriminative stimulus (SD).)

stimulus discrimination training The conventional procedure requires one behavior and two antecedent stimulus conditions. Responses are reinforced in the presence of one stimulus condition, the SD, but not in the presence of the other stimulus, the SΔ.

stimulus equivalence The emergence of accurate responding to untrained and nonreinforced stimulus–stimulus relations following the reinforcement of responses to some stimulus–stimulus relations. A positive demonstration of reflexivity, symmetry, and transitivity is necessary to meet the definition of equivalence.

stimulus generalization When an antecedent stimulus has a history of evoking a response that has been reinforced in its presence, the same type of behavior tends to be evoked by stimuli that share similar physical properties with the controlling antecedent stimulus.

stimulus generalization gradient A graphic depiction of the extent to which behavior that has been reinforced in the presence of a specific stimulus condition is emitted in the presence of other stimuli. The gradient shows the relative degree of stimulus generalization and stimulus control (or discrimination). A flat slope across test stimuli shows a high degree of stimulus generalization and relatively little discrimination between the trained stimulus and other stimuli; a slope that drops sharply from its highest point corresponding to the trained stimulus indicates a high degree of stimulus control (discrimination) and relatively little stimulus generalization.

stimulus preference assessment A variety of procedures used to determine the stimuli that a person prefers, the relative preference values (high versus low) of those stimuli, the conditions under which those preference values remain in effect, and their presumed value as reinforcers.

stimulus–stimulus pairing A procedure in which two stimuli are presented at the same time, usually repeatedly for a number of trials, which often results in one stimulus acquiring the function of the other stimulus.

successive approximations The sequence of new response classes that emerge during the shaping process as the result of differential reinforcement; each successive response class is closer in form to the terminal behavior than the response class it replaces.

surrogate conditioned motivating operation (CMO-S) A stimulus that acquires its MO effectiveness by being paired with another MO and has the same value-altering and behavior-altering effects as the MO with which it was paired.

symmetry A type of stimulus-to-stimulus relationship in which the learner, without prior training or reinforcement for doing so, demonstrates the reversibility of matched sample and comparison stimuli (e.g., if A = B, then B = A). Symmetry would be demonstrated in the following matching-to-sample procedure: The learner is taught, when presented with the spoken word car (sample stimulus A), to select a comparison picture of a car (comparison B). When presented with the picture of a car (sample stimulus B), without additional training or reinforcement, the learner selects the comparison spoken word car (comparison A). (See stimulus equivalence; compare to reflexivity, transitivity.)

systematic desensitization A behavior therapy treatment for anxieties, fears, and phobias that involves substituting one response, generally muscle relaxation, for the unwanted behavior (the fear and anxiety). The client practices relaxing while imagining anxiety-producing situations in a sequence from the least fearful to the most fearful.

systematic replication An experiment in which the researcher purposefully varies one or more aspects of an earlier experiment. A systematic replication that reproduces the results of previous research not only demonstrates the reliability of the earlier findings but also adds to the external validity of the earlier findings by showing that the same effect can be obtained under different conditions.

tact An elementary verbal operant evoked by a nonverbal discriminative stimulus and followed by generalized conditioned reinforcement.

tandem schedule A schedule of reinforcement identical to the chained schedule except that, like the mixed schedule, the tandem schedule does not use discriminative stimuli with the elements in the chain. (See chained schedule, mixed schedule.)

target behavior The response class selected for intervention; can be defined either functionally or topographically.

task analysis The process of breaking a complex skill or series of behaviors into smaller, teachable units; also refers to the results of this process.

teaching loosely Randomly varying functionally irrelevant stimuli within and across teaching sessions; promotes setting/situation generalization by reducing the likelihood that (a) a single or small group of noncritical stimuli will acquire exclusive control over the target behavior and (b) the learner’s performance of the target behavior will be impeded or “thrown off” should he encounter any of the “loose” stimuli in the generalization setting.

teaching sufficient examples A strategy for promoting generalized behavior change that consists of teaching the learner to respond to a subset of all of the relevant stimulus and response examples and then assessing the learner’s performance on untrained examples. (See multiple exemplar training.)

temporal extent Refers to the fact that every instance of behavior occurs during some amount of time; one of the three dimensional quantities of behavior from which all behavioral measurements are derived. (See repeatability and temporal locus.)

temporal locus Refers to the fact that every instance of behavior occurs at a certain point in time with respect to other events (i.e., when in time behavior occurs can be measured); often measured in terms of response latency and interresponse time (IRT); one of the three dimensional quantities of behavior from which all behavioral measurements are derived. (See repeatability, temporal extent.)

terminal behavior The end product of shaping.

textual An elementary verbal operant involving a response that is evoked by a verbal discriminative stimulus that has point-to-point correspondence, but not formal similarity, between the stimulus and the response product.

three-term contingency The basic unit of analysis in the analysis of operant behavior; encompasses the temporal and possibly dependent relations among an antecedent stimulus, behavior, and consequence.

time-out from positive reinforcement The contingent withdrawal of the opportunity to earn positive reinforcement or the loss of access to positive reinforcers for a specified time; a form of negative punishment (also called time-out).

time-out ribbon A procedure for implementing nonexclusion time-out in which a child wears a ribbon or wristband that becomes discriminative for receiving reinforcement. Contingent on misbehavior, the ribbon is removed and access to social and other reinforcers is unavailable for a specific period. When time-out ends, the ribbon or band is returned to the child and time-in begins.

time sampling A measurement of the presence or absence of behavior within specific time intervals. It is most useful with continuous and high-rate behaviors. (See momentary time sampling, partial-interval recording, and whole-interval recording.)

token An object that is awarded contingent on appropriate behavior and that serves as the medium of exchange for backup reinforcers.

token economy A system whereby participants earn generalized conditioned reinforcers (e.g., tokens, chips, points) as an immediate consequence for specific behaviors; participants accumulate tokens and exchange them for items and activities from a menu of backup reinforcers. (See generalized conditioned reinforcer.)

topography The physical form or shape of a behavior.

topography-based definition Defines instances of the targeted response class by the shape or form of the behavior.

total count IOA The simplest indicator of IOA for event recording data; based on comparing the total count recorded by each observer per measurement period; calculated by dividing the smaller of the two observers’ counts by the larger count and multiplying by 100.

total duration IOA A relevant index of IOA for total duration measurement; computed by dividing the shorter of the two durations reported by the observers by the longer duration and multiplying by 100.
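Total count IOA and total duration IOA reduce to the same smaller-over-larger arithmetic, so a single sketch covers both (the function name and the handling of two zero totals are assumptions of this sketch, not from the text):

```python
def smaller_over_larger_ioa(a, b):
    """Total count IOA or total duration IOA: divide the smaller of the
    two observers' totals by the larger and multiply by 100."""
    if a == b:  # includes the case where both observers report 0
        return 100.0
    return min(a, b) / max(a, b) * 100
```

For example, if one observer records 9 responses and the other records 10, agreement is 90%; the same call works for two session durations measured in seconds.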

total-task chaining A variation of forward chaining in which the learner receives training on each behavior in the chain during each session.

transcription An elementary verbal operant involving a spoken verbal stimulus that evokes a written, typed, or finger-spelled response. Like the textual, there is point-to-point correspondence between the stimulus and the response product, but no formal similarity.

transitive conditioned motivating operation (CMO-T) An environmental variable that, as a result of a learning history, establishes (or abolishes) the reinforcing effectiveness of another stimulus and evokes (or abates) the behavior that has been reinforced by that other stimulus.

transitivity A derived (i.e., untrained) stimulus–stimulus relation (e.g., A = C, C = A) that emerges as a product of training two other stimulus–stimulus relations (e.g., A = B and B = C). For example, transitivity would be demonstrated if, after training the two stimulus–stimulus relations shown in 1 and 2 below, the relation shown in 3 emerges without additional instruction or reinforcement: (1) If A (e.g., the spoken word bicycle) = B (e.g., the picture of a bicycle) (see Figure 3), and (2) B (the picture of a bicycle) = C (e.g., the written word bicycle) (see Figure 4), then (3) C (the written word bicycle) = A (the spoken name, bicycle) (see Figure 5). (See stimulus equivalence; compare to reflexivity, symmetry.)
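The reflexivity, symmetry, and transitivity relations that define stimulus equivalence can be illustrated computationally by treating relations as ordered pairs and deriving the untrained pairs an equivalence account predicts. The function and the stimulus labels below are hypothetical illustrations, not material from the text:

```python
def derived_relations(trained):
    """Given directly trained sample->comparison pairs, return the full
    set of relations an equivalence class predicts will emerge without
    further training: reflexivity (A=A), symmetry (B=A), and
    transitivity/equivalence (A=C, C=A)."""
    relations = set(trained)
    stimuli = {s for pair in trained for s in pair}
    relations |= {(s, s) for s in stimuli}        # reflexivity
    relations |= {(b, a) for a, b in trained}     # symmetry
    changed = True
    while changed:                                # transitive closure
        new = {(a, c) for a, b in relations for b2, c in relations if b == b2}
        changed = not new <= relations
        relations |= new
    return relations

# Train A=B and B=C; the derived set includes C=A, the equivalence test.
taught = {("spoken 'car'", "picture of car"), ("picture of car", "written CAR")}
```

With two trained relations among three stimuli, the closure yields all nine possible pairs, mirroring the full matching-to-sample matrix an equivalence class predicts.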

treatment drift An undesirable situation in which the independent variable of an experiment is applied differently during later stages than it was at the outset of the study.

treatment integrity The extent to which the independent variable is applied exactly as planned and described and no other unplanned variables are administered inadvertently along with the planned treatment. (Also called procedural fidelity.)

trend The overall direction taken by a data path. It is described in terms of direction (increasing, decreasing, or zero trend), degree (gradual or steep), and the extent of variability of data points around the trend. Trend is used in predicting future measures of the behavior under unchanging conditions.

trial-by-trial IOA An IOA index for discrete trial data based on comparing the observers’ counts (0 or 1) on a trial-by-trial, or item-by-item, basis; yields a more conservative and meaningful index of IOA for discrete trial data than does total count IOA.
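A minimal sketch of the trial-by-trial calculation (the function name is assumed; the observers' records are equal-length sequences of 0/1 trial scores):

```python
def trial_by_trial_ioa(obs1, obs2):
    """Percentage of discrete trials on which two observers recorded
    the same outcome (0 or 1)."""
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return agreements / len(obs1) * 100
```

For example, observers who agree on 3 of 4 trials obtain 75% agreement, even if their total counts (3 vs. 3) would have produced a misleading 100% total count IOA.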

trials-to-criterion A special form of event recording; a measure of the number of responses or practice opportunities needed for a person to achieve a preestablished level of accuracy or proficiency.

true value A measure accepted as a quantitative description of the true state of some dimensional quantity of an event as it exists in nature. Obtaining true values requires “special or extraordinary precautions to ensure that all possible sources of error have been avoided or removed” (Johnston & Pennypacker, 1993a, p. 136). (Compare with observed value.)

Type I error An error that occurs when a researcher concludes that the independent variable had an effect on the dependent variable, when no such relation exists; a false positive. (Contrast with Type II error.)

Type II error An error that occurs when a researcher concludes that the independent variable had no effect on the dependent variable, when in truth it did; a false negative. (Contrast with Type I error.)

unconditioned motivating operation (UMO) A motivating operation whose value-altering effect does not depend on a learning history. For example, food deprivation increases the reinforcing effectiveness of food without the necessity of any learning history.

unconditioned negative reinforcer A stimulus that functions as a negative reinforcer as a result of the evolutionary development of the species (phylogeny); no prior learning is involved (e.g., shock, loud noise, intense light, extreme temperatures, strong pressure against the body). (See negative reinforcer; compare with conditioned negative reinforcer.)

unconditioned punisher A stimulus change that decreases the frequency of any behavior that immediately precedes it irrespective of the organism’s learning history with the stimulus. Unconditioned punishers are products of the evolutionary development of the species (phylogeny), meaning that all members of a species are more or less susceptible to punishment by the presentation of unconditioned punishers (also called primary or unlearned punishers). (Compare with conditioned punisher.)

unconditioned reflex An unlearned stimulus–response functional relation consisting of an antecedent stimulus (e.g., food in mouth) that elicits the response (e.g., salivation); a product of the phylogenic evolution of a given species; all biologically intact members of a species are born with similar repertoires of unconditioned reflexes. (See conditioned reflex.)

unconditioned reinforcer A stimulus change that increases the frequency of any behavior that immediately precedes it irrespective of the organism’s learning history with the stimulus. Unconditioned reinforcers are the product of the evolutionary development of the species (phylogeny). Also called primary or unlearned reinforcer. (Compare with conditioned reinforcer.)

unconditioned stimulus (US) The stimulus component of an unconditioned reflex; a stimulus change that elicits respondent behavior without any prior learning.

unpairing Two kinds: (a) the occurrence alone of a stimulus that acquired its function by being paired with an already effective stimulus, or (b) the occurrence of the stimulus in the absence as well as in the presence of the effective stimulus. Both kinds of unpairing undo the result of the pairing: the occurrence alone of the stimulus that became a conditioned reinforcer; and the occurrence of the unconditioned reinforcer in the absence as well as in the presence of the conditioned reinforcer.

unscored-interval IOA An interobserver agreement index based only on the intervals in which either observer recorded the nonoccurrence of the behavior; calculated by dividing the number of intervals in which the two observers agreed that the behavior did not occur by the number of intervals in which either or both observers recorded the nonoccurrence of the behavior and multiplying by 100. Unscored-interval IOA is recommended as a measure of agreement for behaviors that occur at high rates because it ignores the intervals in which agreement by chance is highly likely. (Compare to interval-by-interval IOA, scored-interval IOA.)
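As a worked example of the calculation above, the following sketch scores two observers' interval records, where True means the observer scored the behavior as occurring in that interval (the function name is an assumption of this sketch):

```python
def unscored_interval_ioa(obs1, obs2):
    """Agreement on nonoccurrence, computed only over intervals in
    which at least one observer recorded a nonoccurrence."""
    relevant = [(a, b) for a, b in zip(obs1, obs2) if not a or not b]
    agreements = sum(1 for a, b in relevant if not a and not b)
    return agreements / len(relevant) * 100
```

In the test case below, the two intervals both observers scored as occurrences are excluded, so agreement is 1 of 2 relevant intervals, or 50%; interval-by-interval IOA over all four intervals would have reported a rosier 75%.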

validity (of measurement) The extent to which data obtained from measurement are directly relevant to the target behavior of interest and to the reason(s) for measuring it.

value-altering effect (of a motivating operation) An alteration in the reinforcing effectiveness of a stimulus, object, or event as a result of a motivating operation. For example, the reinforcing effectiveness of food is altered as a result of food deprivation and food ingestion.

variability The frequency and extent to which multiple measures of behavior yield different outcomes.

variable baseline Data points that do not consistently fall within a narrow range of values and do not suggest any clear trend.

variable interval (VI) A schedule of reinforcement that provides reinforcement for the first correct response following the elapse of variable durations of time occurring in a random or unpredictable order. The mean duration of the intervals is used to describe the schedule (e.g., on a VI 10-minute schedule, reinforcement is delivered for the first response following an average of 10 minutes since the last reinforced response, but the time that elapses following the last reinforced response might range from 30 seconds or less to 25 minutes or more).

variable-interval DRO (VI-DRO) A DRO procedure in which reinforcement is available at the end of intervals of variable duration and delivered contingent on the absence of the problem behavior during the interval. (See differential reinforcement of other behavior (DRO).)

variable-momentary DRO (VM-DRO) A DRO procedure in which reinforcement is available at specific moments of time, which are separated by variable amounts of time in random sequence, and delivered if the problem behavior is not occurring at those times. (See differential reinforcement of other behavior (DRO).)

variable ratio (VR) A schedule of reinforcement requiring a varying number of responses for reinforcement. The number of responses required varies around a random number; the mean number of responses required for reinforcement is used to describe the schedule (e.g., on a VR 10 schedule an average of 10 responses must be emitted for reinforcement, but the number of responses required following the last reinforced response might range from 1 to 30 or more).

variable-time schedule (VT) A schedule for the delivery of noncontingent stimuli in which the interval of time from one delivery to the next randomly varies around a given time. For example, on a VT 1-minute schedule, the delivery-to-delivery interval might range from 5 seconds to 2 minutes, but the average interval would be 1 minute.
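Interval generation for VI and VT schedules can be sketched as follows. The uniform distribution used here is only one simple way to vary intervals around a mean; applied studies often draw interval values from a Fleshler–Hoffman progression instead, and the function name is my own:

```python
import random

def vt_intervals(mean_seconds, n, seed=1):
    """Generate n inter-delivery intervals for a VT (or VI) schedule by
    sampling uniformly between 0 and twice the mean, so intervals vary
    unpredictably but average out to the programmed mean."""
    rng = random.Random(seed)  # seeded so a session plan is reproducible
    return [rng.uniform(0, 2 * mean_seconds) for _ in range(n)]
```

A VT 60-second plan built this way contains intervals anywhere from near 0 to 120 seconds, yet their mean converges on 60 as the session gets long, matching how the schedule is described.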

verbal behavior Behavior whose reinforcement is mediated by a listener; includes both vocal-verbal behavior (e.g., saying “Water, please” to get water) and nonvocal-verbal behavior (e.g., pointing to a glass of water to get water). Encompasses the subject matter usually treated as language and topics such as thinking, grammar, composition, and understanding.

verification One of three components of the experimental reasoning, or baseline logic, used in single-subject research designs; accomplished by demonstrating that the prior level of baseline responding would have remained unchanged had the independent variable not been introduced. Verifying the accuracy of the original prediction reduces the probability that some uncontrolled (confounding) variable was responsible for the observed change in behavior. (See prediction, replication.)

visual analysis A systematic approach for interpreting the results of behavioral research and treatment programs that entails visual inspection of graphed data for variability, level, and trend within and between experimental conditions.

whole-interval recording A time sampling method for measuring behavior in which the observation period is divided into a series of brief time intervals (typically from 5 to 15 seconds). At the end of each interval, the observer records whether the target behavior occurred throughout the entire interval; tends to underestimate the proportion of the observation period that many behaviors actually occurred.
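The underestimation the entry mentions is easy to demonstrate with a small scoring function. The names and the one-second sampling grain are assumptions of this sketch, which also assumes the session length is a multiple of the interval length:

```python
def whole_interval_score(event_seconds, session_len, interval_len):
    """Score a session with whole-interval recording: an interval is
    scored only if the behavior occurred at every second within it.
    event_seconds is the set of seconds during which behavior occurred.
    Returns the percentage of intervals scored."""
    scored = 0
    for start in range(0, session_len, interval_len):
        if all(t in event_seconds for t in range(start, start + interval_len)):
            scored += 1
    return scored / (session_len // interval_len) * 100
```

In the test case, behavior occupies 50% of a 30-second session, but whole-interval recording with 10-second intervals scores only one of three intervals, illustrating the underestimation bias.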

withdrawal design A term used by some authors as a synonym for A-B-A-B design; also used to describe experiments in which an effective treatment is sequentially or partially withdrawn to promote the maintenance of behavior changes. (See A-B-A-B design, reversal design.)



Definition and Characteristics of Applied Behavior Analysis

Key Terms

applied behavior analysis (ABA)
behaviorism
determinism
empiricism
experiment
experimental analysis of behavior (EAB)
explanatory fiction
functional relation
hypothetical construct
mentalism
methodological behaviorism
parsimony
philosophic doubt
radical behaviorism
replication
science

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List©, Third Edition

Content Area 2: Definition and Characteristics

2-1 Explain behavior in accordance with the philosophical assumptions of behavior analysis, such as the lawfulness of behavior, empiricism, experimental analysis, and parsimony.

2-2 Explain determinism as it relates to behavior analysis.

2-3 Distinguish between mentalistic and environmental explanations of behavior.

2-4 Distinguish among the experimental analysis of behavior, applied behavior analysis, and behavioral technologies.

2-5 Describe and explain behavior, including private events, in behavior analytic (nonmentalistic) terms.

2-6 Use the dimensions of applied behavior analysis (Baer, Wolf, & Risley, 1968) for evaluating interventions to determine if they are behavior analytic.

Content Area 3: Principles, Processes, and Concepts

3-10 Define and provide examples of functional relations.

© 2006 The Behavior Analyst Certification Board, Inc.,® (BACB®). All rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

From Chapter 1 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


Applied behavior analysis is a science devoted to the understanding and improvement of human behavior. But other fields have similar intent. What sets applied behavior analysis apart? The answer lies in its focus, goals, and methods. Applied behavior analysts focus on objectively defined behaviors of social significance; they intervene to improve the behaviors under study while demonstrating a reliable relationship between their interventions and the behavioral improvements; and they use the methods of scientific inquiry: objective description, quantification, and controlled experimentation. In short, applied behavior analysis, or ABA, is a scientific approach for discovering environmental variables that reliably influence socially significant behavior and for developing a technology of behavior change that takes practical advantage of those discoveries.

This chapter provides a brief outline of the history and development of behavior analysis, discusses the philosophy that underlies the science, and identifies the defining dimensions and characteristics of applied behavior analysis. Because applied behavior analysis is an applied science, we begin with an overview of some precepts fundamental to all scientific disciplines.

Some Basic Characteristics and a Definition of Science

Used properly, the word science refers to a systematic approach for seeking and organizing knowledge about the natural world. Before offering a definition of science, we discuss the purpose of science and the basic assumptions and attitudes that guide the work of all scientists, irrespective of their fields of study.

Purpose of Science

The overall goal of science is to achieve a thorough understanding of the phenomena under study (socially important behaviors, in the case of applied behavior analysis). Science differs from other sources of knowledge or ways we obtain knowledge about the world around us (e.g., contemplation, common sense, logic, authority figures, religious or spiritual beliefs, political campaigns, advertisements, testimonials). Science seeks to discover nature’s truths. Although it is frequently misused, science is not a tool for validating the cherished or preferred versions of “the truth” held by any group, corporation, government, or institution. Therefore, scientific knowledge must be separated from any personal, political, economic, or other reasons for which it was sought.

Different types of scientific investigations yield knowledge that enables one or more of three levels of understanding: description, prediction, and control. Each level of understanding contributes to the scientific knowledge base of a given field.

Description

Systematic observation enhances the understanding of a given phenomenon by enabling scientists to describe it accurately. Descriptive knowledge consists of a collection of facts about the observed events that can be quantified, classified, and examined for possible relations with other known facts, a necessary and important activity for any scientific discipline. The knowledge obtained from descriptive studies often suggests possible hypotheses or questions for additional research.

For example, White (1975) reported the results of observing the “natural rates” of approval (verbal praise or encouragement) and disapproval (criticisms, reproach) by 104 classroom teachers in grades 1 to 12. Two major findings were that (a) the rates of teacher praise dropped with each grade level, and (b) in every grade after second, the rate at which teachers delivered statements of disapproval to students exceeded the rate of teacher approval. The results of this descriptive study helped to stimulate dozens of subsequent studies aimed at discovering the factors responsible for the disappointing findings or at increasing teachers’ rates of praise (e.g., Alber, Heward, & Hippler, 1999; Martens, Hiralall, & Bradley, 1997; Sutherland, Wehby, & Yoder, 2002; Van Acker, Grant, & Henry, 1996).

Prediction

A second level of scientific understanding occurs when repeated observations reveal that two events consistently covary with each other. That is, in the presence of one event (e.g., approaching winter) another event occurs (or fails to occur) with some specified probability (e.g., certain birds fly south). When systematic covariation between two events is found, this relationship, termed a correlation, can be used to predict the relative probability that one event will occur, based on the presence of the other event.
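Covariation of this kind is typically quantified with a correlation coefficient such as Pearson's r, which ranges from -1 (perfect inverse covariation) through 0 (no covariation) to +1 (perfect direct covariation). The following illustration is a generic statistics sketch, not a method from the authors:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Correlation coefficient for paired observations; quantifies
    covariation between two measured events, not causation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A strong r licenses prediction (the second level of understanding) but, as the next paragraph stresses, not a causal claim.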

Because no variables are manipulated or controlled by the researcher, correlational studies cannot demonstrate whether any of the observed variables are responsible for the changes in the other variable(s), and no such relations should be inferred (Johnston & Pennypacker, 1993a). For example, although a strong correlation exists between hot weather and an increased incidence of drowning deaths, we should not assume that a hot and humid day causes anyone to drown. Hot weather also correlates with other factors, such as an increased number of people (both swimmers and nonswimmers) seeking relief in the water, and many instances of drowning have been found to be a function of factors such as the use of alcohol or drugs, the relative swimming skills of the victims, strong riptides, and the absence of supervision by lifeguards.

Results of correlational studies can, however, suggest the possibility of causal relations, which can then be explored in later studies. The most common type of correlational study reported in the applied behavior analysis literature compares the relative rates or conditional probabilities of two or more observed (but not manipulated) variables (e.g., Atwater & Morris, 1988; Symons, Hoch, Dahl, & McComas, 2003; Thompson & Iwata, 2001). For example, McKerchar and Thompson (2004) found correlations between problem behavior exhibited by 14 preschool children and the following consequent events: teacher attention (100% of the children), presentation of some material or item to the child (79% of the children), and escape from instructional tasks (33% of the children). The results of this study not only provide empirical validation for the social consequences typically used in clinical settings to analyze the variables maintaining children’s problem behavior, but also increase confidence in the prediction that interventions based on the findings from such assessments will be relevant to the conditions that occur naturally in preschool classrooms (Iwata et al., 1994). In addition, by revealing the high probabilities with which teachers responded to problem behavior in ways that are likely to maintain and strengthen it, McKerchar and Thompson’s findings also point to the need to train teachers in more effective ways to respond to problem behavior.

Control

The ability to predict with a certain degree of confidence is a valuable and useful result of science; prediction enables preparation. However, the greatest potential benefits from science can be derived from the third, and highest, level of scientific understanding: control. Evidence of the kinds of control that can be derived from scientific findings in the physical and biological sciences surrounds us in the everyday technologies we take for granted: pasteurized milk and the refrigerators we store it in; flu shots and the automobiles we drive to go get them; aspirin and the televisions that bombard us with news and advertisements about the drug.

Functional relations, the primary products of basic and applied behavior analytic research, provide the kind of scientific understanding that is most valuable and useful to the development of a technology for changing behavior. A functional relation exists when a well-controlled experiment reveals that a specific change in one event (the dependent variable) can reliably be produced by specific manipulations of another event (the independent variable), and that the change in the dependent variable was unlikely to be the result of other extraneous factors (confounding variables).

1. Skinner (1953) noted that although telescopes and cyclotrons give us a “dramatic picture of science in action” (p. 12), and science could not have advanced very far without them, such devices and apparatus are not science themselves. “Nor is science to be identified with precise measurement. We can measure and be mathematical without being scientific at all, just as we may be scientific without these aids” (p. 12). Scientific instruments bring scientists into greater contact with their subject matter and, with measurement and mathematics, enable a more precise description and control of key variables.

Johnston and Pennypacker (1980) described functional relations as “the ultimate product of a natural scientific investigation of the relation between behavior and its determining variables” (p. 16).

Such a “co-relation” is expressed as y = f(x), where x is the independent variable or argument of the function, and y is the dependent variable. In order to determine if an observed relation is truly functional, it is necessary to demonstrate the operation of the values of x in isolation and show that they are sufficient for the production of y. . . . [H]owever, a more powerful relation exists if necessity can be shown (that y occurs only if x occurs). The most complete and elegant form of empirical inquiry involves applying the experimental method to identifying functional relations. (Johnston & Pennypacker, 1993a, p. 239)

The understanding gained by the scientific discovery of functional relations is the basis of applied technologies in all fields. Almost all of the research studies cited in this text are experimental analyses that have demonstrated or discovered a functional relation between a target behavior and one or more environmental variables. However, it is important to understand that functional relations are also correlations (Cooper, 2005), and that:

In fact, all we ever really know is that two events are related or “co-related” in some way. To say that one “causes” another is to say that one is solely the result of the other. To know this, it is necessary to know that no other factors are playing a contributing role. This is virtually impossible to know because it requires identifying all possible such factors and then showing that they are not relevant. (Johnston & Pennypacker, 1993a, p. 240)

Attitudes of Science

Science is first of all a set of attitudes.

—B. F. Skinner (1953, p. 12)

The definition of science lies not in test tubes, spectrometers, or electron accelerators, but in the behavior of scientists. To begin to understand any science, we need to look past the apparatus and instrumentation that are most readily apparent and examine what scientists do.1 The pursuit of knowledge can properly be called science when it is carried out according to general methodological precepts and expectations that define science. Although there is no “scientific method” in the sense of a prescribed set of steps or rules that must be followed, all scientists share a fundamental assumption about the nature of events that are amenable to investigation by science, general notions about basic strategy, and perspectives on how to view their findings. These attitudes of science—determinism, empiricism, experimentation, replication, parsimony, and philosophic doubt—constitute a set of overriding assumptions and values that guide the work of all scientists (Whaley & Surratt, 1968).

Determinism

Science is predicated on the assumption of determinism. Scientists presume that the universe, or at least that part of it they intend to probe with the methods of science, is a lawful and orderly place in which all phenomena occur as the result of other events. In other words, events do not just happen willy-nilly; they are related in systematic ways to other factors, which are themselves physical phenomena amenable to scientific investigation.

If the universe were governed by accidentalism, a philosophical position antithetical to determinism that holds that events occur by accident or without cause, or by fatalism, the belief that events are predetermined, the scientific discovery and technological use of functional relations to improve things would be impossible.

If we are to use the methods of science in the field of human affairs, we must assume that behavior is lawful and determined. We must expect to discover that what a man does is the result of specifiable conditions and that once these conditions have been discovered, we can anticipate and to some extent determine his actions. (Skinner, 1953, p. 6)

Determinism plays a pivotal dual role in the conduct of scientific practice: It is at once a philosophical stance that does not lend itself to proof and the confirmation that is sought by each experiment. In other words, the scientist first assumes lawfulness and then proceeds to look for lawful relations (Delprato & Midgley, 1992).

Empiricism

Scientific knowledge is built on, above all, empiricism—the practice of objective observation of the phenomena of interest. Objectivity in this sense means “independent of the individual prejudices, tastes, and private opinions of the scientist. . . . Results of empirical methods are objective in that they are open to anyone’s observation and do not depend on the subjective belief of the individual scientist” (Zuriff, 1985, p. 9).

In the prescientific era, as well as in nonscientific and pseudoscientific activities today, knowledge was (and is) the product of contemplation, speculation, personal opinion, authority, and the “obvious” logic of common sense. The scientist’s empirical attitude, however, demands objective observation based on thorough description, systematic and repeated measurement, and precise quantification of the phenomena of interest.

As in every scientific field, empiricism is the foremost rule in the science of behavior. Every effort to understand, predict, and improve behavior hinges on the behavior analyst’s ability to completely define, systematically observe, and accurately and reliably measure occurrences and nonoccurrences of the behavior of interest.

Experimentation

Experimentation is the basic strategy of most sciences. Whaley and Surratt (1968) used the following anecdote to introduce the need for conducting experiments.

A man who lived in a suburban dwelling area was surprised one evening to see his neighbor bow to the four winds, chant a strange melody, and dance around his front lawn beating a small drum. After witnessing the same ritual for over a month, the man became overwhelmed with curiosity and decided to look into the matter.

“Why do you go through this same ritual each evening?” the man asked his neighbor.

“It keeps my house safe from tigers,” the neighbor replied.

“Good grief!” the man said. “Don’t you know there isn’t a tiger within a thousand miles of here?”

“Yeah,” the neighbor smiled. “Sure works, doesn’t it!” (pp. 23–2 to 23–3)

When events are observed to covary or occur in close temporal sequence, a functional relation may exist, but other factors may be responsible for the observed values of the dependent variable. To investigate the possible existence of a functional relation, an experiment (or better, a series of experiments) must be performed in which the factor(s) suspected of having causal status are systematically controlled and manipulated while the effects on the event under study are carefully observed. In discussing the meaning of the term experimental, Dinsmoor (2003) noted that

two measures of behavior may be found to covary at a statistically significant level, but it is not thereby made clear which factor is the cause and which is the effect, or indeed whether the relation between the two is not the product of a third, confounded factor, with which both of them happened to covary. Suppose, for example, it is found that students with good grades have more dates than those with lower grades. Does this imply that high grades make people socially attractive? That dating is the royal road to academic success? That it pays to be smart? Or that financial security and free time contribute both to academic and to social success? (p. 152)
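Dinsmoor’s confounded-factor scenario can be made concrete with a small simulation: when a third variable drives two measures, the measures correlate even though neither causes the other. The variable names and numeric values below are hypothetical, chosen only to illustrate the logic, not drawn from any study.

```python
import random

random.seed(1)

def pearson(xs, ys):
    # Pearson correlation coefficient computed from scratch.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A confounding third factor (free time) influences both measures;
# neither measure influences the other.
free_time = [random.gauss(0, 1) for _ in range(1000)]
grades = [ft + random.gauss(0, 1) for ft in free_time]
dates = [ft + random.gauss(0, 1) for ft in free_time]

print(round(pearson(grades, dates), 2))  # substantial positive correlation
```

The correlation is real, but only an experiment that manipulates one factor while holding the others constant could show whether either variable has causal status.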

Reliably predicting and controlling any phenomena, including the presence of tigers in one’s backyard, requires the identification and manipulation of the factors that cause those phenomena to act as they do. One way that the individual described previously could use the experimental method to evaluate the effectiveness of his ritual would be to first move to a neighborhood in which tigers are regularly observed and then systematically manipulate the use of his antitiger ritual (e.g., one week on, one week off, one week on) while observing and recording the presence of tigers under the ritual and no-ritual conditions.

The experimental method is a method for isolating the relevant variables within a pattern of events. . . . [W]hen the experimental method is employed, it is possible to change one factor at a time (independent variable) while leaving all other aspects of the situation the same, and then to observe what effect this change has on the target behavior (dependent variable). Ideally, a functional relation may be obtained. Formal techniques of experimental control are designed to make sure that the conditions being compared are otherwise the same. Use of the experimental method serves as a necessary condition (sine qua non) to distinguish the experimental analysis of behavior from other methods of investigation. (Dinsmoor, 2003, p. 152)

Thus, an experiment is a carefully conducted comparison of some measure of the phenomenon of interest (the dependent variable) under two or more different conditions in which only one factor at a time (the independent variable) differs from one condition to another.
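The tiger-ritual anecdote and the one-factor-at-a-time logic above can be sketched as a simple reversal comparison. Everything in this sketch, including the condition sequence, the simulated observation function, and the probabilities, is hypothetical scaffolding for illustration.

```python
import random

random.seed(0)

def observe_tigers(ritual_active):
    # Hypothetical weekly observation: in this simulation the ritual has
    # no effect, so sightings follow the same distribution either way.
    return sum(random.random() < 0.3 for _ in range(7))

# Alternate the independent variable (ritual on/off) across weeks and
# record the dependent variable (tiger sightings) under each condition.
schedule = ["ritual", "no ritual", "ritual", "no ritual"]
results = {"ritual": [], "no ritual": []}
for condition in schedule:
    results[condition].append(observe_tigers(condition == "ritual"))

for condition in ("ritual", "no ritual"):
    print(condition, results[condition])
```

Comparable sightings under both conditions would argue against a functional relation between the ritual and tiger presence; a reliable difference, replicated across reversals, would support one.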

Replication

The results of a single experiment, no matter how well the study was designed and conducted and no matter how clear and impressive the findings, are never sufficient to earn an accepted place in the scientific knowledge base of any field. Although the data from a single experiment have value in their own right and cannot be discounted, only after an experiment has been replicated a number of times with the same basic pattern of results do scientists gradually become convinced of the findings.

Replication—the repeating of experiments (as well as repeating independent variable conditions within experiments)—“pervades every nook and cranny of the experimental method” (Johnston & Pennypacker, 1993a, p. 244). Replication is the primary method with which scientists determine the reliability and usefulness of their findings and discover their mistakes (Johnston & Pennypacker, 1980, 1993a; Sidman, 1960). Replication—not the infallibility or inherent honesty of scientists—is the primary reason science is a self-correcting enterprise that eventually gets it right (Skinner, 1953).

How many times must an experiment be repeated with the same results before the scientific community accepts the findings? There is no set number of replications required, but the greater the importance of the findings for either theory or practice, the greater the number of replications to be conducted.

Parsimony

One dictionary definition of parsimony is great frugality, and in a special way this connotation accurately describes the behavior of scientists. As an attitude of science, parsimony requires that all simple, logical explanations for the phenomenon under investigation be ruled out, experimentally or conceptually, before more complex or abstract explanations are considered. Parsimonious interpretations help scientists fit their findings within the field’s existing knowledge base. A fully parsimonious interpretation consists only of those elements that are necessary and sufficient to explain the phenomenon at hand. The attitude of parsimony is so critical to scientific explanations that it is sometimes referred to as the Law of Parsimony (Whaley & Surratt, 1968), a “law” derived from Occam’s Razor, credited to William of Occam (1285–1349), who stated: “One should not increase, beyond what is necessary, the number of entities required to explain anything” (Mole, 2003). In other words, given a choice between two competing and compelling explanations for the same phenomenon, one should shave off extraneous variables and choose the simplest explanation, the one that requires the fewest assumptions.

Philosophic Doubt

The attitude of philosophic doubt requires the scientist to continually question the truthfulness of what is regarded as fact. Scientific knowledge must always be viewed as tentative. Scientists must constantly be willing to set aside their most cherished beliefs and findings and replace them with the knowledge derived from new discoveries.

Good scientists maintain a healthy level of skepticism. Although being skeptical of others’ research may be easy, a more difficult but critical characteristic of scientists is that they remain open to the possibility—as well as look for evidence—that their own findings or interpretations are wrong. As Oliver Cromwell (1650) stated in another context: “I beseech you . . . think it possible you may be mistaken.” For the true scientist, “new findings are not problems; they are opportunities for further investigation and expanded understanding” (Todd & Morris, 1993, p. 1159).

Practitioners should be as skeptical as researchers. The skeptical practitioner not only requires scientific evidence before implementing a new practice, but also continually evaluates its effectiveness once the practice has been implemented. Practitioners must be particularly skeptical of extraordinary claims made for the effectiveness of new theories, therapies, or treatments (Jacobson, Foxx, & Mulick, 2005; Maurice, 2006).

Claims that sound too good to be true are usually just that. . . . Extraordinary claims require extraordinary evidence. (Sagan, 1996; Shermer, 1997)

What constitutes extraordinary evidence? In the strictest sense, and the sense that should be employed when evaluating claims of educational effectiveness, evidence is the outcome of the application of the scientific method to test the effectiveness of a claim, a theory, or a practice. The more rigorously the test is conducted, the more often the test is replicated, the more extensively the test is corroborated, the more extraordinary the evidence. Evidence becomes extraordinary when it is extraordinarily well tested. (Heward & Silvestri, 2005, p. 209)

We end our discussion of philosophic doubt with two pieces of advice, one from Carl Sagan, the other from B. F. Skinner: “The question is not whether we like the conclusion that emerges out of a train of reasoning, but whether the conclusion follows from the premise or starting point and whether that premise is true” (Sagan, 1996, p. 210). “Regard no practice as immutable. Change and be ready to change again. Accept no eternal verity. Experiment” (Skinner, 1979, p. 346).

Other Important Attitudes and Values

The six attitudes of science that we have examined are necessary features of science and provide an important context for understanding applied behavior analysis. However, the behavior of most productive and successful scientists is also characterized by qualities such as thoroughness, curiosity, perseverance, diligence, ethics, and honesty. Good scientists acquire these traits because behaving in such ways has proven beneficial to the progress of science.

2Informative and interesting descriptions of the history of behavior analysis can be found in Hackenberg (1995), Kazdin (1978), Michael (2004), Pierce and Epling (1999), Risley (1997, 2005), Sidman (2002), Skinner (1956, 1979), Stokes (2003), and in a special section of articles in the fall 2003 issue of The Behavior Analyst.

A Definition of Science

There is no universally accepted, standard definition of science. We offer the following definition as one that encompasses the previously discussed purposes and attitudes of science, irrespective of the subject matter. Science is a systematic approach to the understanding of natural phenomena—as evidenced by description, prediction, and control—that relies on determinism as its fundamental assumption, empiricism as its prime directive, experimentation as its basic strategy, replication as its necessary requirement for believability, parsimony as its conservative value, and philosophic doubt as its guiding conscience.

A Brief History of the Development of Behavior Analysis

Behavior analysis consists of three major branches. Behaviorism is the philosophy of the science of behavior, basic research is the province of the experimental analysis of behavior (EAB), and developing a technology for improving behavior is the concern of applied behavior analysis (ABA). Applied behavior analysis can be fully understood only in the context of the philosophy and basic research traditions and findings from which it evolved and to which it remains connected today. This section provides an elementary description of the basic tenets of behaviorism and outlines some of the major events that have marked the development of behavior analysis.2 Table 1 lists major books, journals, and professional organizations that have contributed to the advancement of behavior analysis since the 1930s.

Stimulus–Response Behaviorism of Watson

Psychology in the early 1900s was dominated by the study of states of consciousness, images, and other mental processes. Introspection, the act of carefully observing one’s own conscious thoughts and feelings, was a primary method of investigation. Although the authors of several texts in the first decade of the 20th century defined psychology as the science of behavior (see Kazdin, 1978), John B. Watson is widely recognized as the spokesman for a new direction in the field of psychology.


Table 1 Representative Selection of Books, Journals, and Organizations That Have Played a Major Role in the Development and Dissemination of Behavior Analysis

1930s
Books: The Behavior of Organisms—Skinner (1938)

1940s
Books: Walden Two—Skinner (1948)

1950s
Books: Principles of Psychology—Keller & Schoenfeld (1950); Science and Human Behavior—Skinner (1953); Schedules of Reinforcement—Ferster & Skinner (1957); Verbal Behavior—Skinner (1957)
Journals: Journal of the Experimental Analysis of Behavior (1958)
Organizations: Society for the Experimental Analysis of Behavior (SEAB) (1957)

1960s
Books: Tactics of Scientific Research—Sidman (1960); Child Development, Vols. I & II—Bijou & Baer (1961, 1965); The Analysis of Behavior—Holland & Skinner (1961); Research in Behavior Modification—Krasner & Ullmann (1965); Operant Behavior: Areas of Research and Application—Honig (1966); The Analysis of Human Operant Behavior—Reese (1966); Contingencies of Reinforcement: A Theoretical Analysis—Skinner (1969)
Journals: Journal of Applied Behavior Analysis (1968)
Organizations: American Psychological Association Division 25, Experimental Analysis of Behavior (1964); Experimental Analysis of Behaviour Group (UK) (1965)

1970s
Books: Beyond Freedom and Dignity—Skinner (1971); Elementary Principles of Behavior—Whaley & Malott (1971); Contingency Management in Education and Other Equally Exciting Places—Malott (1974); About Behaviorism—Skinner (1974); Applying Behavior-Analysis Procedures with Children and Youth—Sulzer-Azaroff & Mayer (1977); Learning—Catania (1979)
Journals: Behaviorism (1972; became Behavior and Philosophy in 1990); Revista Mexicana de Analisis de la Conducta (1975); Behavior Modification (1977); Journal of Organizational Behavior Management (1977); Education & Treatment of Children (1977); The Behavior Analyst (1978)
Organizations: Norwegian Association for Behavior Analysis (1973); Midwestern Association for Behavior Analysis (MABA) (1974); Mexican Society of Behavior Analysis (1975); Association for Behavior Analysis (formerly MABA) (1978)

1980s
Books: Strategies and Tactics of Human Behavioral Research—Johnston & Pennypacker (1980); Behaviorism: A Conceptual Reconstruction—Zuriff (1985); Recent Issues in the Analysis of Behavior—Skinner (1989)
Journals: Journal of Precision Teaching and Celeration (originally Journal of Precision Teaching) (1980); The Analysis of Verbal Behavior (1982); Behavioral Interventions (1986); Japanese Journal of Behavior Analysis (1986); Behavior Analysis Digest (1989)
Organizations: Society for the Advancement of Behavior Analysis (1980); Cambridge Center for Behavioral Studies (1981); Japanese Association for Behavior Analysis (1983)


In his influential article, “Psychology as the Behaviorist Views It,” Watson (1913) wrote:

Psychology as the behaviorist views it is a purely objective experimental branch of natural science. Its theoretical goal is the prediction and control of behavior. Introspection forms no essential part of its methods, nor is the scientific value of its data dependent upon the readiness with which they lend themselves to interpretation in terms of consciousness. (p. 158)

Watson argued that the proper subject matter for psychology was not states of mind or mental processes but observable behavior. Further, the objective study of behavior as a natural science should consist of direct observation of the relationships between environmental stimuli (S) and the responses (R) they evoke. Watsonian behaviorism thus became known as stimulus–response (S–R) psychology. Although there was insufficient scientific evidence to support S–R psychology as a workable explanation for most behavior, Watson was confident that his new behaviorism would indeed lead to the prediction and control of human behavior and that it would allow practitioners to improve performance in areas such as education, business, and law. Watson (1924) made bold claims concerning human behavior, as illustrated in this famous quotation:

Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select—doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors. I am going beyond my facts and I admit it, but so have the advocates of the contrary and they have been doing it for many thousands of years. (p. 104)

It is unfortunate that such extraordinary claims were made, exaggerating the ability to predict and control human behavior beyond the scientific knowledge available. The quotation just cited has been used to discredit Watson and continues to be used to discredit behaviorism in general, even though the behaviorism that underlies contemporary behavior analysis is fundamentally different from the S–R paradigm. Nevertheless, Watson’s contributions were of great significance: He made a strong case for the study of behavior as a natural science on a par with the physical and biological sciences.3

3For an interesting biography and scholarly examination of J. B. Watson’s contributions, see Todd and Morris (1994).

Table 1 (continued)

1990s
Books: Concepts and Principles of Behavior Analysis—Michael (1993); Radical Behaviorism: The Philosophy and the Science—Chiesa (1994); Equivalence Relations and Behavior—Sidman (1994); Functional Analysis of Problem Behavior—Repp & Horner (1999)
Journals: Behavior and Social Issues (1991); Journal of Behavioral Education (1991); Journal of Positive Behavior Interventions (1999); The Behavior Analyst Today (1999)
Organizations: Accreditation of Training Programs in Behavior Analysis (Association for Behavior Analysis) (1993); Behavior Analyst Certification Board (BACB) (1998); Council of Directors of Graduate Programs in Behavior Analysis (Association for Behavior Analysis) (1999)

2000s
Journals: European Journal of Behavior Analysis (2000); Behavioral Technology Today (2001); Behavioral Development Bulletin (2002); Journal of Early and Intensive Behavior Intervention (2004); Brazilian Journal of Behavior Analysis (2005); International Journal of Behavioral Consultation and Therapy (2005)
Organizations: First Board Certified Behavior Analysts (BCBA) and Board Certified Associate Behavior Analysts (BCABA) credentialed by the BACB (2000); European Association for Behaviour Analysis (2002)

B. F. Skinner (left) in his Indiana University lab circa 1945 and (right) circa 1967. B. F. Skinner Foundation (left); Julie S. Vargas/B. F. Skinner Foundation (right).

Experimental Analysis of Behavior

The experimental branch of behavior analysis formally began in 1938 with the publication of B. F. Skinner’s The Behavior of Organisms (1938/1966). The book summarized Skinner’s laboratory research conducted from 1930 to 1937 and brought into perspective two kinds of behavior: respondent and operant.

Respondent behavior is reflexive behavior as in the tradition of Ivan Pavlov (1927/1960). Respondents are elicited, or “brought out,” by stimuli that immediately precede them. The antecedent stimulus (e.g., bright light) and the response it elicits (e.g., pupil constriction) form a functional unit called a reflex. Respondent behaviors are essentially involuntary and occur whenever the eliciting stimulus is presented.

Skinner was “interested in giving a scientific account of all behavior, including that which Descartes had set aside as ‘willed’ and outside the reach of science” (Glenn, Ellis, & Greenspoon, 1992, p. 1330). But, like other psychologists of the time, Skinner found that the S–R paradigm could not explain a great deal of behavior, particularly behaviors for which there were no apparent antecedent causes in the environment. Compared to reflexive behavior with its clear eliciting events, much of the behavior of organisms appeared spontaneous or “voluntary.” In an attempt to explain the mechanisms responsible for “voluntary” behavior, other psychologists postulated mediating variables inside the organism in the form of hypothetical constructs such as cognitive processes, drives, and free will. Skinner took a different tack. Instead of creating hypothetical constructs, presumed but unobserved entities that could not be manipulated in an experiment, Skinner continued to look in the environment for the determinants of behavior that did not have apparent antecedent causes (Kimball, 2002; Palmer, 1998).

4In The Behavior of Organisms, Skinner called the conditioning of respondent behavior Type S conditioning and the conditioning of operant behavior Type R conditioning, but these terms were soon dropped. Respondent and operant conditioning and the three-term contingency are further defined and discussed in the next chapter, entitled “Basic Concepts.”

He did not deny that physiological variables played a role in determining behavior. He merely felt that this was the domain of other disciplines, and for his part, remained committed to assessing the causal role of the environment. This decision meant looking elsewhere in time. Through painstaking research, Skinner accumulated significant, if counterintuitive, evidence that behavior is changed less by the stimuli that precede it (though context is important) and more by the consequences that immediately follow it (i.e., consequences that are contingent upon it). The essential formulation for this notion is S–R–S, otherwise known as the three-term contingency. It did not replace the S–R model—we still salivate, for instance, if we smell food cooking when we are hungry. It did, however, account for how the environment “selects” the great part of learned behavior.

With the three-term contingency Skinner gave us a new paradigm. He achieved something no less profound for the study of behavior and learning than Bohr’s model of the atom or Mendel’s model of the gene. (Kimball, 2002, p. 71)
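As a rough illustration of the three-term contingency as a unit of analysis, a single operant occurrence can be recorded as an antecedent-response-consequence triple. The data structure, field names, and example strings here are hypothetical, introduced only to make the unit of analysis concrete.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Contingency:
    antecedent: str   # stimulus conditions in effect before the response
    response: str     # the operant behavior emitted
    consequence: str  # the stimulus change that immediately follows

# Hypothetical example from an operant chamber:
trial = Contingency(
    antecedent="lever present",
    response="lever press",
    consequence="food pellet delivered",
)
print(trial.response, "->", trial.consequence)
```

The point of the sketch is only that the unit of analysis includes the consequence; a strict S-R record, by contrast, would contain just the first two fields.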

Skinner called the second type of behavior operant behavior.4 Operant behaviors are not elicited by preceding stimuli but instead are influenced by stimulus changes that have followed the behavior in the past. Skinner’s most powerful and fundamental contribution to our understanding of behavior was his discovery and experimental analyses of the effects of consequences on behavior. The operant three-term contingency as the primary unit of analysis was a revolutionary conceptual breakthrough (Glenn, Ellis, & Greenspoon, 1992).

Figure 1 The first data set presented in B. F. Skinner’s The Behavior of Organisms: An Experimental Analysis (1938). Original Conditioning: All responses to the lever were reinforced. The first three reinforcements were apparently ineffective. The fourth is followed by a rapid increase in rate. (Axes: Responses by Time in Minutes.) From The Behavior of Organisms: An Experimental Analysis by B. F. Skinner, p. 67. Original copyright 1938 by Appleton-Century. Copyright 1991 by B. F. Skinner Foundation, Cambridge, MA. Used by permission.

5Most of the methodological features of the experimental approach pioneered by Skinner (e.g., repeated measurement of rate or frequency of response as the primary dependent variable, within-subject experimental comparisons, visual analysis of graphic data displays) continue to characterize both basic and applied research in behavior analysis.

Skinner (1938/1966) argued that the analysis of operant behavior “with its unique relation to the environment presents a separate important field of investigation” (p. 438). He named this new science the experimental analysis of behavior and outlined the methodology for its practice. Simply put, Skinner recorded the rate at which a single subject (he initially used rats and later, pigeons) emitted a given behavior in a controlled and standardized experimental chamber.

The first set of data Skinner presented in The Behavior of Organisms was a graph that “gives a record of the resulting change in behavior” (p. 67) when a food pellet was delivered immediately after a rat pressed a lever (see Figure 1). Skinner noted that the first three times that food followed a response “had no observable effect” but that “the fourth response was followed by an appreciable increase in rate showing a swift acceleration to a maximum” (pp. 67–68).
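A cumulative record such as Figure 1 plots, at each response, the total count of responses so far, so response rate appears as slope. The response times below are hypothetical, loosely patterned on the description of slow responding followed by acceleration; they are not Skinner’s data.

```python
# Hypothetical response times (in minutes), not Skinner's actual data:
# three widely spaced responses, then a burst of closely spaced ones.
response_times = [2.0, 15.5, 31.0, 47.0, 47.5, 48.1, 48.6, 49.0, 49.4, 49.9]

# A cumulative record pairs each response time with the running total.
cumulative = [(t, count) for count, t in enumerate(response_times, start=1)]

for t, count in cumulative:
    print(f"{t:5.1f} min -> {count} responses")
```

Plotted, the first three points would form a nearly flat line and the later points a steep one; the change in slope is the “appreciable increase in rate” Skinner described.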

Skinner’s investigative procedures evolved into an elegant experimental approach that enabled clear and powerful demonstrations of orderly and reliable functional relations between behavior and various types of environmental events.5 By systematically manipulating the arrangement and scheduling of stimuli that preceded and followed behavior in literally thousands of laboratory experiments from the 1930s through the 1950s, Skinner and his colleagues and students discovered and verified the basic principles of operant behavior that continue to provide the empirical foundation for behavior analysis today. Description of these principles of behavior—general statements of functional relations between behavior and environmental events—and tactics for changing behavior derived from those principles constitute a major portion of this text.

6Skinner, who many consider the most eminent psychologist of the 20th century (Haggbloom et al., 2002), authored or coauthored 291 primary-source works (see Morris & Smith, 2003, for a complete bibliography). In addition to Skinner’s three-volume autobiography (Particulars of My Life, 1976; The Shaping of a Behaviorist, 1979; A Matter of Consequences, 1983), numerous biographical books and articles have been written about Skinner, both before and after his death. Students interested in learning about Skinner should read B. F. Skinner: A Life by Daniel Bjork (1997), B. F. Skinner, Organism by Charles Catania (1992), Burrhus Frederic Skinner (1904–1990): A Thank You by Fred Keller (1990), B. F. Skinner—The Last Few Days by his daughter Julie Vargas (1990), and Skinner as Self-Manager by Robert Epstein. Skinner’s contributions to ABA are described by Morris, Smith, and Altus (2005).

Skinner’s Radical Behaviorism

In addition to being the founder of the experimental analysis of behavior, B. F. Skinner wrote extensively on the philosophy of that science.6 Without question, Skinner’s writings have been the most influential both in guiding the practice of the science of behavior and in proposing the application of the principles of behavior to new areas. In 1948 Skinner published Walden Two, a fictional account of how the philosophy and principles of behavior might be used in a utopian community. This was followed by his classic text, Science and Human Behavior (1953), in which he speculated on how the principles of behavior might be applied to complex human behavior in areas such as education, religion, government, law, and psychotherapy.

Much of Skinner’s writing was devoted to the development and explanation of his philosophy of behaviorism. Skinner began his book About Behaviorism (1974) with these words:

Behaviorism is not the science of human behavior; it is the philosophy of that science. Some of the questions it asks are these: Is such a science really possible? Can it account for every aspect of human behavior? What methods can it use? Are its laws as valid as those of physics and biology? Will it lead to a technology, and if so, what role will it play in human affairs? (p. 1)

The behaviorism that Skinner pioneered differed significantly (indeed, radically) from other psychological theories, including other forms of behaviorism. Although there were, and remain today, many psychological models and approaches to the study of behavior, mentalism is the common denominator among most.

In general terms, mentalism may be defined as an approach to the study of behavior which assumes that a mental or “inner” dimension exists that differs from a behavioral dimension. This dimension is ordinarily referred to in terms of its neural, psychic, spiritual, subjective, conceptual, or hypothetical properties. Mentalism further assumes that phenomena in this dimension either directly cause or at least mediate some forms of behavior, if not all. These phenomena are typically designated as some sort of act, state, mechanism, process, or entity that is causal in the sense of initiating or originating. Mentalism regards concerns about the origin of these phenomena as incidental at best. Finally, mentalism holds that an adequate causal explanation of behavior must appeal directly to the efficacy of these mental phenomena. (Moore, 2003, pp. 181–182)

Hypothetical constructs and explanatory fictions are the stock-in-trade of mentalism, which has dominated Western intellectual thought and most psychological theories (Descartes, Freud, Piaget), and it continues to do so into the 21st century. Freud, for example, created a complex mental world of hypothetical constructs—the id, ego, and superego—that he contended were key to understanding a person’s actions.

Hypothetical constructs—“theoretical terms that refer to a possibly existing, but at the moment unobserved process or entity” (Moore, 1995, p. 36)—can neither be observed nor experimentally manipulated (MacCorquodale & Meehl, 1948; Zuriff, 1985). Free will, readiness, innate releasers, language acquisition devices, storage and retrieval mechanisms for memory, and information processing are all examples of hypothetical constructs that are inferred from behavior. Although Skinner (1953, 1974) clearly indicated that it is a mistake to rule out events that influence our behavior because they are not accessible to others, he believed that using presumed but unobserved mentalistic fictions (i.e., hypothetical constructs) to explain the causes of behavior contributed nothing to a functional account.

Consider a typical laboratory situation. A food-deprived rat pushes a lever each time a light comes on and receives food, but the rat seldom pushes the lever when the light is off (and if it does, no food is delivered).

When asked to explain why the rat pushes the lever only when the light is on, most will say that the rat has “made the association” between the light being on and food being delivered when the lever is pressed. As a result of making that association, the animal now “knows” to press the lever only when the light is on. Attributing the rat’s behavior to a hypothetical cognitive process such as associating or to something called “knowledge” adds nothing to a functional account of the situation. First, the environment (in this case, the experimenter) paired the light and food availability for lever presses, not the rat. Second, the knowledge or other cognitive process that is said to explain the observed behavior is itself unexplained, which begs for still more conjecture.

The “knowledge” that is said to account for the rat’s performance is an example of an explanatory fiction, a fictitious variable that is often simply another name for the observed behavior and that contributes nothing to an understanding of the variables responsible for developing or maintaining the behavior. Explanatory fictions are the key ingredient in “a circular way of viewing the cause and effect of a situation” (Heron, Tincani, Peterson, & Miller, 2005, p. 274) that gives a false sense of understanding.

Turning from observed behavior to a fanciful inner world continues unabated. Sometimes it is little more than a linguistic practice. We tend to make nouns of adjectives and verbs and must then find a place for the things the nouns are said to represent. We say that a rope is strong and before long we are speaking of its strength. We call a particular kind of strength tensile, and then explain that the rope is strong because it possesses tensile strength. The mistake is less obvious but more troublesome when matters are more complex. . . .

Consider now a behavioral parallel. When a person has been subject to mildly punishing consequences in walking on a slippery surface, he may walk in a manner we describe as cautious. It is then easy to say that he walks with caution or that he shows caution. There is no harm in this until we begin to say that he walks carefully because of his caution. (Skinner, 1974, pp. 165–166, emphasis added)

Some believe that behaviorism rejects all events that cannot be operationally defined by objective assessment. Accordingly, Skinner is thought to have rejected all data from his system that could not be independently verified by other people (Moore, 1984). Moore (1985) called this operational view “a commitment to truth by agreement” (p. 59). This common view of the philosophy of behaviorism is limited; in reality, there are many kinds of behaviorism—structuralism, methodological behaviorism, and forms of behaviorism that use cognitions as causal factors (e.g., cognitive behavior modification and social learning theory), in addition to the radical behaviorism of Skinner.

Definition and Characteristics of Applied Behavior Analysis


Structuralism and methodological behaviorism do reject all events that are not operationally defined by objective assessment (Skinner, 1974). Structuralists avoid mentalism by restricting their activities to descriptions of behavior. They make no scientific manipulations; accordingly, they do not address questions of causal factors. Methodological behaviorists differ from the structuralists by using scientific manipulations to search for functional relations between events. Uncomfortable with basing their science on unobservable phenomena, some early behaviorists either denied the existence of “inner variables” or considered them outside the realm of a scientific account. Such an orientation is often referred to as methodological behaviorism.

Methodological behaviorists also usually acknowledge the existence of mental events but do not consider them in the analysis of behavior (Skinner, 1974). Methodological behaviorists’ reliance on public events, excluding private events, restricts the knowledge base of human behavior and discourages innovation in the science of behavior. Methodological behaviorism is restrictive because it ignores areas of major importance for an understanding of behavior.

Contrary to popular opinion, Skinner did not object to cognitive psychology’s concern with private events (i.e., events taking place “inside the skin”) (Moore, 2000). Skinner was the first behaviorist to view thoughts and feelings (he called them “private events”) as behavior to be analyzed with the same conceptual and experimental tools used to analyze publicly observable behavior, not as phenomena or variables that exist within and operate via principles of a separate mental world.

Essentially, Skinner’s behaviorism makes three major assumptions regarding the nature of private events: (a) Private events such as thoughts and feelings are behavior; (b) behavior that takes place within the skin is distinguished from other (“public”) behavior only by its inaccessibility; and (c) private behavior is influenced by (i.e., is a function of) the same kinds of variables as publicly accessible behavior.

We need not suppose that events which take place within an organism’s skin have special properties for that reason. A private event may be distinguished by its limited accessibility but not, so far as we know, by any special structure or nature. (Skinner, 1953, p. 257)

By incorporating private events into an overall conceptual system of behavior, Skinner created a radical behaviorism that includes and seeks to understand all human behavior. “What is inside the skin, and how do we know about it? The answer is, I believe, the heart of radical behaviorism” (Skinner, 1974, p. 218). The proper connotations of the word radical in radical behaviorism are far-reaching and thoroughgoing, connoting the philosophy’s inclusion of all behavior, public and private. Radical is also an appropriate modifier for Skinner’s form of behaviorism because it represents a dramatic departure from other conceptual systems in calling for

probably the most drastic change ever proposed in our way of thinking about man. It is almost literally a matter of turning the explanation of behavior inside out. (Skinner, 1974, p. 256)

Skinner and the philosophy of radical behaviorism acknowledge the events on which fictions such as cognitive processes are based. Radical behaviorism does not restrict the science of behavior to phenomena that can be detected by more than one person. In the context of radical behaviorism, the term observe implies “coming into contact with” (Moore, 1984). Radical behaviorists consider private events such as thinking or sensing the stimuli produced by a damaged tooth to be no different from public events such as oral reading or sensing the sounds produced by a musical instrument. According to Skinner (1974), “What is felt or introspectively observed is not some nonphysical world of consciousness, mind, or mental life but the observer’s own body” (pp. 18–19).

The acknowledgment of private events is a major aspect of radical behaviorism. Moore (1980) stated it concisely:

For radical behaviorism, private events are those events wherein individuals respond with respect to certain stimuli accessible to themselves alone. . . . The responses that are made to those stimuli may themselves be public, i.e., observable by others, or they may be private, i.e., accessible only to the individual involved. Nonetheless, to paraphrase Skinner (1953), it need not be supposed that events taking place within the skin have any special properties for that reason alone. . . . For radical behaviorism, then, one’s responses with respect to private stimuli are equally lawful and alike in kind to one’s responses with respect to public stimuli. (p. 460)

Scientists and practitioners are affected by their own social context, and institutions and schools are dominated by mentalism (Heward & Cooper, 1992; Kimball, 2002). A firm grasp of the philosophy of radical behaviorism, in addition to knowledge of principles of behavior, can help the scientist and practitioner resist the mentalistic approach of dropping the search for controlling variables in the environment and drifting toward explanatory fictions in the effort to understand behavior. The principles of behavior and the procedures presented in this text apply equally to public and private events.


A thorough discussion of radical behaviorism is far beyond the scope of this text. Still, the serious student of applied behavior analysis should devote considerable study to the original works of Skinner and to other authors who have critiqued, analyzed, and extended the philosophical foundations of the science of behavior.7 (See Box 1 for Don Baer’s perspectives on the meaning and importance of radical behaviorism.)

Applied Behavior Analysis

One of the first studies to report the human application of principles of operant behavior was conducted by Fuller (1949). The subject was an 18-year-old boy with profound developmental disabilities who was described in the language of the time as a “vegetative idiot.” He lay on his back, unable to roll over. Fuller filled a syringe with a warm sugar-milk solution and injected a small amount of the fluid into the young man’s mouth every time he moved his right arm (that arm was chosen because he moved it infrequently). Within four sessions the boy was moving his arm to a vertical position at a rate of three times per minute.

The attending physicians . . . thought it was impossible for him to learn anything—according to them, he had not learned anything in the 18 years of his life—yet in four experimental sessions, by using the operant conditioning technique, an addition was made to his behavior which, at this level, could be termed appreciable. Those who participated in or observed the experiment are of the opinion that if time permitted, other responses could be conditioned and discriminations learned. (Fuller, 1949, p. 590)

During the 1950s and into the early 1960s researchers used the methods of the experimental analysis of behavior to determine whether the principles of behavior demonstrated in the laboratory with nonhuman subjects could be replicated with humans. Much of the early research with human subjects was conducted in clinic or laboratory settings. Although the participants typically benefited from these studies by learning new behaviors, the researchers’ major purpose was to determine whether the basic principles of behavior discovered in the laboratory operated with humans. For example, Bijou (1955, 1957, 1958) researched several principles of behavior with typically developing subjects and people with mental retardation; Baer (1960, 1961, 1962) examined the effects of punishment, escape, and avoidance contingencies on preschool children; Ferster and DeMyer (1961, 1962; DeMyer & Ferster, 1962) conducted a systematic study of the principles of behavior using children with autism as subjects; and Lindsley (1956, 1960) assessed the effects of operant conditioning on the behavior of adults with schizophrenia. These early researchers clearly established that the principles of behavior are applicable to human behavior, and they set the stage for the later development of applied behavior analysis.

7 Excellent discussions of radical behaviorism can be found in Baum (1994); Catania and Harnad (1988); Catania and Hineline (1996); Chiesa (1994); Lattal (1992); Lee (1988); and Moore (1980, 1984, 1995, 2000, 2003).

The branch of behavior analysis that would later be called applied behavior analysis (ABA) can be traced to the 1959 publication of Ayllon and Michael’s paper titled “The Psychiatric Nurse as a Behavioral Engineer.” The authors described how direct care personnel in a state hospital used a variety of techniques based on the principles of behavior to improve the functioning of residents with psychotic disorders or mental retardation. During the 1960s many researchers began to apply principles of behavior in an effort to improve socially important behavior, but these early pioneers faced many problems. Laboratory techniques for measuring behavior and for controlling and manipulating variables were sometimes unavailable, or their use was inappropriate in applied settings. As a result, the early practitioners of applied behavior analysis had to develop new experimental procedures as they went along. There was little funding for the new discipline, and researchers had no ready outlet for publishing their studies, making it difficult to communicate among themselves about their findings and solutions to methodological problems. Most journal editors were reluctant to publish studies using an experimental method unfamiliar to mainstream social science, which relied on large numbers of subjects and tests of statistical inference.

Despite these problems it was an exciting time, and major new discoveries were being made regularly. For example, many pioneering applications of behavior principles to education occurred during this period (see, e.g., O’Leary & O’Leary, 1972; Ulrich, Stachnik, & Mabry, 1974), from which were derived teaching procedures such as contingent teacher praise and attention (Hall, Lund, & Jackson, 1968), token reinforcement systems (Birnbrauer, Wolf, Kidder, & Tague, 1965), curriculum design (Becker, Engelmann, & Thomas, 1975), and programmed instruction (Bijou, Birnbrauer, Kidder, & Tague, 1966; Markle, 1962). The basic methods for reliably improving student performance developed by those early applied behavior analysts provided the foundation for behavioral approaches to curriculum design, instructional methods, classroom management, and the generalization and maintenance of learning that continue to be used decades later (cf. Heward et al., 2005).

University programs in applied behavior analysis were begun in the 1960s and early 1970s at Arizona State University, Florida State University, the University of Illinois, Indiana University, the University of Kansas, the University of Oregon, the University of Southern Illinois, the University of Washington, West Virginia University, and Western Michigan University, among others. Through their teaching and research, faculty at each of these programs made major contributions to the rapid growth of the field.8

Box 1
What Is Behaviorism?

Don Baer loved the science of behavior. He loved to write about it, and he loved to talk about it. Don was famous for his unparalleled ability to speak extemporaneously about complex philosophical, experimental, and professional issues in a way that always made thorough conceptual, practical, and human sense. He did so with the vocabulary and syntax of a great author and the accomplished delivery of a master storyteller. The only thing Don knew better than his audience was his science.

On three occasions, in three different decades, graduate students and faculty in the special education program at Ohio State University were fortunate to have Professor Baer serve as Distinguished Guest Faculty for a doctoral seminar, Contemporary Issues in Special Education and Applied Behavior Analysis. The questions and responses that follow were selected from transcripts of two of Professor Baer’s three OSU teleconference seminars.

If a person on the street approached you and asked, “What’s behaviorism?” how would you reply?

The key point of behaviorism is that what people do can be understood. Traditionally, both the layperson and the psychologist have tried to understand behavior by seeing it as the outcome of what we think, what we feel, what we want, what we calculate, and etcetera. But we don’t have to think about behavior that way. We could look upon it as a process that occurs in its own right and has its own causes. And those causes are, very often, found in the external environment.

Behavior analysis is a science of studying how we can arrange our environments so they make very likely the behaviors we want to be probable enough, and they make unlikely the behaviors we want to be improbable. Behaviorism is understanding how the environment works so that we can make ourselves smarter, more organized, more responsible; so we can encounter fewer punishments and fewer disappointments. A central point of behaviorism is this: We can remake our environment to accomplish some of that much more easily than we can remake our inner selves.

An interviewer once asked Edward Teller, the physicist who helped develop the first atomic bomb, “Can you explain to a nonscientist what you find so fascinating about science, particularly physics?” Teller replied, “No.” I sense that Teller was suggesting that a nonscientist would not be able to comprehend, understand, or appreciate physics and his fascination with it. If a nonscientist asked you, “What do you find so fascinating about science, particularly the science of human behavior?” what would you say?

Ed Morris organized a symposium on just this topic a couple of years ago at the Association for Behavior Analysis annual convention, and in that symposium, Jack Michael commented on the fact that although one of our discipline’s big problems and challenges is communicating with our society about who we are, what we do, and what we can do, he didn’t find it reasonable to try to summarize what behavior analysis is to an ordinary person in just a few words. He gave us this example: Imagine a quantum physicist is approached at a cocktail party by someone who asks, “What is quantum physics?” Jack said that the physicist might very well answer, and probably should answer, “I can’t tell you in a few words. You should register for my course.”

I’m very sympathetic with Jack’s argument. But I also know, as someone who’s confronted with the politics of relating our discipline to society, that although it may be a true answer, it’s not a good answer. It’s not an answer that people will hear with any pleasure, or indeed, even accept. I think such an answer creates only resentment. Therefore, I think we have to engage in a bit of honest show business. So, if I had to somehow state some connotations of what holds me in the field, I guess I would say that since I was a child I always found my biggest reinforcer was something called understanding. I liked to know how things worked. And of all of the things in the world there are to understand, it became clear to me that the most fascinating was what people do. I started with the usual physical science stuff, and it was intriguing to me to understand how radios work, and how electricity works, and how clocks work, etcetera. But when it became clear to me that we could also learn how people work—not just biologically, but behaviorally—I thought that’s the best of all. Surely, everyone must agree that that’s the most fascinating subject matter. That there could be a science of behavior, of what we do, of who we are? How could you resist that?

Adapted from “Thursday Afternoons with Don: Selections from Three Teleconference Seminars on Applied Behavior Analysis” by W. L. Heward & C. L. Wood (2003). In K. S. Budd & T. Stokes (Eds.), A Small Matter of Proof: The Legacy of Donald M. Baer (pp. 293–310). Reno, NV: Context Press. Used by permission.

Two significant events in 1968 mark that year as the formal beginning of contemporary applied behavior analysis. First, the Journal of Applied Behavior Analysis (JABA) began publication. JABA was the first journal in the United States to deal with applied problems that gave researchers using methodology from the experimental analysis of behavior an outlet for publishing their findings. JABA was and continues to be the flagship journal of applied behavior analysis. Many of the early articles in JABA became model demonstrations of how to conduct and interpret applied behavior analysis, which in turn led to improved applications and experimental methodology.

The second major event of 1968 was the publication of the paper “Some Current Dimensions of Applied Behavior Analysis” by Donald M. Baer, Montrose M. Wolf, and Todd R. Risley. These authors, the founding fathers of the new discipline, defined the criteria for judging the adequacy of research and practice in applied behavior analysis and outlined the scope of work for those in the science. Their 1968 paper has been the most widely cited publication in applied behavior analysis, and it remains the standard description of the discipline.

Defining Characteristics of Applied Behavior Analysis

Baer, Wolf, and Risley (1968) recommended that applied behavior analysis should be applied, behavioral, analytic, technological, conceptually systematic, effective, and capable of appropriately generalized outcomes. In 1987 Baer and colleagues reported that the “seven self-conscious guides to behavior analytic conduct” (p. 319) they had offered 20 years earlier “remain functional; they still connote the current dimensions of the work usually called applied behavior analysis” (p. 314). As we write this book, nearly 40 years have passed since Baer, Wolf, and Risley’s seminal paper was published, and we find that the seven dimensions they posed continue to serve as the primary criteria for defining and judging the value of applied behavior analysis.

8 Articles describing the histories of the applied behavior analysis programs at five of these universities can be found in the winter 1994 issue of JABA.

Applied

The applied in applied behavior analysis signals ABA’s commitment to effecting improvements in behaviors that enhance and improve people’s lives. To meet this criterion, the researcher or practitioner must select behaviors to change that are socially significant for participants: social, language, academic, daily living, self-care, vocational, and/or recreation and leisure behaviors that improve the day-to-day life experience of the participants and/or affect their significant others (parents, teachers, peers, employers) in such a way that they behave more positively with and toward the participant.

Behavioral

At first it may seem superfluous to include such an obvious criterion—of course applied behavior analysis must be behavioral. However, Baer and colleagues (1968) made three important points relative to the behavioral criterion. First, not just any behavior will do; the behavior chosen for study must be the behavior in need of improvement, not a similar behavior that serves as a proxy for the behavior of interest or the subject’s verbal description of the behavior. Behavior analysts conduct studies of behavior, not studies about behavior. For example, in a study evaluating the effects of a program to teach school children to get along with one another, an applied behavior analyst would directly observe and measure clearly defined classes of interactions between and among the children instead of using indirect measures such as the children’s answers on a sociogram or responses to a questionnaire about how they believe they get along with one another.

Second, the behavior must be measurable; the precise and reliable measurement of behavior is just as critical in applied research as it is in basic research. Applied researchers must meet the challenge of measuring socially significant behaviors in their natural settings, and they must do so without resorting to the measurement of nonbehavioral substitutes.

Third, when changes in behavior are observed during an investigation, it is necessary to ask whose behavior has changed. Perhaps only the behavior of the observers has changed. “Explicit measurement of the reliability of human observers thus becomes not merely good technique, but a prime criterion of whether the study was appropriately behavioral” (Baer et al., 1968, p. 93). Or perhaps the experimenter’s behavior has changed in an unplanned way, making it inappropriate to attribute any observed change in the subject’s behavior to the independent variables that were manipulated. The applied behavior analyst should attempt to monitor the behavior of all persons involved in a study.
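The observer reliability described above is commonly quantified as interobserver agreement (IOA). As a minimal sketch, assuming interval recording and invented observation data (the function name and records below are hypothetical, for illustration only), an interval-by-interval agreement score can be computed like this:

```python
def interval_ioa(observer_a, observer_b):
    """Interval-by-interval interobserver agreement (IOA).

    Each list holds one True/False entry per observation interval,
    indicating whether that observer scored the target behavior.
    Returns percent agreement: agreed intervals / total intervals * 100.
    """
    if len(observer_a) != len(observer_b):
        raise ValueError("Observers must score the same number of intervals")
    agreements = sum(a == b for a, b in zip(observer_a, observer_b))
    return 100.0 * agreements / len(observer_a)

# Hypothetical records from two independent observers over 10 intervals.
obs_a = [True, True, False, True, False, False, True, True, False, True]
obs_b = [True, True, False, False, False, False, True, True, False, True]

print(interval_ioa(obs_a, obs_b))  # 9 of 10 intervals agree -> 90.0
```

The calculation itself (agreements divided by total intervals, times 100) is the standard interval-by-interval IOA formula; measurement and IOA are treated in detail later in this text.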


Analytic

A study in applied behavior analysis is analytic when the experimenter has demonstrated a functional relation between the manipulated events and a reliable change in some measurable dimension of the targeted behavior. In other words, the experimenter must be able to control the occurrence and nonoccurrence of the behavior. Sometimes, however, society does not allow the repeated manipulation of important behaviors to satisfy the requirements of experimental method. Therefore, applied behavior analysts must demonstrate control to the greatest extent possible, given the restraints of the setting and behavior; and then they must present the results for judgment by the consumers of the research. The ultimate issue is believability: Has the researcher achieved experimental control to demonstrate a reliable functional relation?
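The logic of controlling the occurrence and nonoccurrence of behavior can be sketched computationally. The toy data below are invented for illustration (not from any actual study), and the phase labels assume an A-B-A-B reversal arrangement, one common way such control is demonstrated:

```python
# Toy illustration of reversal-design logic: responding should change
# each time the independent variable is introduced (B) or withdrawn (A).
phases = {
    "A1": [2, 3, 2, 3],    # baseline
    "B1": [8, 9, 10, 9],   # intervention
    "A2": [3, 2, 3, 3],    # return to baseline
    "B2": [9, 10, 9, 10],  # intervention reinstated
}

def mean(xs):
    return sum(xs) / len(xs)

means = {phase: mean(data) for phase, data in phases.items()}

# Control is suggested when responding in each intervention phase
# exceeds responding in each adjacent baseline phase.
shows_control = (means["B1"] > means["A1"] and
                 means["A2"] < means["B1"] and
                 means["B2"] > means["A2"])
print(shows_control)  # True for this invented data set
```

In practice behavior analysts judge such demonstrations by visual inspection of graphed data rather than by comparing phase means; the comparison here only mirrors the underlying reasoning.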

The analytic dimension enables ABA not only to demonstrate effectiveness but also to provide the “acid test proof” of functional and replicable relations between the interventions it recommends and socially significant outcomes (D. M. Baer, personal communication, October 21, 1982).

Because we are a data- and design-based discipline, we are in the remarkable position of being able to prove that behavior can work in the way that our technology prescribes. We are not theorizing about how behavior can work; we are describing systematically how it has worked many times in real-world applications, in designs too competent and with measurement systems too reliable and valid to doubt. Our ability to prove that behavior can work that way does not, of course, establish that behavior cannot work any other way: we are not in a discipline that can deny any other approaches, only in one that can affirm itself as knowing many of its sufficient conditions at the level of experimental proof . . . our subject matter is behavior change, and we can specify some actionable sufficient conditions for it. (D. M. Baer, personal communication, October 21, 1982, emphasis in original)

Technological

A study in applied behavior analysis is technological when all of its operative procedures are identified and described with sufficient detail and clarity “such that a reader has a fair chance of replicating the application with the same results” (Baer et al., 1987, p. 320).

It is not enough to say what is to be done when the subject makes response R1; it is essential also whenever possible to say what is to be done if the subject makes the alternative responses, R2, R3, etc. For example, one may read that temper tantrums in children are often extinguished by closing the child in his room for the duration of the tantrums plus ten minutes. Unless that procedure description also states what should be done if the child tries to leave the room early, or kicks out the window, or smears feces on the walls, or begins to make strangling sounds, etc., it is not precise technological description. (Baer et al., 1968, pp. 95–96)

No matter how powerful its effects in any given study, a behavior change method will be of little value if practitioners are unable to replicate it. The development of a replicable technology of behavior change has been a defining characteristic and continuing goal of ABA from its inception. Behavioral tactics are replicable and teachable to others. Interventions that cannot be replicated with sufficient fidelity to achieve comparable outcomes are not considered part of the technology.

A good check of the technological adequacy of a procedural description is to have a person trained in applied behavior analysis carefully read the description and then act out the procedure in detail. If the person makes any mistakes, adds any operations, omits any steps, or has to ask any questions to clarify the written description, then the description is not sufficiently technological and requires improvement.

Conceptually Systematic

Although Baer and colleagues (1968) did not state so explicitly, a defining characteristic of applied behavior analysis concerns the types of interventions used to improve behavior. Although there are an infinite number of tactics and specific procedures that can be used to alter behavior, almost all are derivatives and/or combinations of a relatively few basic principles of behavior. Thus, Baer and colleagues recommended that research reports of applied behavior analysis be conceptually systematic, meaning that the procedures for changing behavior and any interpretations of how or why those procedures were effective should be described in terms of the relevant principle(s) from which they were derived.

Baer and colleagues (1968) provided a strong rationale for the use of conceptual systems in applied behavior analysis. First, relating specific procedures to basic principles might enable the research consumer to derive other similar procedures from the same principle(s). Second, conceptual systems are needed if a technology is to become an integrated discipline instead of a "collection of tricks." Loosely related collections of tricks do not lend themselves to systematic expansion, and they are difficult to learn and to teach in great number.

Effective

An effective application of behavioral techniques must improve the behavior under investigation to a practical degree. "In application, the theoretical importance of a variable is usually not at issue. Its practical importance, specifically its power in altering behavior enough to be socially important, is the essential criterion" (Baer et al., 1968, p. 96). Whereas some investigations produce results of theoretical importance or statistical significance, to be judged effective an applied behavior analysis study must produce behavior changes that reach clinical or social significance.

How much a given behavior of a given subject needs to change for the improvement to be considered socially important is a practical question. Baer and colleagues stated that the answer is most likely to come from the people who must deal with the behavior; they should be asked how much the behavior needs to change. The necessity of producing behavioral changes that are meaningful to the participant and/or those in the participant's environment has pushed behavior analysts to search for "robust" variables, interventions that produce large and consistent effects on behavior (Baer, 1977a).

When they revisited the dimension of effectiveness 20 years later, Baer, Wolf, and Risley (1987) recommended that the effectiveness of ABA also be judged by a second kind of outcome: the extent to which changes in the target behaviors result in noticeable changes in the reasons those behaviors were selected for change originally. If such changes in the subjects' lives do not occur, ABA may achieve one level of effectiveness yet fail to achieve a critical form of social validity (Wolf, 1978).

We may have taught many social skills without examining whether they actually furthered the subject's social life; many courtesy skills without examining whether anyone actually noticed or cared; many safety skills without examining whether the subject was actually safer thereafter; many language skills without measuring whether the subject actually used them to interact differently than before; many on-task skills without measuring the actual value of those tasks; and, in general, many survival skills without examining the subject's actual subsequent survival. (Baer et al., 1987, p. 322)

Generality

A behavior change has generality if it lasts over time, appears in environments other than the one in which the intervention that initially produced it was implemented, and/or spreads to other behaviors not directly treated by the intervention. A behavior change that continues after the original treatment procedures are withdrawn has generality. And generality is evident when changes in targeted behavior occur in nontreatment settings or situations as a function of treatment procedures. Generality also exists when behaviors change that were not the focus of the intervention. Although not all instances of generality are adaptive (e.g., a beginning reader who has just learned to make the sound for the letter p in words such as pet and ripe might make the same sound when seeing the letter p in the word phone), desirable generalized behavior changes are important outcomes of an applied behavior analysis program because they represent additional dividends in terms of behavioral improvement.

Some Additional Characteristics of ABA

Applied behavior analysis offers society an approach toward solving problems that is accountable, public, doable, empowering, and optimistic (Heward, 2005). Although not among ABA's defining dimensions, these characteristics should help increase the extent to which decision makers and consumers in many areas look to behavior analysis as a valuable and important source of knowledge for achieving improvements.

Accountable

The commitment of applied behavior analysts to effectiveness, their focus on accessible environmental variables that reliably influence behavior, and their reliance on direct and frequent measurement to detect changes in behavior yield an inescapable and socially valuable form of accountability. Direct and frequent measurement—the foundation and most important component of ABA practices—enables behavior analysts to detect their successes and, equally important, their failures so they can make changes in an effort to change failure to success (Bushell & Baer, 1994; Greenwood & Maheady, 1997).

Failure is always informative in the logic of behavior analysis, just as it is in engineering. The constant reaction to lack of progress [is] a definitive hallmark of ABA. (Baer, 2005, p. 8)

Gambrill (2003) described the sense of accountability and self-correcting nature of applied behavior analysis very well.

Applied behavior analysis is a scientific approach to understanding behavior in which we guess and critically test ideas, rather than guess and guess again. It is a process for solving problems in which we learn from our mistakes. Here, false knowledge and inert knowledge are not valued. (p. 67)

Public

"Everything about ABA is visible and public, explicit and straightforward. . . . ABA entails no ephemeral, mystical, or metaphysical explanations; there are no hidden treatments; there is no magic" (Heward, 2005, p. 322). The transparent, public nature of ABA should raise its value in fields such as education, parenting and child care, employee productivity, geriatrics, health and safety, and social work—to name only a few—whose goals, methods, and outcomes are of vital interest to many constituencies.

Doable

Classroom teachers, parents, coaches, workplace supervisors, and sometimes the participants themselves implemented the interventions found effective in many ABA studies. This demonstrates the pragmatic element of ABA. "Although 'doing ABA' requires far more than learning to administer a few simple procedures, it is not prohibitively complicated or arduous. As many teachers have noted, implementing behavioral strategies in the classroom . . . might best be described as good old-fashioned hard work" (Heward, 2005, p. 322).

Empowering

ABA gives practitioners real tools that work. Knowing how to do something and having the tools to accomplish it instills confidence in practitioners. Seeing the data showing behavioral improvements in one's clients, students, or teammates, or in oneself, not only feels good, but also raises one's confidence level in assuming even more difficult challenges in the future.

Optimistic

Practitioners knowledgeable and skilled in behavior analysis have genuine cause to be optimistic for four reasons. First, as Strain and Joseph (2004) noted:

The environmental view promoted by behaviorism is essentially optimistic; it suggests that (except for gross genetic factors) all individuals possess roughly equal potential. Rather than assuming that individuals have some essential internal characteristic, behaviorists assume that poor outcomes originate in the way the environment and experience shaped the individual's current behavior. Once these environmental and experiential factors are identified, we can design prevention and intervention programs to improve the outcomes. . . . Thus, the emphasis on external control in the behavioral approach . . . offers a conceptual model that celebrates the possibilities for each individual. (Strain et al., 1992, p. 58)

Second, direct and continuous measurement enables practitioners to detect small improvements in performance that might otherwise be overlooked. Third, the more often a practitioner uses behavioral tactics with positive outcomes (the most common result of behaviorally based interventions), the more optimistic she becomes about the prospects for future success.

A sense of optimism, expressed by the question "Why not?" has been a central part of ABA and has had an enormous impact on its development from its earliest days. Why can't we teach a person who does not yet talk to talk? Why shouldn't we go ahead and try to change the environments of young children so that they will display more creativity? Why would we assume that this person with a developmental disability could not learn to do the same things that many of us do? Why not try to do it? (Heward, 2005, p. 323)

Fourth, ABA's peer-reviewed literature provides many examples of success in teaching students who had been considered unteachable. ABA's continuous record of achievements evokes a legitimate feeling of optimism that future developments will yield solutions to behavioral challenges that are currently beyond the existing technology. For example, in response to the perspective that some people have disabilities so severe and profound that they should be viewed as ineducable, Don Baer offered this perspective:

Some of us have ignored both the thesis that all persons are educable and the thesis that some persons are ineducable, and instead have experimented with ways to teach some previously unteachable people. Those experiments have steadily reduced the size of the apparently ineducable group relative to the obviously educable group. Clearly, we have not finished that adventure. Why predict its outcome, when we could simply pursue it, and just as well without a prediction? Why not pursue it to see if there comes a day when there is such a small class of apparently ineducable persons left that it consists of one elderly person who is put forward as ineducable. If that day comes, it will be a very nice day. And the next day will be even better. (D. M. Baer, February 15, 2002, personal communication, as cited in Heward, 2006, p. 473)

Definition of Applied Behavior Analysis

We began this chapter by stating that applied behavior analysis is concerned with the improvement and understanding of human behavior. We then described some of the attitudes and methods that are fundamental to scientific inquiry, briefly reviewed the development of the science and philosophy of behavior analysis, and examined the characteristics of ABA. All of that provided necessary context for the following definition of applied behavior analysis:


Applied behavior analysis is the science in which tactics derived from the principles of behavior are applied systematically to improve socially significant behavior and experimentation is used to identify the variables responsible for behavior change.

This definition includes six key components. First, the practice of applied behavior analysis is guided by the attitudes and methods of scientific inquiry. Second, all behavior change procedures are described and implemented in a systematic, technological manner. Third, not any means of changing behavior qualifies as applied behavior analysis: Only those procedures conceptually derived from the basic principles of behavior are circumscribed by the field. Fourth, the focus of applied behavior analysis is socially significant behavior. The fifth and sixth parts of the definition specify the twin goals of applied behavior analysis: improvement and understanding. Applied behavior analysis seeks to make meaningful improvement in important behavior and to produce an analysis of the factors responsible for that improvement.

Four Interrelated Domains of Behavior Analytic Science and Professional Practices Guided by That Science

The science of behavior analysis and its application to human problems consists of four domains: the three branches of behavior analysis—behaviorism, EAB, and ABA—and professional practice in various fields that is informed and guided by that science. Figure 2 identifies some of the defining features and characteristics of these four interrelated domains. Although most behavior analysts work primarily in one or two of the domains shown in Figure 2, it is common for a behavior analyst to function in multiple domains at one time or another (Hawkins & Anderson, 2002; Moore & Cooper, 2003).

A behavior analyst who pursues theoretical and conceptual issues is engaged in behaviorism, the philosophical domain of behavior analysis. A product of such work is Delprato's (2002) discussion of the importance of countercontrol (behavior by people experiencing aversive control by others that helps them escape and avoid the control while not reinforcing and sometimes punishing the controller's responses) toward an understanding of effective interventions for interpersonal relations and cultural design.

The experimental analysis of behavior is the basic research branch of the science. Basic research consists of experiments in laboratory settings with both human and nonhuman subjects with a goal of discovering and clarifying fundamental principles of behavior. An example is Hackenberg and Axtell's (1993) experiments investigating how choices made by humans are affected by the dynamic interaction of schedules of reinforcement that entail short- and long-term consequences.9

9 Schedules of reinforcement are discussed in the chapter entitled "Schedules of Reinforcement."
10 Examples and discussions of bridge or translational research can be found in the winter 1994 and winter 2003 issues of JABA; Fisher and Mazur (1997); Lattal and Neef (1996); and Vollmer and Hackenberg (2001).

Applied behavior analysts conduct experiments aimed at discovering and clarifying functional relations between socially significant behavior and its controlling variables, with which they can contribute to the further development of a humane and effective technology of behavior change. An example is research by Tarbox, Wallace, and Williams (2003) on the assessment and treatment of elopement (running or walking away from a caregiver without permission), a behavior that poses great danger for young children and people with disabilities.

The delivery of behavior analytic professional services occurs in the fourth domain. Behavior analysis practitioners design, implement, and evaluate behavior change programs that consist of behavior change tactics derived from fundamental principles of behavior discovered by basic researchers, and that have been experimentally validated for their effects on socially significant behavior by applied researchers. An example is when a therapist providing home-based treatment for a child with autism embeds frequent opportunities for the child to use his emerging social and language skills in the context of naturalistic, daily routines and ensures that the child's responses are followed with reinforcing events. Another example is a classroom teacher trained in behavior analysis who uses positive reinforcement and stimulus fading to teach students to identify and classify fish into their respective species by the shape, size, and location of their fins.

Although each of the four domains of ABA can be defined and practiced in its own right, none of the domains are, or should be, completely independent of and uninformed by developments in the others. Both the science and the application of its findings benefit when the four domains are interrelated and influence one another (cf. Critchfield & Kollins, 2001; Lattal & Neef, 1996; Stromer, McComas, & Rehfeldt, 2000). Evidence of the symbiotic relations between the basic and applied domains is evident in research that "bridges" basic and applied areas and in applied research that translates the knowledge derived from basic research "into state-of-the-art clinical practices for use in the community" (Lerman, 2003, p. 415).10


Figure 2 Some comparisons and relationships among the four domains of behavior analysis science and practice. Behaviorism and the experimental analysis of behavior (EAB) make up the science of behavior analysis; applied behavior analysis (ABA) and practice guided by behavior analysis make up the application of behavior analysis.

Province
• Behaviorism: Theory and philosophy
• EAB: Basic research
• ABA: Applied research
• Practice: Helping people behave more successfully

Primary activity
• Behaviorism: Conceptual and philosophical analysis
• EAB: Design, conduct, interpret, and report basic experiments
• ABA: Design, conduct, interpret, and report applied experiments
• Practice: Design, implement, and evaluate behavior change programs

Primary goal and product
• Behaviorism: Theoretical account of all behavior consistent with existing data
• EAB: Discover and clarify basic principles of behavior; functional relations between behavior and controlling variables
• ABA: A technology for improving socially significant behavior; functional relations between socially significant behavior and controlling variables
• Practice: Improvements in the lives of participants/clients as a result of changes in their behavior

Secondary goals
• Behaviorism: Identify areas in which empirical data are absent and/or conflict and suggest resolutions
• EAB: Identify questions for EAB and/or ABA to investigate further; raise theoretical issues
• ABA: Identify questions for EAB and/or ABA to investigate further; raise theoretical issues
• Practice: Increased efficiency in achieving primary goal; may identify questions for ABA and/or EAB

Agreement with existing database
• Behaviorism: As much as possible, but theory must go beyond database by design
• EAB: Complete—although differences among data sets exist, EAB provides the basic research database
• ABA: Complete—although differences among data sets exist, ABA provides the applied research database
• Practice: As much as possible, but practitioners must often deal with situations not covered by existing data

Testability
• Behaviorism: Partially—all behavior and variables of interest are not accessible (e.g., phylogenic contingencies)
• EAB: Mostly—technical limitations preclude measurement and experimental manipulation of some variables
• ABA: Mostly—same limitations as EAB plus those posed by applied settings (e.g., ethical concerns, uncontrolled events)
• Practice: Partially—all behavior and variables of interest are not accessible (e.g., a student's home life)

Scope (greatest in behaviorism, least in practice)
• Behaviorism: Wide scope because theory attempts to account for all behavior
• EAB: As much scope as the EAB database enables
• ABA: As much scope as the ABA database enables
• Practice: Narrow scope because the practitioner's primary focus is helping the specific situation

Precision (least in behaviorism, greatest in practice)
• Behaviorism: Minimal precision is possible because experimental data do not exist for all behavior encompassed by theory
• EAB: As much precision as EAB's current technology for experimental control and the researcher's skills enable
• ABA: As much precision as ABA's current technology for experimental control and the researcher's skills enable
• Practice: Maximum precision is sought to change behavior most effectively in the specific instance

Definition and Characteristics of Applied Behavior Analysis

41

The Promise and Potential of ABA

In a paper titled "A Futuristic Perspective for Applied Behavior Analysis," Jon Bailey (2000) stated that

It seems to me that applied behavior analysis is more relevant than ever before and that it offers our citizens, parents, teachers, and corporate and government leaders advantages that cannot be matched by any other psychological approach. . . . I know of no other approach in psychology that can boast state-of-the-art solutions to the most troubling social ills of the day. (p. 477)

We, too, believe that ABA's pragmatic, natural science approach to discovering environmental variables that reliably influence socially significant behavior and to developing a technology to take practical advantage of those discoveries offers humankind its best hope for solving many of its problems. It is important, however, to recognize that behavior analysis's knowledge of "how behavior works," even at the level of fundamental principles, is incomplete, as is the technology for changing socially significant behavior derived from those principles. There are aspects of behavior about which relatively little is known, and additional research, both basic and applied, is needed to clarify, extend, and fine-tune all existing knowledge (e.g., Critchfield & Kollins, 2001; Friman, Hayes, & Wilson, 1998; Murphy, McSweeny, Smith, & McComas, 2003; Stromer, McComas, & Rehfeldt, 2000).

Nevertheless, the still young science of applied behavior analysis has contributed to a full range of areas in which human behavior is important. Even an informal, cursory survey of the research published in applied behavior analysis reveals studies investigating virtually the full range of socially significant human behavior from A to Z and almost everywhere in between: AIDS prevention (e.g., DeVries, Burnette, & Redmon, 1991), conservation of natural resources (e.g., Brothers, Krantz, & McClannahan, 1994), education (e.g., Heward et al., 2005), gerontology (e.g., Gallagher & Keenan, 2000), health and exercise (e.g., De Luca & Holborn, 1992), industrial safety (e.g., Fox, Hopkins, & Anger, 1987), language acquisition (e.g., Drasgow, Halle, & Ostrosky, 1998), littering (e.g., Powers, Osborne, & Anderson, 1973), medical procedures (e.g., Hagopian & Thompson, 1999), parenting (e.g., Kuhn, Lerman, & Vorndran, 2003), seatbelt use (e.g., Van Houten, Malenfant, Austin, & Lebbon, 2005), sports (e.g., Brobst & Ward, 2002), and zoo management and care of animals (e.g., Forthman & Ogden, 1992).

Applied behavior analysis provides an empirical basis for not only understanding human behavior but also improving it. Equally important, ABA continually tests and evaluates its methods.

Summary

Some Basic Characteristics and a Definition of Science

1. Different types of scientific investigations yield knowledge that enables the description, prediction, and/or control of the phenomena studied.

2. Descriptive studies yield a collection of facts about the observed events that can be quantified, classified, and examined for possible relations with other known facts.

3. Knowledge gained from a study that finds the systematic covariation between two events—termed a correlation—can be used to predict the probability that one event will occur based on the occurrence of the other event.

4. Results of experiments that show that specific manipulations of one event (the independent variable) produce a reliable change in another event (the dependent variable), and that the change in the dependent variable was unlikely the result of extraneous factors (confounding variables)—a finding known as a functional relation—can be used to control the phenomena under investigation.

5. The behavior of scientists in all fields is characterized by a common set of assumptions and attitudes:

• Determinism—the assumption that the universe is a lawful and orderly place in which phenomena occur as a result of other events.

• Empiricism—the objective observation of the phenomena of interest.

• Experimentation—the controlled comparison of some measure of the phenomenon of interest (the dependent variable) under two or more different conditions in which only one factor at a time (the independent variable) differs from one condition to another.

• Replication—repeating experiments (and independent variable conditions within experiments) to determine the reliability and usefulness of findings.

• Parsimony—simple, logical explanations must be ruled out, experimentally or conceptually, before more complex or abstract explanations are considered.

• Philosophic doubt—continually questioning the truthfulness and validity of all scientific theory and knowledge.

A Brief History of the Development of Behavior Analysis

6. Behavior analysis consists of three major branches: behaviorism, the experimental analysis of behavior (EAB), and applied behavior analysis (ABA).

7. Watson espoused an early form of behaviorism known as stimulus–response (S–R) psychology, which did not account for behavior without obvious antecedent causes.


8. Skinner founded the experimental analysis of behavior (EAB), a natural science approach for discovering orderly and reliable relations between behavior and various types of environmental variables of which it is a function.

9. EAB is characterized by these methodological features:

• Rate of response is the most common dependent variable.

• Repeated or continuous measurement is made of carefully defined response classes.

• Within-subject experimental comparisons are used instead of designs comparing the behavior of experimental and control groups.

• The visual analysis of graphed data is preferred over statistical inference.

• A description of functional relations is valued over formal theory testing.

10. Through thousands of laboratory experiments, Skinner and his colleagues and students discovered and verified the basic principles of operant behavior that provide the empirical foundation for behavior analysis today.

11. Skinner wrote extensively about a philosophy for a science of behavior he called radical behaviorism. Radical behaviorism attempts to explain all behavior, including private events such as thinking and feeling.

12. Methodological behaviorism is a philosophical position that considers behavioral events that cannot be publicly observed to be outside the realm of the science.

13. Mentalism is an approach to understanding behavior that assumes that a mental, or "inner," dimension exists that differs from a behavioral dimension and that phenomena in this dimension either directly cause or at least mediate some forms of behavior; it relies on hypothetical constructs and explanatory fictions.

14. The first published report of the application of operant conditioning with a human subject was a study by Fuller (1949), in which an arm-raising response was conditioned in an adolescent with profound retardation.

15. The formal beginnings of applied behavior analysis can be traced to 1959 and the publication of Ayllon and Michael's article, "The Psychiatric Nurse as a Behavioral Engineer."

16. Contemporary applied behavior analysis (ABA) began in 1968 with the publication of the first issue of the Journal of Applied Behavior Analysis (JABA).

Defining Characteristics of Applied Behavior Analysis

17. Baer, Wolf, and Risley (1968, 1987) stated that a research study or behavior change program should meet seven defining dimensions to be considered applied behavior analysis:

• Applied—investigates socially significant behaviors with immediate importance to the subject(s).

• Behavioral—entails precise measurement of the actual behavior in need of improvement and documents that it was the subject's behavior that changed.

• Analytic—demonstrates experimental control over the occurrence and nonoccurrence of the behavior—that is, a functional relation is demonstrated.

• Technological—the written description of all procedures used in the study is sufficiently complete and detailed to enable others to replicate it.

• Conceptually systematic—behavior change interventions are derived from basic principles of behavior.

• Effective—improves behavior sufficiently to produce practical results for the participant/client.

• Generality—produces behavior changes that last over time, appear in other environments, or spread to other behaviors.

18. ABA offers society an approach toward solving many of its problems that is accountable, public, doable, empowering, and optimistic.

A Definition of Applied Behavior Analysis

19. Applied behavior analysis is the science in which tactics derived from the principles of behavior are applied systematically to improve socially significant behavior and experimentation is used to identify the variables responsible for behavior change.

20. Behavior analysts work in one or more of four interrelated domains: behaviorism (theoretical and philosophical issues), the experimental analysis of behavior (basic research), applied behavior analysis (applied research), and professional practice (providing behavior analytic services to consumers).

21. ABA's natural science approach to discovering environmental variables that reliably influence socially significant behavior and developing a technology to take practical advantage of those discoveries offers humankind its best hope for solving many of its problems.


antecedent, automaticity of reinforcement, aversive stimulus, behavior, behavior change tactic, conditioned punisher, conditioned reflex, conditioned reinforcer, conditioned stimulus, consequence, contingency, contingent, deprivation, discriminated operant, discriminative stimulus (SD), environment, extinction, habituation, higher order conditioning, history of reinforcement, motivating operation, negative reinforcement, neutral stimulus, ontogeny, operant behavior, operant conditioning, phylogeny, positive reinforcement, principle of behavior, punisher, punishment, reflex, reinforcement, reinforcer, repertoire, respondent behavior, respondent conditioning, respondent extinction, response, response class, satiation, selection by consequences, stimulus, stimulus class, stimulus control, stimulus–stimulus pairing, three-term contingency, unconditioned punisher, unconditioned reinforcer, unconditioned stimulus

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List,© Third Edition

Content Area 3: Principles, Processes, and Concepts

3-1 Define and provide examples of behavior/response/response class.

3-2 Define and provide examples of stimulus and stimulus class.

3-3 Define and provide examples of positive and negative reinforcement.

3-4 Define and provide examples of conditioned and unconditioned reinforcement.

3-5 Define and provide examples of positive and negative punishment.

3-6 Define and provide examples of conditioned and unconditioned punishment.

3-7 Define and provide examples of stimulus control.

3-8 Define and provide examples of establishing operations.

3-9 Define and provide examples of behavioral contingencies.


Basic Concepts

Key Terms

From Chapter 2 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.

44

This chapter defines the basic elements involved in a scientific analysis of behavior and introduces several principles that have been discovered through such an analysis. The first concept we examine—behavior—is the most fundamental of all. Because the controlling variables of primary importance in applied behavior analysis are located in the environment, the concepts of environment and stimulus are defined next. We then introduce several essential findings that the scientific study of behavior–environment relations has discovered. Two functionally distinct types of behavior—respondent and operant—are described, and the basic ways the environment influences each type of behavior—respondent conditioning and operant conditioning—are introduced. The three-term contingency—a concept for expressing and organizing the temporal and functional relations between operant behavior and environment—and its importance as a focal point in applied behavior analysis are then explained.1 The chapter's final section recognizes the incredible complexity of human behavior, reminds us that behavior analysts possess an incomplete knowledge, and identifies some of the obstacles and challenges faced by those who strive to change behavior in applied settings.

Behavior

What, exactly, is behavior? Behavior is the activity of living organisms. Human behavior is everything people do, including how they move and what they say, think, and feel. Tearing open a bag of peanuts is behavior, and so is thinking how good the peanuts will taste once the bag is open. Reading this sentence is behavior, and if you're holding the book, so is feeling its weight and shape in your hands.

Although words such as activity and movement adequately communicate the general notion of behavior, a more precise definition is needed for scientific purposes. How a scientific discipline defines its subject matter exerts profound influence on the methods of measurement, experimentation, and theoretical analysis that are appropriate and possible.

Building on Skinner's (1938) definition of behavior as "the movement of an organism or of its parts in a frame of reference provided by the organism or by various external objects or fields" (p. 6), Johnston and Pennypacker (1980, 1993a) articulated the most conceptually sound and empirically complete definition of behavior to date.

The behavior of an organism is that portion of an organism's interaction with its environment that is characterized by detectable displacement in space through time of some part of the organism and that results in a measurable change in at least one aspect of the environment. (p. 23)

Johnston and Pennypacker (1993a) discussed the major elements of each part of this definition. The phrase behavior of an organism restricts the subject matter to the activity of living organisms, leaving notions such as the "behavior" of the stock market outside the realm of the scientific use of the term.

The phrase portion of the organism's interaction with the environment specifies "the necessary and sufficient conditions for the occurrence of behavior as (a) the existence of two separate entities, organism and environment, and (b) the existence of a relation between them" (Johnston & Pennypacker, 1993a, p. 24). The authors elaborated on this part of the definition as follows:

Behavior is not a property or attribute of the organism. It happens only when there is an interactive condition between an organism and its surroundings, which include its own body. This means that independent states of the organism, whether real or hypothetical, are not behavioral events, because there is no interactive process. Being hungry or being anxious are examples of states that are sometimes confused with the behavior that they are supposed to explain. Neither phrase specifies an environmental agent with which the hungry or anxious organism interacts, so no behavior is implied.


1The reader should not be overwhelmed by the many technical terms and concepts contained in this chapter. With the exception of the material on respondent behavior, all of the concepts introduced in this chapter are explained in greater detail in subsequent chapters. This initial overview of basic concepts is intended to provide background information that will facilitate understanding those portions of the text that precede the more detailed explanations.

Content Area 3: Principles, Processes, and Concepts (continued)

3-13 Describe and provide examples of the respondent conditioning paradigm.

3-14 Describe and provide examples of the operant conditioning paradigm.

© 2006 The Behavior Analyst Certification Board, Inc.,® (BACB®) all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.


Similarly, independent conditions or changes in the environment do not define behavioral occurrences because no interaction is specified. Someone walking in the rain gets wet, but "getting wet" is not an instance of behavior. A child may receive tokens for correctly working math problems, but "receiving a token" is not behavior. Receiving a token implies changes in the environment but does not suggest or require change in the child's movement. In contrast, both doing math problems and putting the token in a pocket are behavioral events because the environment both prompts the child's actions and is then changed by them. (Johnston & Pennypacker, 1993a, p. 24)

Behavior is movement, regardless of scale; hence the phrase displacement in space through time. In addition to excluding static states of the organism, the definition does not include bodily movements produced by the action of independent physical forces as behavioral events. For example, being blown over by a strong gust of wind is not behavior; given sufficient wind, nonliving objects and organisms move similarly. Behavior can be accomplished only by living organisms. A useful way to tell whether movement is behavior is to apply the dead man test: "If a dead man can do it, it ain't behavior. And if a dead man can't do it, then it is behavior" (Malott & Trojan Suarez, 2004, p. 9). So, although being knocked down by strong wind is not behavior (a dead man would also be blown over), moving arms and hands in front of one's face, tucking and rolling, and yelling "Whoa!" as one is being blown over are behaviors.2

The displacement in space through time phrase also highlights the properties of behavior most amenable to measurement. Johnston and Pennypacker (1993a) referred to these fundamental properties by which behavior can be measured as temporal locus (when in time a specified behavior occurs), temporal extent (the duration of a given behavioral event), and repeatability (the frequency with which a specified behavior occurs over time).

Acknowledging that the last phrase of the definition—that results in a measurable change in at least one aspect of the environment—is somewhat redundant, Johnston and Pennypacker (1993a) noted that it emphasizes an important qualifier for the scientific study of behavior.

Because the organism cannot be separated from an environment and because behavior is the relation between organism and environment, it is impossible for a behavioral event not to influence the environment in some way. . . . This is an important methodological point because it says that behavior must be detected and measured in terms of its effects on the environment. (p. 27)

2Ogden Lindsley originated the dead man test in the mid-1960s as a way to help teachers determine whether they were targeting real behaviors for measurement and change as opposed to inanimate states such as "being quiet."

3Most behavior analysts use the word behavior both as a mass noun to refer to the subject matter of the field in general or a certain type or class of behavior (e.g., operant behavior, study behavior) and as a count noun to refer to specific instances (e.g., two aggressive behaviors). The word behavior is often implied and unnecessary to state. We agree with Friman's (2004) recommendation that, "If the object of our interest is hitting and spitting, let's just say 'hitting' and 'spitting.' Subsequently, when we are gathering our thoughts with a collective term, we can call them behaviors" (p. 105).

As Skinner (1969) wrote, "To be observed, a response must affect the environment—it must have an effect upon an observer or upon an instrument which in turn can affect an observer. This is as true of the contraction of a small group of muscle fibers as of pressing a lever or pacing a figure 8" (p. 130).

The word behavior is usually used in reference to a larger set or class of responses that share certain physical dimensions (e.g., hand-flapping behavior) or functions (e.g., study behavior).3 The term response refers to a specific instance of behavior. A good technical definition of response is an "action of an organism's effector. An effector is an organ at the end of an efferent nerve fiber that is specialized for altering its environment mechanically, chemically, or in terms of other energy changes" (Michael, 2004, p. 8, italics in original). Human effectors include the striped muscles (i.e., skeletal muscles such as biceps and quadriceps), smooth muscles (e.g., stomach and bladder muscles), and glands (e.g., adrenal gland).

Like stimulus changes in the environment, behavior can be described by its form, or physical characteristics. Response topography refers to the physical shape or form of behavior. For example, the hand and finger movements used to open a bag of peanuts can be described by their topographical elements. However, careful observation will reveal that the topography differs somewhat each time a person opens a bag of snacks. The difference may be significant or slight, but each "bag opening response" will vary somewhat from all others.

Although it is sometimes useful to describe behavior by its topography, behavior analysis is characterized by a functional analysis of the effects of behavior on the environment. A group of responses with the same function (that is, each response in the group produces the same effect on the environment) is called a response class. Membership in some response classes is open to responses of widely varying form (e.g., there are many ways to open a bag of peanuts), whereas the topographical variation among members of other response classes is limited (e.g., a person's signature, grip on a golf club).

Another reason underscoring the importance of a functional analysis of behavior over a structural or topographical description is that two responses of the same topography can be vastly different behaviors depending on the controlling variables. For example, saying the word fire while looking at the letters, f-i-r-e, is a vastly different behavior from yelling "Fire!" when smelling smoke or seeing flames in a crowded theatre.

Behavior analysts use the term repertoire in at least two ways. Repertoire is sometimes used to refer to all of the behaviors that a person can do. More often the term denotes a set or collection of knowledge and skills a person has learned that are relevant to particular settings or tasks. In the latter sense, each person has acquired or learned multiple repertoires. For example, each of us has a repertoire of behaviors appropriate for informal social situations that differs somewhat (or a lot) from the behaviors we use to navigate formal situations. And each person has repertoires with respect to language skills, academic tasks, everyday routines, recreation, and so on. When you complete your study of this text, your repertoire of knowledge and skills in applied behavior analysis will be enriched.

Environment

All behavior occurs within an environmental context; behavior cannot be emitted in an environmental void or vacuum. Johnston and Pennypacker (1993a) offered the following definition of environment and two critical implications of that definition for a science of behavior:

"Environment" refers to the conglomerate of real circumstances in which the organism or referenced part of the organism exists. A simple way to summarize its coverage is as "everything except the moving parts of the organism involved in the behavior." One important implication . . . is that only real physical events are included.

Another very important consequence of this conception of the behaviorally relevant environment is that it can include other aspects of the organism. That is, the environment for a particular behavior can include not only the organism's external features but physical events inside its skin. For instance, scratching our skin is presumably under control of the external visual stimulus provided by your body, particularly that part being scratched, as well as the stimulation that we call itching, which lies inside the skin. In fact, both types of stimulation very often contribute to behavioral control. This means that the skin is not an especially important boundary in the understanding of behavioral laws, although it can certainly provide observational challenges to discovering those laws. (p. 28)

The environment is a complex, dynamic universe of events that differs from instance to instance. When behavior analysts describe particular aspects of the environment, they talk in terms of stimulus conditions or events.4 A good definition of stimulus is "an energy change that affects an organism through its receptor cells" (Michael, 2004, p. 7). Humans have receptor systems that detect stimulus changes occurring outside and inside the body. Exteroceptors are sense organs that detect external stimuli and enable vision, hearing, olfaction, taste, and cutaneous touch. Two types of sense organs sensitive to stimulus changes within the body are interoceptors, which are sensitive to stimuli originating in the viscera (e.g., feeling a stomach ache), and proprioceptors, which enable the kinesthetic and vestibular senses of movement and balance. Applied behavior analysts most often study the effects of stimulus changes that occur outside the body. External stimulus conditions and events are not only more accessible to observation and manipulation than are internal conditions, but also they are key features of the physical and social world in which people live.

4Although the concepts of stimulus and response have proven useful for conceptual, experimental, and applied analyses of behavior, it is important to recognize that stimuli and responses do not exist as discrete events in nature. Stimuli and responses are detectable "slices" of the continuous and ever-changing interaction between an organism and its environment chosen by scientists and practitioners because they have proven useful in understanding and changing behavior. However, the slices imposed by the behavior analyst may not parallel naturally occurring divisions.

5Respondent conditioning and the operant principles mentioned here are introduced later in this chapter.

The environment influences behavior primarily by stimulus change and not static stimulus conditions. As Michael (2004) noted, when behavior analysts speak of the presentation or occurrence of a stimulus, they usually mean stimulus change.

For example, in respondent conditioning the conditioned stimulus may be referred to as a tone. However, the relevant event is actually a change from the absence of tone to the tone sounding . . . , and although this is usually understood without having to be mentioned, it can be overlooked in the analysis of more complex phenomena. Operant discriminative stimuli, conditioned reinforcers, conditioned punishers, and conditioned motivative variables are also usually important as stimulus changes, not static conditions (Michael, 2004, pp. 7–8).5

Stimulus events can be described formally (by their physical features), temporally (by when they occur with respect to a behavior of interest), and functionally (by their effects on behavior). Behavior analysts use the term stimulus class to refer to any group of stimuli sharing a predetermined set of common elements in one or more of these dimensions.


Formal Dimensions of Stimuli

Behavior analysts often describe, measure, and manipulate stimuli according to their formal dimensions, such as size, color, intensity, weight, and spatial position relative to other objects. Stimuli can be nonsocial (e.g., a red light, a high-pitched sound) or social (e.g., a friend asking, "Want some more peanuts?").

Temporal Loci of Stimuli

Because behavior and the environmental conditions that influence it occur within and across time, the temporal location of stimulus changes is important. In particular, behavior is affected by stimulus changes that occur prior to and immediately after the behavior. The term antecedent refers to environmental conditions or stimulus changes that exist or occur prior to the behavior of interest.

Because behavior cannot occur in an environmental void or vacuum, every response takes place in the context of a particular situation or set of antecedent conditions. These antecedent events play a critical part in learning and motivation, and they do so irrespective of whether the learner or someone in the role of behavior analyst or teacher has planned or is even aware of them.

For example, just some of the functionally relevant antecedents for a student's performance on a timed math test might include the following: the amount of sleep the student had the night before; the temperature, lighting, and seating arrangements in the classroom; the teacher reminding the class that students who beat their personal best scores on the test will get a free homework pass; and the specific type, format, and sequence of math problems on the test. Each of those antecedent variables (and others) has the potential to exert a great deal, a little, or no noticeable effect on performance as a function of the student's experiences with respect to a particular antecedent. (Heward & Silvestri, 2005, p. 1135)

A consequence is a stimulus change that follows a behavior of interest. Some consequences, especially those that are immediate and relevant to current motivational states, have significant influence on future behavior; other consequences have little effect. Consequences combine with antecedent conditions to determine what is learned. Again, this is true whether the individual or someone trying to change his behavior is aware of or systematically plans the consequences.

Like antecedent stimulus events, consequences may also be social or nonsocial events. Table 1 shows examples of various combinations of social and nonsocial antecedent and consequent events for four behaviors.

Behavioral Functions of Stimulus Changes

Some stimulus changes exert immediate and powerful control over behavior, whereas others have delayed effects, or no apparent effect. Even though we can and often do describe stimuli by their physical characteristics (e.g., the pitch and decibel level of a tone, the topography of a person's hand and arm movements), stimulus changes are understood best through a functional analysis of their effects on behavior. For example, the same decibel tone that functions in one environment and set of conditions as a prompt for checking the clothes in the dryer may function as a warning signal to fasten a seat belt in another setting or situation; the same hand and arm motion that produces a smile and a "Hi" from another person in one set of conditions receives a scowl and obscene gesture in another.

Stimulus changes can have one or both of two basic kinds of functions or effects on behavior: (a) an immediate but temporary effect of increasing or decreasing the current frequency of the behavior, and/or (b) a delayed but relatively permanent effect in terms of the frequency of that type of behavior in the future (Michael, 1995). For example, a sudden downpour on a cloudy day is likely to increase immediately the frequency of all behavior that has resulted in the person successfully escaping rain in the past, such as running for cover under an awning or pulling her jacket over her head. If the person had decided not to carry her umbrella just before leaving the house, the downpour may decrease the frequency of that behavior on cloudy days in the future.

Table 1 Antecedent (Situation) and Consequent Events Can Be Nonsocial (Italicized), Social (Boldface), or a Combination of Social and Nonsocial

Situation | Response | Consequence
Drink machine | Deposit coins | Cold drink
Five cups on table | "One-two-three-four-five cups" | Teacher nods and smiles
Friend says "turn left" | Turn left | Arrive at destination
Friend asks "What time is it?" | "Six-fifteen" | Friend says "Thanks"

From "Individual Behavior, Culture, and Social Change" by S. S. Glenn, 2004, The Behavior Analyst, 27, p. 136. Copyright 2004 by the Association for Behavior Analysis. Used by permission.

Respondent Behavior

All intact organisms enter the world able to respond in predictable ways to certain stimuli; no learning is required. These ready-made behaviors protect against harmful stimuli (e.g., eyes watering and blinking to remove particles on the cornea), help regulate the internal balance and economy of the organism (e.g., changes in heart rate and respiration in response to changes in temperature and activity levels), and promote reproduction (e.g., sexual arousal). Each of these stimulus–response relations, called a reflex, is part of the organism's genetic endowment, a product of natural evolution because of its survival value to the species. Each member of a given species comes equipped with the same repertoire of unconditioned (or unlearned) reflexes. Reflexes provide the organism with a set of built-in responses to specific stimuli; these are behaviors the individual organism would not have time to learn. Table 2 shows examples of reflexes common to humans.

Table 2 Examples of Unconditioned Human Reflexes Susceptible to Respondent Conditioning

Unconditioned stimulus | Unconditioned response | Type of effector
Loud sound or touch to cornea | Eye blink (lid closes) | Striped muscle
Tactile stimulus under lid or chemical irritant (smoke) | Lachrimal gland secretion (eyes watering) | Gland (duct)
Irritation to nasal mucosa | Sneezing | Striped and smooth muscle
Irritation to throat | Coughing | Striped and smooth muscle
Low temperature | Shivering, surface vasoconstriction | Striped and smooth muscle
High temperature | Sweating, surface vasodilation | Gland, smooth muscle
Loud sound | Contraction of tensor tympani and stapedius muscles (reduces amplitude of ear drum vibrations) | Striped muscles
Food in mouth | Salivation | Gland
Undigestible food in stomach | Vomiting | Striped and smooth muscle
Pain stimulus to hand or foot | Hand or foot withdrawal | Striped muscle
A single stimulus that is painful or very intense or very unusual | Activation syndrome—all of the following:
    Heart rate increase | Cardiac muscle
    Adrenaline secretion | Gland (ductless)
    Liver release of sugar into bloodstream | Gland (duct)
    Constriction of visceral blood vessels | Smooth muscle
    Dilation of blood vessels in skeletal muscles | Smooth muscle
    Galvanic skin response (GSR) | Gland (duct)
    Pupillary dilation (and many more) | Smooth muscle

From Concepts and Principles of Behavior Analysis (rev. ed.) by J. L. Michael, 2004, pp. 10–11. Copyright 2004 by Society for the Advancement of Behavior Analysis, Kalamazoo, MI.

The response component of the stimulus–response reflex is called respondent behavior. Respondent behavior is defined as behavior that is elicited by antecedent stimuli. Respondent behavior is induced, or brought out, by a stimulus that precedes the behavior; nothing else is required for the response to occur. For example, bright light in the eyes (antecedent stimulus) will elicit pupil contraction (respondent). If the relevant body parts (i.e., receptors and effectors) are intact, pupil contraction will occur every time. However, if the eliciting stimulus is presented repeatedly over a short span of time, the strength or magnitude of the response will diminish, and in some cases the response may not occur at all. This process of gradually diminishing response strength is known as habituation.
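The course of habituation just described can be sketched numerically. The decay factor and trial count below are our own illustrative assumptions, not values from the text; the sketch only shows response magnitude diminishing across closely spaced presentations of the same eliciting stimulus.

```python
# Toy illustration of habituation: repeated rapid presentations of the
# same eliciting stimulus progressively weaken response magnitude.
# The decay factor (0.7) is an illustrative assumption, not from the text.

def present_eliciting_stimulus(magnitude, decay=0.7):
    """Each closely spaced presentation weakens the next response."""
    return magnitude * decay

magnitude = 1.0          # full-strength reflex response on the first trial
history = []
for _ in range(8):       # eight rapid presentations
    history.append(round(magnitude, 3))
    magnitude = present_eliciting_stimulus(magnitude)

print(history)           # response strength diminishes trial by trial
```

Note that habituation concerns only temporary weakening under rapid repeated elicitation; it is distinct from respondent extinction, described later, in which a conditioned stimulus loses its eliciting function.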

Respondent Conditioning

New stimuli can acquire the ability to elicit respondents. Called respondent conditioning, this type of learning is associated most with the Russian physiologist Ivan Petrovich Pavlov (1849–1936).6 While studying the digestive system of dogs, Pavlov noticed that the animals salivated every time his laboratory assistant opened the cage door to feed them. Dogs do not naturally salivate at the sight of someone in a lab coat, but in Pavlov's laboratory they consistently salivated when the door was opened. His curiosity aroused, Pavlov (1927) designed and conducted an historic series of experiments. The result of this work was the experimental demonstration of respondent conditioning.

Pavlov started a metronome just an instant before feeding the dogs. Prior to being exposed to this stimulus–stimulus pairing procedure, food in the mouth, an unconditioned stimulus (US), elicited salivation, but the sound of the metronome, a neutral stimulus (NS), did not. After experiencing several trials consisting of the sound of the metronome followed by the presentation of food, the dogs began salivating in response to the sound of the metronome. The metronome had thus become a conditioned stimulus (CS), and a conditioned reflex was established.7 Respondent conditioning is most effective when the NS is presented immediately before or simultaneous with the US. However, some conditioning effects can sometimes be achieved with considerable delay between the onset of the NS and the onset of the US, and even with backward conditioning in which the US precedes the NS.

Respondent Extinction

Pavlov also discovered that once a conditioned reflex was established, it would weaken and eventually cease altogether if the conditioned stimulus was presented repeatedly in the absence of the unconditioned stimulus. For example, if the sound of the metronome was presented repeatedly without being accompanied or followed by food, it would gradually lose its ability to elicit salivation. The procedure of repeatedly presenting a conditioned stimulus without the unconditioned stimulus until the conditioned stimulus no longer elicits the conditioned response is called respondent extinction.

6Respondent conditioning is also referred to as classical or Pavlovian conditioning. Pavlov was not the first to study reflexes; like virtually all scientists, his work was an extension of others, most notably Ivan Sechenov (1829–1905) (Kazdin, 1978). See Gray (1979) and Rescorla (1988) for excellent and interesting descriptions of Pavlov's research.

7Unconditioned stimulus and conditioned stimulus are the most commonly used terms to denote the stimulus component of respondent relations. However, because the terms ambiguously refer to both the immediate evocative (eliciting) effect of the stimulus change and its somewhat permanent and delayed function-altering effect (the conditioning effect on other stimuli), Michael (1995) recommended that the terms unconditioned elicitor (UE) and conditioned elicitor (CE) be used when referring to the evocative function of these variables.
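The pairing and extinction procedures described in this section can be sketched as a toy simulation. The associative-strength update rule, the learning rate, and the response threshold below are our own illustrative assumptions, not part of the chapter's account; the sketch only shows an NS acquiring and then losing eliciting function.

```python
# Toy simulation of respondent conditioning and respondent extinction.
# The linear strength-update rule (rate 0.3) and the 0.5 response
# threshold are illustrative assumptions, not from the text.

def pair_ns_with_us(strength, rate=0.3):
    """One NS+US pairing trial: the NS gains associative strength."""
    return strength + rate * (1.0 - strength)

def present_cs_alone(strength, rate=0.3):
    """One CS-alone trial (extinction): strength decays toward zero."""
    return strength * (1.0 - rate)

def elicits_response(strength, threshold=0.5):
    """Treat the stimulus as a CS once its strength crosses a threshold."""
    return strength >= threshold

strength = 0.0                      # metronome starts as a neutral stimulus
for _ in range(5):                  # conditioning: metronome paired with food
    strength = pair_ns_with_us(strength)
print(elicits_response(strength))   # True: the metronome now functions as a CS

for _ in range(10):                 # extinction: metronome presented alone
    strength = present_cs_alone(strength)
print(elicits_response(strength))   # False: the CS has returned to NS status
```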

Figure 1 shows schematic representations of respondent conditioning and respondent extinction. In this example, a puff of air produced by a glaucoma-testing machine is the US for the eye blink reflex. The ophthalmologist's finger pressing the button of the machine makes a faint clicking sound. But prior to conditioning, the clicking sound is an NS: It has no effect on eye blinking. After being paired with the air puff just a few times, the finger-on-the-button sound becomes a CS: It elicits eye blinking as a conditioned reflex.

Conditioned reflexes can also be established by stimulus–stimulus pairing of an NS with a CS. This form of respondent conditioning is called higher order (or secondary) conditioning. For example, secondary respondent conditioning could occur in a patient who has learned to blink at the clicking sound of the button during the glaucoma-testing situation as follows. The patient detects a slight movement of the ophthalmologist's finger (NS) just before it contacts the button that makes the clicking sound (CS). After several NS–CS pairings, movement of the ophthalmologist's finger may become a CS capable of eliciting blinking.

The form, or topography, of respondent behaviors changes little, if at all, during a person's lifetime. There are two exceptions: (a) Certain reflexes disappear with maturity, such as that of grasping an object placed in the palm of the hand, a reflex usually not seen after the age of 3 months (Bijou & Baer, 1965); and (b) several unconditioned reflexes first appear later in life, such as those related to sexual arousal and reproduction. However, during a person's lifetime an infinite range of stimuli that were previously neutral (e.g., the high-pitched whine of the dentist's drill) can come to elicit respondents (i.e., increased heartbeat and perspiration).

Respondents make up a small percentage of the behaviors typically of interest to the applied behavior analyst. As Skinner (1953) pointed out, "Reflexes, conditioned or otherwise, are mainly concerned with the internal physiology of the organism. We are most often interested, however, in behavior which has some effect upon the surrounding world" (p. 59). It is this latter type of behavior, and the process by which it is learned, that we will now examine.


Figure 1 (diagram, rendered as text):

Before conditioning:      US (air puff) → UR (eye blink);  NS (clicking sound) → no eye blink
Respondent conditioning:  NS + US (click & air puff) → UR (eye blink)  (repeated trials)
Product of conditioning:  US (air puff) → UR (eye blink);  CS (clicking sound) → CR (eye blink)
Respondent extinction:    CS (clicking sound) → CR (eye blink)  (repeated trials)
Result of extinction:     US (air puff) → UR (eye blink);  NS (clicking sound) → no eye blink

Figure 1 Schematic representation of respondent conditioning and respondent extinction. The top panel shows an unconditioned reflex: a puff of air (unconditioned stimulus, or US) elicits an eye blink (an unconditioned response, or UR). Before conditioning, a clicking sound (a neutral stimulus, or NS) has no effect on eye blinking. Respondent conditioning consists of a stimulus–stimulus pairing procedure in which the clicking sound is presented repeatedly just prior to, or simultaneously with, the air puff. The product of respondent conditioning is a conditioned reflex (CR): In this case the clicking sound has become a conditioned stimulus (CS) that elicits an eye blink when presented alone. The bottom two panels illustrate the procedure and outcome of respondent extinction: Repeated presentations of the CS alone gradually weaken its ability to elicit eye blinking to the point where the CS eventually becomes an NS again. The unconditioned reflex remains unchanged before, during, and after respondent conditioning.

Operant Behavior

A baby in a crib moves her hands and arms, setting in motion a mobile dangling above. The baby is literally operating on her environment, and the mobile's movement and musical sounds—stimulus changes produced by the baby's batting at the toy with her hands—are immediate consequences of her behavior. Her movements are continuously changing as a result of those consequences.

Members of a species whose only way of interacting with the world is a genetically determined fixed set of responses would find it difficult to survive, let alone thrive, in a complex environment that differed from the environment in which their distant ancestors evolved. Although respondent behavior comprises a critically important set of "hardwired" responses, respondent behavior does not provide an organism with the ability to learn from the consequences of its actions. An organism whose behavior is unchanged by its effects on the environment will be unable to adapt to a changing one.

8The verb emit is used in conjunction with operant behavior. Its use fits in well with the definition of operant behavior, allowing reference to the consequences of behavior as the major controlling variables. The verb elicit is inappropriate to use with operant behavior because it implies that an antecedent stimulus has primary control of the behavior.

Fortunately, in addition to her repertoire of genetically inherited respondent behaviors, our baby entered her world with some uncommitted behavior that is highly malleable and susceptible to change through its consequences. This type of behavior, called operant behavior, enables the baby over the course of her life to learn novel, increasingly complex responses to an ever-changing world.8

Operant behavior is any behavior whose future frequency is determined primarily by its history of consequences. Unlike respondent behavior, which is elicited by antecedent events, operant behavior is selected, shaped, and maintained by the consequences that have followed it in the past.
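Selection of operant behavior by its consequences can be sketched with a toy simulation of the baby-and-mobile example above. The response names, weights, and update rule are our own illustrative assumptions, not a formal model from the text; the sketch only shows a response becoming more frequent because of the consequence that follows it.

```python
import random

# Toy model of selection by consequences: a response followed by a
# reinforcing consequence becomes more frequent over time. The weight
# scheme below is an illustrative assumption, not the chapter's account.

random.seed(1)  # fixed seed so the run is repeatable

weights = {"bat mobile": 1.0, "kick blanket": 1.0, "turn head": 1.0}

def emit(weights):
    """Emit one response, sampled in proportion to current weights."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for response, w in weights.items():
        r -= w
        if r <= 0:
            return response
    return response

for _ in range(200):
    response = emit(weights)
    if response == "bat mobile":  # only this response moves the mobile
        weights[response] += 0.5  # the consequence strengthens the operant

# After a history of consequences, the reinforced response dominates
# the repertoire, while unreinforced responses keep their initial weight.
print(max(weights, key=weights.get))
```

The positive-feedback character of the loop, in which a strengthened response is emitted more often and therefore contacts its consequence more often, is the point of the example, not the particular numbers.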


Basic Concepts

Unlike respondent behaviors, whose topography and basic functions are predetermined, operant behaviors can take a virtually unlimited range of forms. The form and function of respondent behaviors are constant and can be identified by their topography (e.g., the basic form and function of salivation is always the same). By comparison, however, the “meaning” of operant behavior cannot be determined by its topography. Operants are defined functionally, by their effects. Not only does the same operant often include responses of widely different topographies (e.g., a diner may obtain a glass of water by nodding his head, pointing to a glass of water, or saying yes to a waiter), but also, as Skinner (1969) explained, the same movements comprise different operants under different conditions.

Allowing water to pass over one’s hands can perhaps be adequately described as topography, but “washing one’s hands” is an “operant” defined by the fact that, when one has behaved this way in the past, one’s hands have become clean—a condition which has become reinforcing because, say, it has minimized a threat of criticism or contagion. Behavior of precisely the same topography would be part of another operant if the reinforcement had consisted of simple stimulation (e.g., “tickling”) of the hands or the evocation of imitative behavior in a child whom one is teaching to wash his hands. (p. 127)

Table 3 compares and contrasts defining features and key characteristics of respondent behavior and operant behavior.

Selection by Consequences

Human behavior is the joint product of (i) the contingencies of survival responsible for the natural selection of the species and (ii) the contingencies of reinforcement responsible for the repertoires acquired by its members, including (iii) the special contingencies maintained by the social environment. [Ultimately, of course, it is all a matter of natural selection, since operant conditioning is an evolved process, of which cultural practices are special applications.]

—B. F. Skinner (1981, p. 502)

Skinner’s discovery and subsequent elucidation of operant selection by consequences have rightly been called “revolutionary” and “the bedrock on which other behavioral principles rest” (Glenn, 2004, p. 134). Selection by consequences “anchors a new paradigm in the life sciences known as selectionism. A basic tenet of this position is that all forms of life, from single cells to complex cultures, evolve as a result of selection with respect to function” (Pennypacker, 1994, pp. 12–13).

Selection by consequences operates during the lifetime of the individual organism (ontogeny) and is a conceptual parallel to Darwin’s (1872/1958) natural selection in the evolutionary history of a species (phylogeny). In response to the question, “Why do giraffes have long necks?” Baum (1994) gave this excellent description of natural selection:

Darwin’s great contribution was to see that a relatively simple mechanism could help explain why phylogeny followed the particular course it did. The explanation about giraffes’ necks requires reference to the births, lives, and deaths of countless giraffes and giraffe ancestors over many millions of years. . . . Within any population of organisms, individuals vary. They vary partly because of environmental factors (e.g., nutrition), and also because of genetic inheritance. Among the giraffe ancestors that lived in what is now the Serengeti Plain, for instance, variation in genes meant that some had shorter necks and some had longer necks. As the climate gradually changed, however, new, taller types of vegetation became more frequent. The giraffe ancestors that had longer necks, being able to reach higher, got a little more to eat, on the average. As a result, they were a little healthier, resisted disease a little better, evaded predators a little better—on the average. Any one individual with a longer neck may have died without offspring, but on the average longer-necked individuals had more offspring, which tended on the average to survive a little better and produce more offspring. As longer necks became more frequent, new genetic combinations occurred, with the result that some offspring had still longer necks than those before, and they did still better. As the longer-necked giraffes continued to out-reproduce the shorter-necked ones, the average neck length of the whole population grew. (p. 52)

Just as natural selection requires a population of individual organisms with varied physical features (e.g., giraffes with necks of different lengths), operant selection by consequences requires variation in behavior. Those behaviors that produce the most favorable outcomes are selected and “survive,” which leads to a more adaptive repertoire. Natural selection has endowed humans with an initial population of uncommitted behavior (e.g., babies babbling and moving their limbs about) that is highly malleable and susceptible to the influence of the consequences that follow it. As Glenn (2004) noted,

By outfitting humans with a largely uncommitted behavioral repertoire, natural selection gave our species a long leash for local behavioral adaptations. But the uncommitted repertoire of humans would be lethal without the . . . susceptibility of human behavior to operant selection. Although this behavioral characteristic is shared by many species, humans appear to be most exquisitely sensitive to behavioral contingencies of selection (Schwartz, 1974). (p. 139)
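The selection logic described above can be sketched as a toy simulation. This is an illustration only, not drawn from the text: the “mobile” criterion, the angles, and every numeric parameter are invented for the example. A repertoire of varied responses is emitted; variants that produce a given environmental effect are retained, and the reinforced variants gradually come to dominate the repertoire.

```python
import random

# Toy model of operant selection by consequences (illustrative only; all
# numbers are invented). Each "response" is an arm-swing angle, and swings
# that land near the mobile produce movement and music (reinforcement).

MOBILE_AT, TOLERANCE = 45.0, 20.0

def produces_effect(angle):
    """True if this response variant contacts the reinforcer."""
    return abs(angle - MOBILE_AT) <= TOLERANCE

def simulate(generations=500, seed=1):
    rng = random.Random(seed)
    # An initial, uncommitted repertoire: movements spread across 0-180 degrees.
    repertoire = [rng.uniform(0.0, 180.0) for _ in range(50)]
    for _ in range(generations):
        emitted = rng.choice(repertoire)
        variant = emitted + rng.gauss(0.0, 5.0)   # new responses vary slightly
        if produces_effect(variant):
            repertoire.append(variant)            # reinforced variants "survive"
            repertoire.pop(0)                     # older responses drop out
    return repertoire

final = simulate()
share = sum(produces_effect(r) for r in final) / len(final)
print(f"share of repertoire that contacts reinforcement: {share:.2f}")
```

As in the giraffe example, no individual response is planned; variation plus differential consequences is enough to reshape the whole population of responses.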


Table 3   Comparing and Contrasting Defining Features and Key Characteristics of Respondent and Operant Behavior

Definition
  Respondent behavior: Behavior elicited by antecedent stimuli.
  Operant behavior: Behavior selected by its consequences.

Basic unit
  Respondent behavior: Reflex: an antecedent stimulus elicits a particular response (S–R).
  Operant behavior: Operant response class: a group of responses all of which produce the same effect on the environment; described by the three-term contingency relation of antecedent stimulus conditions, behavior, and consequence (A–B–C).

Examples
  Respondent behavior: Newborn’s grasping and suckling to touch; pupil constriction to bright light; cough/gag to irritation in throat; salivation at smell of food; withdrawing hand from painful stimulus; sexual arousal to stimulation.
  Operant behavior: Talking, walking, playing the piano, riding a bike, counting change, baking a pie, hitting a curveball, laughing at a joke, thinking about a grandparent, reading this book.

Body parts (effectors) that most often produce the response (not a defining feature)
  Respondent behavior: Primarily smooth muscles and glands (adrenaline squirt); sometimes striated (skeletal) muscles (e.g., knee-jerk to tap just below patella).
  Operant behavior: Primarily striated (skeletal) muscles; sometimes smooth muscles and glands.

Function or usefulness for individual organism
  Respondent behavior: Maintains internal economy of the organism; provides a set of “ready-made” survival responses the organism would not have time to learn.
  Operant behavior: Enables effective interaction and adaptation in an ever-changing environment that could not be anticipated by evolution.

Function or usefulness for species
  Respondent behavior: Promotes continuation of species indirectly (protective reflexes help individuals survive to reproductive age) and directly (reflexes related to reproduction).
  Operant behavior: Individuals whose behavior is most sensitive to consequences are more likely to survive and reproduce.

Conditioning process
  Respondent behavior: Respondent (also called classical or Pavlovian) conditioning: through a stimulus–stimulus pairing procedure in which a neutral stimulus (NS) is presented just prior to or simultaneous with an unconditioned (US) or conditioned (CS) eliciting stimulus, the NS becomes a CS that elicits the response and a conditioned reflex is created. (See Figure 1.)
  Operant behavior: Operant conditioning: some stimulus changes immediately following a response increase (reinforcement) or decrease (punishment) the future frequency of similar responses under similar conditions. Previously neutral stimulus changes become conditioned reinforcers or punishers as a result of stimulus–stimulus pairing with other reinforcers or punishers.

Repertoire limits
  Respondent behavior: Topography and function of respondents determined by natural evolution of the species (phylogeny). All biologically intact members of a species possess the same set of unconditioned reflexes. Although new forms of respondent behavior are not learned, an infinite number of conditioned reflexes may emerge in an individual’s repertoire depending on the stimulus–stimulus pairing he has experienced (ontogeny).
  Operant behavior: Topography and function of each person’s repertoire of operant behaviors are selected by consequences during the individual’s lifetime (ontogeny). New and more complex operant response classes can emerge. Response products of some human operants (e.g., airplanes) enable some behaviors not possible by anatomical structure alone (e.g., flying).

Operant Conditioning

Operant conditioning may be seen everywhere in the multifarious activities of human beings from birth until death. . . . It is present in our most delicate discriminations and our subtlest skills; in our earliest crude habits and the highest refinements of creative thought.

—Keller and Schoenfeld (1950, p. 64)

Operant conditioning refers to the process and selective effects of consequences on behavior.9 From an operant conditioning perspective a functional consequence is a stimulus change that follows a given behavior in a relatively immediate temporal sequence and alters the frequency of that type of behavior in the future. “In operant conditioning we ‘strengthen’ an operant in the sense of making a response more probable or, in actual fact, more frequent” (Skinner, 1953, p. 65). If the movement and sounds produced by the baby’s batting at the mobile with her hands increase the frequency of hand movements in the direction of the toy, operant conditioning has occurred.

9. Unless otherwise noted, the term behavior will refer to operant behavior throughout the remainder of the text.

When operant conditioning consists of an increase in response frequency, reinforcement has taken place, and the consequence responsible, in this case the movement and sound of the mobile, would be called a reinforcer.10

Although operant conditioning is used most often to refer to the “strengthening” effects of reinforcement, as Skinner described earlier, it also encompasses the principle of punishment. If the mobile’s movement and musical sounds resulted in a decrease in the baby’s frequency of moving it with her hands, punishment has occurred, and the mobile’s movement and sound would be called punishers. Before we examine the principles of reinforcement and punishment further, it is important to identify several important qualifications concerning how consequences affect behavior.

Consequences Can Affect Only Future Behavior

Consequences affect only future behavior. Specifically, a behavioral consequence affects the relative frequency with which similar responses will be emitted in the future under similar stimulus conditions. This point may seem too obvious to merit mention because it is both logically and physically impossible for a consequent event to affect a behavior that preceded it, when that behavior is over before the consequent event occurs. Nevertheless, the statement “behavior is controlled by its consequences” raises the question. (See Box 1 for further discussion of this apparent logical fallacy.)

Consequences Select Response Classes, Not Individual Responses

Responses emitted because of the effects of reinforcement of previous responses will differ slightly from the previous responses but will share enough common elements with the former responses to produce the same consequence.

Reinforcement strengthens responses which differ in topography from the response reinforced. When we reinforce pressing a lever, for example, or saying Hello, responses differing quite widely in topography grow more probable. This is a characteristic of behavior which has strong survival value . . . , since it would be very hard for an organism to acquire an effective repertoire if reinforcement strengthened only identical responses. (Skinner, 1969, p. 131)

10. Skinner (1966) used rate of responding as the fundamental datum for his research. To strengthen an operant is to make it more frequent. However, rate (or frequency) is not the only measurable and malleable dimension of behavior. As we will see in the chapters entitled “Selecting and Defining Target Behaviors” and “Measuring Behavior,” sometimes the duration, latency, magnitude, and/or topography of behavior changes are of pragmatic importance.

These topographically different, but functionally similar, responses comprise an operant response class. Indeed, “an operant is a class of acts all of which have the same environmental effect” (Baum, 1994, p. 75). It is the response class that is strengthened or weakened by operant conditioning. The concept of response class is “implied when it is said that reinforcement increases the future frequency of the type of behavior that immediately preceded the reinforcement” (Michael, 2004, p. 9). And the concept of response class is a key to the development and elaboration of new behavior.

If consequences (or natural evolution) selected only a very narrow range of responses (or genotypes), the effect would “tend toward uniformity and a perfection of sorts” (Moxley, 2004, p. 110) that would place the behavior (or species) at risk of extinction should the environment change. For example, if the mobile’s movement and sound reinforced only arm and hand movements that fell within an exact and narrow range of motion and no similar movements survived, the baby would be unable to contact that reinforcement if one day her mother mounted the mobile in a different location above the crib.

Immediate Consequences Have the Greatest Effect

Behavior is most sensitive to stimulus changes that occur immediately after, or within a few seconds of, the response.

It is essential to emphasize the importance of the immediacy of reinforcement. Events that are delayed more than a few seconds after the response do not directly increase its future frequency. When human behavior is apparently affected by long-delayed consequences, the change is accomplished by virtue of the human’s complex social and verbal history, and should not be thought of as an instance of the simple strengthening of behavior by reinforcement. . . . [As with reinforcement,] the longer the time delay between the occurrence of the response and the occurrence of the stimulus change (between R and SP), the less effective the punishment will be in changing the relevant response frequency, but not


Box 1   When the Phone Rings: A Dialogue about Stimulus Control

The professor was ready to move on to his next point, but a raised hand in the front row caught his attention.

Professor: Yes?

Student: You say that operant behavior, like talking, writing, running, reading, driving a car, most everything we do—you say all of those behaviors are controlled by their consequences, by things that happen after the response was emitted?

Professor: Yes, I said that. Yes.

Student: Well, I have a hard time with that. When my telephone rings and I pick up the receiver, that’s an operant response, right? I mean, answering the phone when it rings certainly didn’t evolve genetically as a reflex to help our species survive. So, we are talking about operant behavior, correct?

Professor: Correct.

Student: All right then. How can we say that my picking up my telephone is controlled by its consequence? I pick up the phone because it is ringing. So does everybody else. Ringing controls the response. And ringing can’t be a consequence because it comes before the response.

The professor hesitated with his reply just long enough for the student to believe himself the hero, nailing a professor for pontificating about some theoretical concept with little or no relevance to the everyday real world. Simultaneously sensing victory, other students began to pile on with their comments.

Another Student: How about stepping on the brake when you see a stop sign? The sign controls the braking response, and that’s not a consequence either.

A Student from the Back of the Room: And take a common classroom example. When a kid sees the problem 2 + 2 on his worksheet and he writes 4, the response of writing 4 has to be controlled by the written problem itself. Otherwise, how could anyone learn the correct answers to any question or problem?

Most of the Class: Yah, that’s right!

Professor: (with a wry smile) All of you are correct. . . . So too am I.

Someone Else in the Class: What do you mean?

Professor: That was exactly my next point, and I was hoping you would pick up on it. (The professor smiled a thank you at the student who had started the discussion and went on.) All around us, every day, we are exposed to thousands of changing stimulus conditions. All of the situations you’ve described are excellent examples of what behavior analysts call stimulus control. When the frequency of a given behavior is higher in the presence of a given stimulus than when that stimulus is absent, we say that stimulus control is at work. Stimulus control is a very important and useful principle in behavior analysis, and it will be the subject of much discussion this semester.

But, and here’s the important point: A discriminative stimulus, the antecedent event that comes before the response of interest, acquires its ability to control a particular response class because it has been associated with certain consequences in the past. So it is not just the sound of the phone’s ring that causes you to pick up the receiver. It is the fact that in the past answering the phone when it was ringing was followed by a person’s voice. It’s that person talking to you, the consequence of picking up the receiver, that really controlled the behavior in the first place, but you pick up the phone only when you hear it ringing. Why? Because you have learned that there’s someone on the other end only when the phone’s ringing. So we can still speak of consequences as having the ultimate control in terms of controlling operant behavior, but by being paired with differential consequences, antecedent stimuli can indicate what kind of consequence is likely. This concept is called the three-term contingency, and its understanding, analysis, and manipulation is central to applied behavior analysis.
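The professor’s account (a discriminative stimulus acquires control because responding in its presence has been differentially reinforced) can be sketched as a minimal learning simulation. This is an illustration only; the update rule, the probabilities, and all numeric values are invented for the example and are not drawn from the text.

```python
import random

# Minimal sketch of a three-term contingency (antecedent-behavior-consequence).
# Hypothetical setup: answering the phone is reinforced by a voice only when
# the phone is ringing; answering a silent phone goes unreinforced (extinction).

def train(trials=2000, step=0.1, seed=0):
    rng = random.Random(seed)
    # Probability of emitting the pick-up response under each antecedent.
    p_pickup = {"ringing": 0.5, "silent": 0.5}
    for _ in range(trials):
        antecedent = rng.choice(["ringing", "silent"])     # A
        if rng.random() < p_pickup[antecedent]:            # B: response emitted
            voice = (antecedent == "ringing")              # C: reinforcer or not
            target = 1.0 if voice else 0.0
            # Reinforcement strengthens, and extinction weakens, responding
            # under the antecedent condition in effect when the response occurred.
            p_pickup[antecedent] += step * (target - p_pickup[antecedent])
    return p_pickup

p = train()
print(f"ringing: {p['ringing']:.2f}  silent: {p['silent']:.2f}")
```

The simulated outcome matches the dialogue: responding becomes frequent in the presence of the ring and rare in its absence, even though it is the consequence, not the ring itself, that does the selecting.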


much is known about upper limits. (Michael, 2004, pp. 110, 36, emphasis in original, words in brackets added)

Consequences Select Any Behavior

Reinforcement and punishment are “equal opportunity” selectors. No logical or healthy or (in the long run) adaptive connection between a behavior and the consequence that functions to strengthen or weaken it is necessary. Any behavior that immediately precedes reinforcement (or punishment) will be increased (or decreased).

It is the temporal relation between behavior and consequence that is functional, not the topographical or logical ones. “So far as the organism is concerned, the only important property of the contingency is temporal. The reinforcer simply follows the response. How this is brought about does not matter” (Skinner, 1953, p. 85, emphasis in original). The arbitrary nature of which behaviors are reinforced (or punished) in operant conditioning is exemplified by the appearance of idiosyncratic behaviors that have no apparent purpose or function. An example is the superstitious routine of a poker player who taps and arranges his cards in a peculiar fashion because similar movements in the past were followed by winning hands.

Operant Conditioning Occurs Automatically

Operant conditioning does not require a person’s awareness. “A reinforcing connection need not be obvious to the individual [whose behavior is] reinforced” (Skinner, 1953, p. 75, words in brackets added). This statement refers to the automaticity of reinforcement; that is, behavior is modified by its consequences regardless of whether the individual is aware that she is being reinforced.11 A person does not have to understand or verbalize the relation between her behavior and a consequence, or even know that a consequence has occurred, for reinforcement to “work.”

Reinforcement

Reinforcement is the most important principle of behavior and a key element of most behavior change programs designed by behavior analysts (Flora, 2004; Northup, Vollmer, & Serrett, 1993). If a behavior is followed closely in time by a stimulus event and as a result the future frequency of that type of behavior increases in similar conditions, reinforcement has taken place.12

11. Automaticity of reinforcement is a different concept from that of automatic reinforcement, which refers to responses producing their “own” reinforcement (e.g., scratching an insect bite). Automatic reinforcement is described in the chapter entitled “Positive Reinforcement.”

12. The basic effect of reinforcement is often described as increasing the probability or strength of the behavior, and at times we use these phrases also. In most instances, however, we use frequency when referring to the basic effect of operant conditioning, following Michael’s (1995) rationale: “I use frequency to refer to number of responses per unit time, or number of response occurrences relative to the number of opportunities for a response. In this way I can avoid such terms as probability, likelihood and strength when referring to behavior. The controlling variables for these terms are problematic, and because of this, their use encourages a language of intervening variables, or an implied reference to something other than an observable aspect of behavior” (p. 274).

13. Malott and Trojan Suarez (2004) referred to these two operations as “stimulus addition” and “stimulus subtraction.”

Sometimes the delivery of just one reinforcer results in significant behavior change, although most often several responses must be followed by reinforcement before significant conditioning will occur.

Most stimulus changes that function as reinforcers can be described operationally as either (a) a new stimulus added to the environment (or increased in intensity), or (b) an already present stimulus removed from the environment (or reduced in intensity).13 These two operations provide for two forms of reinforcement, called positive and negative (see Figure 2).

Positive reinforcement occurs when a behavior is followed immediately by the presentation of a stimulus and, as a result, occurs more often in the future. Our baby’s increased frequency of batting the mobile with her hands, when doing so produces movement and music, is an example of positive reinforcement. Likewise, a child’s independent play is reinforced when it increases as a result of his parent’s giving praise and attention when he plays.

When the frequency of a behavior increases because past responses have resulted in the withdrawal or termination of a stimulus, the operation is called negative reinforcement. Skinner (1953) used the term aversive stimulus to refer to, among other things, stimulus conditions whose termination functioned as reinforcement. Let us assume now that a parent programs the mobile to automatically play music for a period of time. Let us also assume that if the baby bats the mobile with hands or feet, the music immediately stops for a few seconds. If the baby bats the mobile more frequently when doing so terminates the music, negative reinforcement is at work, and the music can be called aversive.

Negative reinforcement is characterized by escape or avoidance contingencies. The baby escaped the music by striking the mobile with her hand. A person who jumps out of the shower when water suddenly becomes too hot escapes the overly hot water. Likewise, when the frequency of a student’s disruptive behavior increases as a result of being sent to the principal’s office, negative reinforcement has occurred. By acting out, the misbehaving student escapes (or avoids altogether, depending on the timing of his misbehavior) the aversive (to him) classroom activity.

Figure 2   Positive and negative reinforcement and positive and negative punishment are defined by the type of stimulus change operation that immediately follows a behavior and the effect that operation has on the future frequency of that type of behavior. [The figure is a two-by-two matrix. Columns (type of stimulus change): present or increase intensity of stimulus; withdraw or decrease intensity of stimulus. Rows (effect on future frequency of behavior): increase (positive reinforcement; negative reinforcement) and decrease (positive punishment; negative punishment).]

The concept of negative reinforcement has confused many students of behavior analysis. Much of the confusion can be traced to the inconsistent early history and development of the term and to psychology and education textbooks and professors who have used the term inaccurately.14 The most common mistake is equating negative reinforcement with punishment. To help avoid the error, Michael (2004) suggested the following:

Think about how you would respond if someone asked you (1) whether or not you like negative reinforcement; also if you were asked (2) which you prefer, positive or negative reinforcement. Your answer to the first question should be that you do indeed like negative reinforcement, which consists of the removal or termination of an aversive condition that is already present. The term negative reinforcement refers only to the termination of the stimulus. In a laboratory procedure the stimulus must, of course, be turned on and then its termination can be made contingent upon the critical response. No one wants an aversive stimulus turned on, but once it is on, its termination is usually desirable. Your answer to the second question should be that you cannot choose without knowing the specifics of the positive and negative reinforcement involved. The common error is to choose positive reinforcement, but removal of a very severe pain would certainly be preferred over the presentation of a small monetary reward or an edible, unless the food deprivation was very severe. (p. 32, italics and bold type in original)

14. For examples and discussions of the implications of inaccurate representations of principles of behavior and behaviorism in psychology and education textbooks, see Cameron (2005), Cooke (1984), Heward (2005), Heward and Cooper (1992), and Todd and Morris (1983, 1992).

Remembering that the term reinforcement always means an increase in response rate and that the modifiers positive and negative describe the type of stimulus change operation that best characterizes the consequence (i.e., adding or withdrawing a stimulus) should facilitate the discrimination of the principles and application of positive and negative reinforcement.

After a behavior has been established with reinforcement, it need not be reinforced each time it occurs. Many behaviors are maintained at high levels by schedules of intermittent reinforcement. However, if reinforcement is withheld for all members of a previously reinforced response class, a procedure based on the principle of extinction, the frequency of the behavior will gradually decrease to its prereinforcement level or cease to occur altogether.

Punishment

Punishment, like reinforcement, is defined functionally. When a behavior is followed by a stimulus change that decreases the future frequency of that type of behavior in similar conditions, punishment has taken place. Also, like reinforcement, punishment can be accomplished by either of two types of stimulus change operations. (See the bottom two boxes of Figure 2.)


Although most behavior analysts support the definition of punishment as a consequence that decreases the future frequency of the behavior it follows (Azrin & Holz, 1966), a wide variety of terms have been used in the literature to refer to the two types of consequence operations that fit the definition. For example, the Behavior Analyst Certification Board (BACB, 2005) and textbook authors (e.g., Miltenberger, 2004) use the terms positive punishment and negative punishment, paralleling the terms positive reinforcement and negative reinforcement. As with reinforcement, the modifiers positive and negative used with punishment connote neither the intention nor the desirability of the behavior change produced; they only specify how the stimulus change that served as the punishing consequence was affected—whether it was presented (positive) or withdrawn (negative).

Although the terms positive punishment and negative punishment are consistent with the terms used to differentiate the two reinforcement operations, they are less clear than the descriptive terms for the two punishment operations—punishment by contingent stimulation and punishment by contingent withdrawal of a positive reinforcer—first introduced by Whaley and Malott (1971) in their classic text, Elementary Principles of Behavior. These terms highlight the procedural difference between the two forms of punishment. Differences in procedure as well as in the type of stimulus change involved—reinforcer or punisher—hold important implications for application when a punishment-based behavior-reduction technique is indicated. Foxx (1982) introduced the terms Type I punishment and Type II punishment for punishment by contingent stimulation and punishment by contingent withdrawal of a stimulus, respectively. Many behavior analysts and teachers continue to use Foxx’s terminology today. Other terms such as penalty principle have also been used to refer to negative punishment (Malott & Trojan Suarez, 2004). However, it should be remembered that these terms are simply brief substitutes for the more complete terminology introduced by Whaley and Malott.

As with positive and negative reinforcement, numerous behavior change procedures incorporate the two basic punishment operations. Although some textbooks reserve the term punishment for procedures involving positive (or Type I) punishment and describe time-out from positive reinforcement and response cost as separate “principles” or types of punishment, both methods for reducing behavior are derivatives of negative (or Type II) punishment. Therefore, time-out and response cost should be considered behavior change tactics and not basic principles of behavior.

15. Michael (1975) and Baron and Galizio (2005) present cogent arguments for why positive and negative reinforcement are examples of the same fundamental operant relation. This issue is discussed further in the chapter entitled “Negative Reinforcement.”

16. Some authors use the modifiers primary or unlearned to identify unconditioned reinforcers and unconditioned punishers.

Reinforcement and punishment can each be accomplished by either of two different operations, depending on whether the consequence consists of presenting a new stimulus (or increasing the intensity of a current stimulus) or withdrawing (or decreasing the intensity of) a currently present stimulus in the environment (Morse & Kelleher, 1977; Skinner, 1953). Some behavior analysts argue that from a functional and theoretical standpoint only two principles are required to describe the basic effects of behavioral consequences—reinforcement and punishment.15 However, from a procedural perspective (a critical factor for the applied behavior analyst), a number of behavior change tactics are derived from each of the four operations represented in Figure 2.
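The two-by-two scheme just described can be expressed as a small classification function. This sketch uses the text’s own terminology, but the function and its argument names are invented for illustration:

```python
# The modifier (positive/negative) names the stimulus-change operation;
# reinforcement/punishment names the effect on future response frequency.

def classify_consequence(stimulus_change, frequency_effect):
    """stimulus_change: 'presented' (new stimulus, or intensity increased)
                        or 'withdrawn' (stimulus removed, or intensity decreased).
    frequency_effect:  'increase' or 'decrease' in the behavior's future frequency."""
    modifier = {"presented": "positive", "withdrawn": "negative"}[stimulus_change]
    process = {"increase": "reinforcement", "decrease": "punishment"}[frequency_effect]
    return f"{modifier} {process}"

# The four operations of Figure 2:
print(classify_consequence("presented", "increase"))   # positive reinforcement
print(classify_consequence("withdrawn", "increase"))   # negative reinforcement
print(classify_consequence("presented", "decrease"))   # positive punishment
print(classify_consequence("withdrawn", "decrease"))   # negative punishment
```

Note that the classification depends only on the operation and its measured effect on future frequency, never on the intention behind the consequence or its desirability, which is exactly the point made about the modifiers positive and negative above.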

Most behavior change procedures involve several principles of behavior (see Box 2). It is critical for the behavior analyst to have a solid conceptual understanding of the basic principles of behavior. Such knowledge permits better analysis of current controlling variables as well as more effective design and assessment of behavioral interventions that recognize the role various principles may be playing in a given situation.

Stimulus Changes That Function as Reinforcers and Punishers

Because operant conditioning involves the consequences of behavior, it follows that anyone interested in using operant conditioning to change behavior must identify and control the occurrence of relevant consequences. For the applied behavior analyst, therefore, an important question becomes, What kinds of stimulus changes function as reinforcers and punishers?

Unconditioned Reinforcement and Punishment

Some stimulus changes function as reinforcement even though the organism has had no particular learning history with those stimuli. A stimulus change that can increase the future frequency of behavior without prior pairing with any other form of reinforcement is called an unconditioned reinforcer.16 For example, stimuli such as food, water, and sexual stimulation that support the biological maintenance of the organism and survival of the species often function as unconditioned reinforcers. The words can and often in the two previous sentences recognize the important qualification that the momentary effectiveness of an unconditioned reinforcer is a function of current motivating operations. For example, a certain level of food deprivation is necessary for the presentation of food to function as a reinforcer. However, food is unlikely to function as reinforcement for a person who has recently eaten a lot of food (a condition of satiation).

Box 2: Distinguishing between Principles of Behavior and Behavior Change Tactics

A principle of behavior describes a basic behavior–environment relation that has been demonstrated repeatedly in hundreds, even thousands, of experiments. A principle of behavior describes a functional relation between behavior and one or more of its controlling variables (in the form of y = f(x)) that has thorough generality across individual organisms, species, settings, and behaviors. A principle of behavior is an empirical generalization inferred from many experiments. Principles describe how behavior works. Some examples of principles are reinforcement, punishment, and extinction.

In general, a behavior change tactic is a method for operationalizing, or putting into practice, the knowledge provided by one or more principles of behavior. A behavior change tactic is a research-based, technologically consistent method for changing behavior that has been derived from one or more basic principles of behavior and that possesses sufficient generality across subjects, settings, and/or behaviors to warrant its codification and dissemination. Behavior change tactics constitute the technological aspect of applied behavior analysis. Examples of behavior change procedures include backward chaining, differential reinforcement of other behavior, shaping, response cost, and time-out.

So, principles describe how behavior works, and behavior change tactics are how applied behavior analysts put the principles to work to help people learn and use socially significant behaviors. There are relatively few principles of behavior, but there are many derivative behavior change tactics. To illustrate further, reinforcement is a behavioral principle because it describes a lawful relation between behavior, an immediate consequence, and an increased frequency of the behavior in the future under similar conditions. However, the issuance of checkmarks in a token economy or the use of contingent social praise are behavior change tactics derived from the principle of reinforcement. To cite another example, punishment is a principle of behavior because it describes the established relation between the presentation of a consequence and the decreased frequency of similar behavior in the future. Response cost and time-out, on the other hand, are methods for changing behavior; they are two different tactics used by practitioners to operationalize the principle of punishment.

Similarly, an unconditioned punisher is a stimulus change that can decrease the future frequency of any behavior that precedes it without prior pairing with any other form of punishment. Unconditioned punishers include painful stimulation that can cause tissue damage (i.e., harm body cells). However, virtually any stimulus to which an organism's receptors are sensitive—light, sound, and temperature, to name a few—can be intensified to the point that its delivery will suppress behavior even though the stimulus is below levels that actually cause tissue damage (Bijou & Baer, 1965).

Events that function as unconditioned reinforcers and punishers are the product of the natural evolution of the species (phylogeny). Malott, Tillema, and Glenn (1978) described the natural selection of "rewards" and "aversives" as follows:17

Some rewards and aversives control our actions because of the way our species evolved; we call these unlearned rewards or aversives. We inherit a biological structure that causes some stimuli to be rewarding or aversive. This structure evolved because rewards helped our ancestors survive, while aversives hurt their survival. Some of these unlearned rewards, such as food and fluid, help us survive by strengthening our body cells. Others help our species survive by causing us to produce and care for our offspring—these stimuli include the rewarding stimulation resulting from copulation and nursing. And many unlearned aversives harm our survival by damaging our body cells; such aversives include burns, cuts and bruises. (p. 9)

17 In addition to using aversive stimulus as a synonym for a negative reinforcer, Skinner (1953) also used the term to refer to stimuli whose onset or presentation functions as punishment, a practice continued by many behavior analysts (e.g., Alberto & Troutman, 2006; Malott & Trojan Suarez, 2004; Miltenberger, 2004). The term aversive stimulus (and aversive control when speaking of behavior change techniques involving such stimuli) is used widely in the behavior analysis literature to refer to one or more of three different behavioral functions: an aversive stimulus may be (a) a negative reinforcer if its termination increases behavior, (b) a punisher if its presentation decreases behavior, and/or (c) a motivating operation if its presentation increases the current frequency of behaviors that have terminated it in the past (see the chapter entitled "Motivating Operations"). When speaking or writing technically, behavior analysts must be careful that their use of omnibus terms such as aversive does not imply unintended functions (Michael, 1995).

While unconditioned reinforcers and punishers are critically important and necessary for survival, relatively few behaviors that comprise the everyday routines of people as they go about working, playing, and socializing are directly controlled by such events. For example, although going to work each day earns the money that buys food, eating that food is far too delayed for it to exert any direct operant control over the behavior that earned it. Remember: Behavior is most affected by its immediate consequences.

Conditioned Reinforcers and Punishers

Stimulus events or conditions that are present or that occur just before or simultaneous with the occurrence of other reinforcers (or punishers) may acquire the ability to reinforce (or punish) behavior when they later occur on their own as consequences. Called conditioned reinforcers and conditioned punishers, these stimulus changes function as reinforcers and punishers only because of their prior pairing with other reinforcers or punishers.18 The stimulus–stimulus pairing procedure responsible for the creation of conditioned reinforcers or punishers is the same as that used for respondent conditioning except that the "outcome is a stimulus that functions as a reinforcer [or punisher] rather than a stimulus that will elicit a response" (Michael, 2004, p. 66, words in brackets added).

Conditioned reinforcers and punishers are not related to any biological need or anatomical structure; their ability to modify behavior is a result of each person's unique history of interactions with his or her environment (ontogeny). Because no two people experience the world in exactly the same way, the roster of events that can serve as conditioned reinforcers and punishers at any particular time (given a relevant motivating operation) is idiosyncratic to each individual and always changing. On the other hand, to the extent that two people have had similar experiences (e.g., schooling, profession, the culture in general), they are likely to be affected in similar ways by many similar events. Social praise and attention are examples of widely effective conditioned reinforcers in our culture. Because social attention and approval (as well as disapproval) are often paired with so many other reinforcers (and punishers), they exert powerful control over human behavior and will be featured when specific tactics for changing behavior are presented.

18 Some authors use the modifiers secondary or learned to identify conditioned reinforcers and conditioned punishers.

Because people who live in a common culture share similar histories, it is not unreasonable for a practitioner to search for potential reinforcers and punishers for a given client among classes of stimuli that have proven effective with other similar clients. However, in an effort to help the reader establish a fundamental understanding of the nature of operant conditioning, we have purposely avoided presenting a list of stimuli that may function as reinforcers and punishers. Morse and Kelleher (1977) made this important point very well.

Reinforcers and punishers, as environmental "things," appear to have a greater reality than orderly temporal changes in ongoing behavior. Such a view is deceptive. There is no concept that predicts reliably when events will be reinforcers or punishers; the defining characteristics of reinforcers and punishers are how they change behavior [italics added]. Events that increase or decrease the subsequent occurrence of one response may not modify other responses in the same way.

In characterizing reinforcement as the presentation of a reinforcer contingent upon a response, the tendency is to emphasize the event and to ignore the importance of both the contingent relations and the antecedent and subsequent behavior. It is how [italics added] they change behavior that defines the terms reinforcer and punisher; thus it is the orderly change in behavior that is the key to these definitions. It is not [italics added] appropriate to presume that particular environmental events such as the presentation of food or electric shock are reinforcers or punishers until a change in the rate of responding has occurred when the event is scheduled in relation to specified responses.

A stimulus paired with a reinforcer is said to have become a conditioned reinforcer, but actually it is the behaving subject that has changed, not the stimulus. . . . It is, of course, useful shorthand to speak of conditioned reinforcers . . . just as it is convenient to speak about a reinforcer rather than speaking about an event that has followed an instance of a specific response and resulted in a subsequent increase in the occurrence of similar responses. The latter may be cumbersome, but it has the advantage of empirical referents. Because many different responses can be shaped by consequent events, and because a given consequent event is often effective in modifying the behavior of different individuals, it becomes common practice to refer to reinforcers without specifying the behavior that is being modified. These common practices have unfortunate consequences. They lead to erroneous views that responses are arbitrary and that the reinforcing or punishing effect of an event is a specific property of the event itself. (pp. 176–177, 180)

The point made by Morse and Kelleher (1977) is of paramount importance to understanding behavior–environment relations. Reinforcement and punishment are not simply the products of certain stimulus events, which are then called reinforcers and punishers without reference to a given behavior and environmental conditions. There are no inherent or standard physical properties of stimuli that determine their permanent status as reinforcers and punishers. In fact, a stimulus can function as a positive reinforcer under one set of conditions and a negative reinforcer under different conditions. Just as positive reinforcers are not defined with terms such as pleasant or satisfying, aversive stimuli should not be defined with terms such as annoying or unpleasant. The terms reinforcer and punisher should not be used on the basis of a stimulus event's assumed effect on behavior or on any inherent property of the stimulus event itself. Morse and Kelleher (1977) continued:

When the borders of the table are designated in terms of stimulus classes (positive–negative; pleasant–noxious) and experimental operations (stimulus presentation–stimulus withdrawal), the cells of the table are, by definition, varieties of reinforcement and punishment. One problem is that the processes indicated in the cells have already been assumed in categorizing stimuli as positive or negative; a second is that there is a tacit assumption that the presentation or withdrawal of a particular stimulus will have an invariant effect. These relations are clearer if empirical operations are used to designate the border conditions. . . . The characterization of behavioral processes depends upon empirical observations. The same stimulus event, under different conditions, may increase behavior or decrease behavior. In the former case the process is called reinforcement and in the latter the process is called punishment. (p. 180)

At the risk of redundancy, we will state this important concept again. Reinforcers and punishers denote functional classes of stimulus events, the membership of which is not based on the physical nature of the stimulus changes or events themselves. Indeed, given a person's individual history and current motivational state, and the current environmental conditions, "any stimulus change can be a 'reinforcer' if the characteristics of the change, and the temporal relation of the change to the response under observation, are properly selected" (Schoenfeld, 1995, p. 184). Thus, the phrase "everything is relative" is thoroughly relevant to understanding functional behavior–environment relations.

The Discriminated Operant and Three-Term Contingency

We have discussed the role of consequences in influencing the future frequency of behavior. But operant conditioning does much more than establish a functional relation between behavior and its consequences. Operant conditioning also establishes functional relations between behavior and certain antecedent conditions.

In contrast to if-A-then-B formulations (such as S-R formulations), the AB-because-of-C formulation is a general statement that the relation between an event (B) and its context (A) is because of consequences (C). . . . Applied to Skinner's three-term contingency, the relation between (A) the setting and (B) behavior exists because of (C) consequences that occurred for previous AB (setting-behavior) relations. The idea [is] that reinforcement strengthens the setting-behavior relation rather than simply strengthening behavior. (Moxley, 2004, p. 111)

Reinforcement selects not just certain forms of behavior; it also selects the environmental conditions that in the future will evoke (increase) instances of the response class. A behavior that occurs more frequently under some antecedent conditions than it does in others is called a discriminated operant. Because a discriminated operant occurs at a higher frequency in the presence of a given stimulus than it does in the absence of that stimulus, it is said to be under stimulus control. Answering the phone, one of the everyday behaviors discussed by the professor and his students in Box 1, is a discriminated operant. The telephone's ring functions as a discriminative stimulus (SD) for answering the phone. We answer the phone when it is ringing, and we do not answer the phone when it is silent.

Just as reinforcers or punishers cannot be identified by their physical characteristics, stimuli possess no inherent dimensions or properties that enable them to function as discriminative stimuli. Operant conditioning brings behavior under the control of various properties or values of antecedent stimuli (e.g., size, shape, color, spatial relation to another stimulus), and what those features are cannot be determined a priori.

Any stimulus present when an operant is reinforced acquires control in the sense that the rate will be higher when it is present. Such a stimulus does not act as a goad; it does not elicit the response in the sense of forcing it to occur. It is simply an essential aspect of the occasion upon which a response is made and reinforced. The difference is made clear by calling it a discriminative stimulus (or SD). An adequate formulation of the interaction between an organism and its environment must always specify three things: (1) the occasion upon which a response occurs; (2) the response itself; and (3) the reinforcing consequences. The interrelationships among them are the "contingencies of reinforcement." (Skinner, 1969, p. 7)

The discriminated operant has its origin in the three-term contingency. The three-term contingency—antecedent, behavior, and consequence—is sometimes called the ABCs of behavior analysis. Figure 3 shows examples of three-term contingencies for positive reinforcement, negative reinforcement, positive punishment, and negative punishment.19 Most of what the science of behavior analysis has discovered about the prediction and control of human behavior involves the three-term contingency, which is "considered the basic unit of analysis in the analysis of operant behavior" (Glenn, Ellis, & Greenspoon, 1992, p. 1332).

Figure 3 Three-term contingencies illustrating reinforcement and punishment operations.

Antecedent Stimulus | Behavior | Consequence | Future Frequency of Behavior in Similar Conditions | Operation
"Name a carnivorous dinosaur." | "Tyrannosaurus Rex." | "Well done!" | Increases | Positive Reinforcement
Foul smell under kitchen sink | Take trash outside | Foul smell is gone | Increases | Negative Reinforcement
Icy road | Drive at normal speed | Crash into car ahead | Decreases | Positive Punishment
Popup box asks, "Warn when deleting unread messages?" | Click on "No" | Important e-mail message is lost | Decreases | Negative Punishment
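The two-by-two logic behind the four operations in Figure 3 (a stimulus is presented or withdrawn, and the behavior's future frequency increases or decreases) can be restated as a simple lookup. The following Python sketch is illustrative only; the function name and string labels are ours, not the authors', and, as the chapter stresses, classification is always empirical, made after the change in responding is observed:

```python
def classify_operation(stimulus_change: str, future_frequency: str) -> str:
    """Name the operation defined by a consequence's empirical effect.

    stimulus_change: "presented" or "withdrawn"
    future_frequency: "increases" or "decreases"
    """
    # The four cells of the reinforcement/punishment matrix.
    table = {
        ("presented", "increases"): "positive reinforcement",
        ("withdrawn", "increases"): "negative reinforcement",
        ("presented", "decreases"): "positive punishment",
        ("withdrawn", "decreases"): "negative punishment",
    }
    try:
        return table[(stimulus_change, future_frequency)]
    except KeyError:
        raise ValueError("unknown stimulus_change/future_frequency pair")

# Two rows of Figure 3, restated:
print(classify_operation("presented", "increases"))   # positive reinforcement
print(classify_operation("withdrawn", "decreases"))   # negative punishment
```

Note that the lookup is keyed on the observed change in future frequency, not on any assumed property of the stimulus itself, mirroring Morse and Kelleher's (1977) point that reinforcers and punishers are defined by how they change behavior.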

The term contingency appears in behavior analysis literature with several meanings signifying various types of temporal and functional relations between behavior and antecedent and consequent variables (Lattal, 1995; Lattal & Shahan, 1997; Vollmer & Hackenberg, 2001). Perhaps the most common connotation of contingency refers to the dependency of a particular consequence on the occurrence of the behavior. When a reinforcer (or punisher) is said to be contingent on a particular behavior, the behavior must be emitted for the consequence to occur. For example, after saying, "Name a carnivorous dinosaur," a teacher's "Well done!" depends on the student's response, "Tyrannosaurus Rex" (or another dinosaur of the same class).20

19 Contingency diagrams, such as those shown in Figure 3, are an effective way to illustrate temporal and functional relationships between behavior and various environmental events. See Mattaini (1995) for examples of other types of contingency diagrams and suggestions for using them to teach and learn about behavior analysis. State notation is another means for visualizing complex contingency relations and experimental procedures (Mechner, 1959; Michael & Shafer, 1995).

20 The phrase to make reinforcement contingent describes the behavior of the researcher or practitioner: delivering the reinforcer only after the target behavior has occurred.

The term contingency is also used in reference to the temporal contiguity of behavior and its consequences. As stated previously, behavior is selected by the consequences that immediately follow it, irrespective of whether those consequences were produced by or depended on the behavior. This is the meaning of contingency in Skinner's (1953) statement, "So far as the organism is concerned, the only important property of the contingency is temporal" (p. 85).

Recognizing the Complexity of Human Behavior

Behavior—human or otherwise—remains an extremely difficult subject matter.

—B. F. Skinner (1969, p. 114)

The experimental analysis of behavior has discovered a number of basic principles—statements about how behavior works as a function of environmental variables. These principles, several of which have been introduced in this chapter, have been demonstrated, verified, and replicated in hundreds and even thousands of experiments; they are scientific facts.21 Tactics for changing behavior derived from these principles have also been applied, in increasingly sophisticated and effective ways, to a wide range of human behaviors in natural settings. A summary of what has been learned from many of those applied behavior analyses comprises the majority of this book.

The systematic application of behavior analysis techniques sometimes produces behavior changes of great magnitude and speed, even for clients whose behavior had been unaffected by other forms of treatment and appeared intractable. When such a happy (but not rare) outcome occurs, the neophyte behavior analyst must resist the tendency to believe that we know more than we do about the prediction and control of human behavior. Applied behavior analysis is a young science that has yet to achieve anything near a complete understanding and technological control of human behavior.

A major challenge facing applied behavior analysis lies in dealing with the complexity of human behavior, especially in applied settings where laboratory controls are impossible, impractical, or unethical. Many of the factors that contribute to the complexity of behavior stem from three general sources: the complexity of the human repertoire, the complexity of controlling variables, and individual differences.

Complexity of the Human Repertoire

Humans are capable of learning an incredible range of behaviors. Response sequences, sometimes of no apparent logical organization, contribute to the complexity of behavior (Skinner, 1953). In a response chain, effects produced by one response influence the emission of other responses. Returning a winter coat to the attic leads to rediscovering a scrapbook of old family photographs, which evokes a phone call to Aunt Helen, which sets the occasion for finding her recipe for apple pie, and so on.

Verbal behavior may be the most significant contributor to the complexity of human behavior (Donahoe & Palmer, 1994; Michael, 2003; Palmer, 1991; Skinner, 1957). Not only is a problem generated when the difference between saying and doing is not recognized, but verbal behavior itself is often a controlling variable for many other verbal and nonverbal behaviors.

Operant learning does not always occur as a slow, gradual process. Sometimes new, complex repertoires appear quickly with little apparent direct conditioning (Epstein, 1991; Sidman, 1994). One type of rapid learning has been called contingency adduction, a process whereby a behavior that was initially selected and shaped under one set of conditions is recruited by a different set of contingencies and takes on a new function in the person's repertoire (Adronis, 1983; Layng & Adronis, 1984). Johnson and Layng (1992, 1994) described several examples of contingency adduction in which simple (component) skills (e.g., addition, subtraction, and multiplication facts, isolating and solving for X in a simple linear equation), when taught to fluency, combined without apparent instruction to form new complex (composite) patterns of behavior (e.g., factoring complex equations).

21 Like all scientific findings, these facts are subject to revision and even replacement should future research reveal better ones.

Intertwined lineages of different operants combine to form new complex operants (Glenn, 2004), which produce response products that in turn make possible the acquisition of behaviors beyond the spatial and mechanical restraints of anatomical structure.

In the human case, the range of possibilities may be infinite, especially because the products of operant behavior have become increasingly complex in the context of evolving cultural practices. For example, anatomical constraints prevented operant flying from emerging in a human repertoire only until airplanes were constructed as behavioral products. Natural selection's leash has been greatly relaxed in the ontogeny of operant units. (Glenn et al., 1992, p. 1332)

Complexity of Controlling Variables

Behavior is selected by its consequences. This megaprinciple of operant behavior sounds deceptively (and naively) simple. However, "Like other scientific principles, its simple form masks the complexity of the universe it describes" (Glenn, 2004, p. 134). The environment and its effects on behavior are complex.

Skinner (1957) noted that, "(1) the strength of a single response may be, and usually is, a function of more than one variable and (2) a single variable usually affects more than one response" (p. 227). Although Skinner was writing in reference to verbal behavior, multiple causes and multiple effects are characteristics of many behavior–environment relations. Behavioral covariation illustrates one type of multiple effect. For example, Sprague and Horner (1992) found that blocking the emission of one problem behavior decreased the frequency of that behavior but produced a collateral increase in other topographies of problem behaviors in the same functional class. As another example of multiple effects, the presentation of an aversive stimulus may, in addition to suppressing the future occurrences of the behavior it follows, elicit respondent behaviors and evoke escape and avoidance behaviors—three different effects from one event.

Many behaviors are the result of multiple causes. In a phenomenon called joint control (Lowenkron, 2004), two discriminative stimuli can combine to evoke a common response class. Concurrent contingencies can also combine to make a behavior more or less likely to occur in a given situation. Perhaps we finally return our neighbor's weed trimmer not just because he usually invites us in for a cup of coffee, but also because returning the tool reduces the "guilt" we are feeling for keeping it for 2 weeks.

Concurrent contingencies often vie for control of incompatible behaviors. We cannot watch "Baseball Tonight" and study (properly) for an upcoming exam. Although not a technical term in behavior analysis, algebraic summation is sometimes used to describe the effect of multiple, concurrent contingencies on behavior. The behavior that is emitted is thought to be the product of the competing contingencies "canceling portions of each other out" as in an equation in algebra.

Hierarchies of response classes within what was presumed to be a single response class may be under multiple controlling variables. For example, Richman, Wacker, Asmus, Casey, and Andelman (1999) found that one topography of aggressive behavior was maintained by one type of reinforcement contingency while another form of aggression was controlled by a different contingency.

All of these complex, concurrent, interrelated contingencies make it difficult for behavior analysts to identify and control relevant variables. It should not be surprising that the settings in which applied behavior analysts ply their trade are sometimes described as places where "reinforcement occurs in a noisy background" (Vollmer & Hackenberg, 2001, p. 251).

Consequently, as behavior analysts, we should recognize that meaningful behavior change might take time and many trials and errors as we work to understand the interrelationships and complexities of the controlling variables. Don Baer (1987) recognized that some of the larger problems that beset society (e.g., poverty, substance addiction, illiteracy), given our present level of technology, might be too difficult to solve. He identified three barriers to solving such complex problems:

(a) We are not empowered to solve these bigger remaining problems, (b) we have not yet made the analysis of how to empower ourselves to try them, and (c) we have not yet made the system-analytic task analyses that will prove crucial to solving those problems when we do empower ourselves sufficiently to try them. . . . In my experience, those projects that seem arduously long are arduous because (a) I do not have a strong interim reinforcer compared to those in the existing system for status quo and must wait for opportunities when weak control may operate, even so, or (b) I do not yet have a correct task analysis of the problem and must struggle through trials and errors. By contrast (c) when I have an effective interim reinforcer and I know the correct task analysis of this problem, long problems are simply those in which the task analysis requires a series of many behavior changes, perhaps in many people, and although each of them is relatively easy and quick, the series of them requires not so much effort as time, and so it is not arduous but merely tedious. (pp. 335, 336–337)

Individual Differences

You did not need to read this textbook to know that people often respond very differently to the same set of environmental conditions. The fact of individual differences is sometimes cited as evidence that principles of behavior based on environmental selection do not exist, at least not in a form that could provide the basis for a robust and reliable technology of behavior change. It is then argued that because people often respond differently to the same set of contingencies, control of behavior must come from within each person.

As each of us experiences varying contingencies of reinforcement (and punishment), some behaviors are strengthened (selected by the contingencies) and others are weakened. This is the nature of operant conditioning, which is to say, human nature. Because no two people ever experience the world in exactly the same way, each of us arrives at a given situation with a different history of reinforcement. The repertoire of behaviors each person brings to any situation has been selected, shaped, and maintained by his or her unique history of reinforcement. Each human's unique repertoire defines him or her as a person. We are what we do, and we do what we have learned to do. "He begins as an organism and becomes a person or self as he acquires a repertoire of behavior" (Skinner, 1974, p. 231).

Individual differences in responding to current stimulus conditions, then, do not need to be attributed to differences in internal traits or tendencies, but to the orderly result of different histories of reinforcement. The behavior analyst must also consider people's varying sensitivities to stimuli (e.g., hearing loss, visual impairment) and differences in response mechanisms (e.g., cerebral palsy) and design program components to ensure that all participants have maximum contact with relevant contingencies (Heward, 2006).

Additional Obstacles to Controlling Behavior in Applied Settings

Compounding the difficulty of tackling the complexity of human behavior in the “noisy” applied settings where people live, work, and play, applied behavior analysts are sometimes prevented from implementing an effective behavior change program due to logistical, financial, sociopolitical, legal, and/or ethical factors. Most applied behavior analysts work for agencies with limited resources, which may make the data collection required for a more complete analysis impossible. In addition, participants, parents, administrators, and even the general public may at times limit the behavior analyst’s options for effective intervention (e.g., “We don’t want students working for tokens”). Legal or ethical considerations may also preclude determining experimentally the controlling variables for an important behavior.

Each of these practical complexities combines with the behavioral and environmental complexities previously mentioned to make the applied behavior analysis of socially important behavior a challenging task. However, the task need not be overwhelming, and few tasks are as rewarding or as important for the betterment of humankind.

It is sometimes said that a scientific account of behavior will somehow diminish the quality or enjoyment of the human experience. For example, will our increasing knowledge of the variables responsible for creative behavior lessen the feelings evoked by a powerful painting or a beautiful symphony, or reduce our appreciation of the artists who produced them? We think not, and we encourage you, as you read and study the basic concepts introduced in this chapter and examined in more detail throughout the book, to consider Nevin’s (2005) explanation of how a scientific account of behavior adds immeasurably to the human experience:

At the end of Origin of Species (1859), Darwin invites us to contemplate a tangled bank, with its plants and its birds, its insects and its worms; to marvel at the complexity, diversity, and interdependence of its inhabitants; and to feel awe at the fact that all of it follows from the laws of reproduction, competition, and natural selection. Our delight in the tangled bank and our love for its inhabitants are not diminished by our knowledge of the laws of evolution; neither should our delight in the complex world of human activity and our love for its actors be diminished by our tentative but growing knowledge of the laws of behavior. (Tony Nevin, personal communication, December 19, 2005)

Summary

Behavior

1. In general, behavior is the activity of living organisms.

2. Technically, behavior is “that portion of an organism’s interaction with its environment that is characterized by detectable displacement in space through time of some part of the organism and that results in a measurable change in at least one aspect of the environment” (Johnston & Pennypacker, 1993a, p. 23).

3. The term behavior is usually used in reference to a larger set or class of responses that share certain topographical dimensions or functions.

4. Response refers to a specific instance of behavior.

5. Response topography refers to the physical shape or form of behavior.

6. A response class is a group of responses of varying topography, all of which produce the same effect on the environment.

7. Repertoire can refer to all of the behaviors a person can do or to a set of behaviors relevant to a particular setting or task.

Environment

8. Environment is the physical setting and circumstances in which the organism or referenced part of the organism exists.

9. Stimulus is “an energy change that affects an organism through its receptor cells” (Michael, 2004, p. 7).

10. The environment influences behavior primarily by stimulus change, not static stimulus conditions.

11. Stimulus events can be described formally (by their physical features), temporally (by when they occur), and functionally (by their effects on behavior).

12. A stimulus class is a group of stimuli that share specified common elements along formal, temporal, and/or functional dimensions.

13. Antecedent conditions or stimulus changes exist or occur prior to the behavior of interest.

14. Consequences are stimulus changes that follow a behavior of interest.

15. Stimulus changes can have one or both of two basic effects on behavior: (a) an immediate but temporary effect of increasing or decreasing the current frequency of the behavior, and/or (b) a delayed but relatively permanent effect in terms of the frequency of that type of behavior in the future.

Respondent Behavior

16. Respondent behavior is elicited by antecedent stimuli.

17. A reflex is a stimulus–response relation consisting of an antecedent stimulus and the respondent behavior it elicits (e.g., bright light–pupil contraction).

18. All healthy members of a given species are born with the same repertoire of unconditioned reflexes.

19. An unconditioned stimulus (e.g., food) and the respondent behavior it elicits (e.g., salivation) are called unconditioned reflexes.

20. Conditioned reflexes are the product of respondent conditioning: a stimulus–stimulus pairing procedure in which a neutral stimulus is presented with an unconditioned stimulus until the neutral stimulus becomes a conditioned stimulus that elicits the conditioned response.

21. Pairing a neutral stimulus with a conditioned stimulus can also produce a conditioned reflex—a process called higher order (or secondary) respondent conditioning.

22. Respondent extinction occurs when a conditioned stimulus is presented repeatedly without the unconditioned stimulus until the conditioned stimulus no longer elicits the conditioned response.

Operant Behavior

23. Operant behavior is selected by its consequences.

24. Unlike respondent behavior, whose topography and basic functions are predetermined, operant behavior can take a virtually unlimited range of forms.

25. Selection of behavior by consequences operates during the lifetime of the individual organism (ontogeny) and is a conceptual parallel to Darwin’s natural selection in the evolutionary history of a species (phylogeny).

26. Operant conditioning, which encompasses reinforcement and punishment, refers to the process and selective effects of consequences on behavior:

• Consequences can affect only future behavior.

• Consequences select response classes, not individual responses.

• Immediate consequences have the greatest effect.

• Consequences select any behavior that precedes them.

• Operant conditioning occurs automatically.

27. Most stimulus changes that function as reinforcers or punishers can be described as either (a) a new stimulus added to the environment, or (b) an already present stimulus removed from the environment.

28. Positive reinforcement occurs when a behavior is followed immediately by the presentation of a stimulus that increases the future frequency of the behavior.

29. Negative reinforcement occurs when a behavior is followed immediately by the withdrawal of a stimulus that increases the future frequency of the behavior.

30. The term aversive stimulus is often used to refer to stimulus conditions whose termination functions as reinforcement.

31. Extinction (withholding all reinforcement for a previously reinforced behavior) produces a decrease in response frequency to the behavior’s prereinforcement level.

32. Positive punishment occurs when a behavior is followed by the presentation of a stimulus that decreases the future frequency of the behavior.

33. Negative punishment occurs when a behavior is followed immediately by the withdrawal of a stimulus that decreases the future frequency of the behavior.

34. A principle of behavior describes a functional relation between behavior and one or more of its controlling variables that has thorough generality across organisms, species, settings, and behaviors.

35. A behavior change tactic is a technologically consistent method for changing behavior that has been derived from one or more basic principles of behavior.

36. Unconditioned reinforcers and punishers function irrespective of any prior learning history.

37. Stimulus changes that function as conditioned reinforcers and punishers do so because of previous pairing with other reinforcers or punishers.

38. One important function of motivating operations is altering the current value of stimulus changes as reinforcement or punishment. For example, deprivation and satiation are motivating operations that make food more or less effective as reinforcement.

39. A discriminated operant occurs more frequently under some antecedent conditions than it does under others, an outcome called stimulus control.

40. Stimulus control refers to differential rates of operant responding observed in the presence or absence of antecedent stimuli. Antecedent stimuli acquire the ability to control operant behavior by having been paired with certain consequences in the past.

41. The three-term contingency—antecedent, behavior, and consequence—is the basic unit of analysis in the analysis of operant behavior.

42. If a reinforcer (or punisher) is contingent on a particular behavior, the behavior must be emitted for the consequence to occur.

43. All applied behavior analysis procedures involve manipulation of one or more components of the three-term contingency.

Recognizing the Complexity of Human Behavior

44. Humans are capable of acquiring a huge repertoire of behaviors. Response chains and verbal behavior also make human behavior extremely complex.

45. The variables that govern human behavior are often highly complex. Many behaviors have multiple causes.

46. Individual differences in histories of reinforcement and organic impairments also make the analysis and control of human behavior difficult.

47. Applied behavior analysts are sometimes prevented from conducting an effective analysis of behavior because of practical, logistical, financial, sociopolitical, legal, and/or ethical reasons.


Selecting and Defining Target Behaviors

Key Terms

ABC recording
anecdotal observation
behavior checklist
behavioral assessment
behavioral cusp
ecological assessment
function-based definition
habilitation
normalization
pivotal behavior
reactivity
relevance of behavior rule
social validity
target behavior
topography-based definition

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List©, Third Edition

Content Area 1: Ethical Considerations

1-5 Assist the client with identifying lifestyle or systems change goals and targets for behavior change that are consistent with:

(a) the applied dimension of applied behavior analysis.

(b) applicable laws.

(c) the ethical and professional standards of the profession of applied behavior analysis.

Content Area 8: Selecting Intervention Outcomes and Strategies

8-2 Make recommendations to the client regarding target outcomes based on such factors as client preferences, task analysis, current repertoires, supporting environments, constraints, social validity, assessment results, and best available scientific evidence.

8-3 State target intervention outcomes in observable and measurable terms.

8-5 Make recommendations to the client regarding behaviors that must be established, strengthened, and/or weakened to attain the stated intervention outcomes.

Content Area 6: Measurement of Behavior

6-2 Define behavior in observable and measurable terms.

© 2006 The Behavior Analyst Certification Board, Inc.® (BACB®), all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

From Chapter 3 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


Applied behavior analysis is concerned with producing predictable and replicable improvements in behavior. However, not just any behavior will do: Applied behavior analysts improve socially important behaviors that have immediate and long-lasting meaning for the person and for those who interact with that person. For instance, applied behavior analysts develop language, social, motor, and academic skills that make contact with reinforcers and avoid punishers. An important preliminary step involves choosing the right behaviors to target for measurement and change.

This chapter describes the role of assessment in applied behavior analysis, including preassessment considerations, assessment methods used by behavior analysts, issues in determining the social significance of potential target behaviors, considerations for prioritizing target behaviors, and the criteria and dimensions by which selected behaviors should be defined to enable accurate and reliable measurement.

Role of Assessment in Applied Behavior Analysis

Assessment is considered a linchpin in a four-phase systematic intervention model that includes assessment, planning, implementation, and evaluation (Taylor, 2006).

Definition and Purpose of Behavioral Assessment

Traditional psychological and educational assessments typically involve a series of norm- and/or criterion-referenced standardized tests to determine a person’s strengths and weaknesses within cognitive, academic, social, and/or psychomotor domains. Behavioral assessment involves a variety of methods, including direct observations, interviews, checklists, and tests, to identify and define targets for behavior change. In addition to identifying behavior(s) to change, a comprehensive behavioral assessment will discover resources, assets, significant others, competing contingencies, maintenance and generalization factors, and potential reinforcers and/or punishers that may inform or be included in intervention plans to change the target behavior (Snell & Brown, 2006).1

Linehan (1977) offered a succinct and accurate description of the purpose of behavioral assessment: “To figure out what the client’s problem is and how to change it for the better” (p. 31). Implicit in Linehan’s statement is the idea that behavioral assessment is more than an exercise in describing and classifying behavioral abilities and deficiencies. Behavioral assessment goes beyond trying to obtain a psychometric score, grade equivalent data, or rating measure, as worthy as such findings might be for other purposes. Behavioral assessment seeks to discover the function that behavior serves in the person’s environment (e.g., positive reinforcement by social attention, negative reinforcement by escape from a task). Results of a comprehensive behavioral assessment give the behavior analyst a picture of variables that increase, decrease, maintain, or generalize the behavior of interest. A well-constructed and thorough behavioral assessment provides a roadmap from which the variables controlling the behavior can be identified and understood. Consequently, subsequent interventions can be aimed more directly and have a much better chance of success. As Bourret, Vollmer, and Rapp (2004) pointed out: “The critical test of . . . assessment is the degree to which it differentially indicates an effective teaching strategy” (p. 140).

1The behavioral assessment of problem behaviors sometimes includes a three-step process, called functional behavior assessment, for identifying and systematically manipulating antecedents and/or consequences that may be functioning as controlling variables for problems. The chapter entitled “Functional Behavior Assessment” describes this process in detail.

2See O’Neill and colleagues (1997) for their discussion of competing behavior analysis as a bridge between functional assessment and subsequent intervention programs.

Phases of Behavioral Assessment

Hawkins (1979) conceptualized behavioral assessment as funnel shaped, with an initial broad scope leading to an eventual narrow and constant focus. He described five phases or functions of behavioral assessment: (a) screening and general disposition, (b) defining and generally quantifying problems or desired achievement criteria, (c) pinpointing the target behavior(s) to be treated, (d) monitoring progress, and (e) following up. Although the five phases form a general chronological sequence, there is often overlap. This chapter is concerned with the preintervention functions of assessment, the selection and definition of a target behavior—the specific behavior selected for change.

To serve competently, applied behavior analysts must know what constitutes socially important behavior, have the technical skills to use appropriate assessment methods and instruments, and be able to match assessment data with an intervention strategy.2 For instance, the remedial reading specialist must understand the critical behaviors of a competent reader, be able to determine which of those skills a beginning or struggling reader lacks, and deliver appropriate and effective instruction as intervention. Likewise, the behaviorally trained marriage and family therapist must be knowledgeable about the range of behaviors that constitutes a functional family, be able to assess family dynamics accurately, and provide socially acceptable interventions that reduce dysfunctional interactions. In short, any analyst must be knowledgeable about the context of the target behavior.

A Preassessment Consideration

Before conducting an informal or formal behavioral assessment for the purpose of pinpointing a target behavior, the analyst must address a fundamental question: Who has the authority, permission, resources, and skills to complete an assessment and intervene with the behavior? If a practitioner does not have authority or permission, then his role in assessment and intervention is restricted. For example, suppose that a behavior analyst is standing in a checkout line near a parent attempting to manage an extremely disruptive child. Does the behavior analyst have the authority or permission to assess the problem or suggest an intervention to the parent? No. However, if the same episode occurred after the parent had requested assistance with such problems, the behavior analyst could offer assessment and advice. In effect, applied behavior analysts must not only recognize the role of assessment in the assessment–intervention continuum, but also recognize those situations in which using their knowledge and skills to assess and change behavior is appropriate.3

Assessment Methods Used by Behavior Analysts

Four major methods for obtaining assessment information are (a) interviews, (b) checklists, (c) tests, and (d) direct observation. Interviews and checklists are indirect assessment approaches because the data obtained from these measures are derived from recollections, reconstructions, or subjective ratings of events. Tests and direct observation are considered direct assessment approaches because they provide information about a person’s behavior as it occurs (Miltenberger, 2004). Although indirect assessment methods often provide useful information, direct assessment methods are preferred because they provide objective data on the person’s actual performance, not an interpretation, ranking, or qualitative index of that performance (Hawkins, Mathews, & Hamdan, 1999; Heward, 2003). In addition to being aware of these four major assessment methods, analysts can provide further assistance to those they serve by increasing their skills relative to the ecological implications of assessment.

3The chapter entitled “Ethical Considerations for Applied Behavior Analysts” examines this important issue in detail.

Interviews

Assessment interviews can be conducted with the target person and/or with people who come into daily or regular contact with the individual (e.g., teachers, parents, care providers).

Interviewing the Person

A behavioral interview is often a first and important step in identifying a list of potential target behaviors, which can be verified or rejected by subsequent direct observation. The interview can be considered a direct assessment method when the person’s verbal behavior is of interest as a potential target behavior (Hawkins, 1975).

A behavioral interview differs from a traditional interview by the type of questions asked and the level of information sought. Behavior analysts rely primarily on what and when questions that focus on the environmental conditions that exist before, during, and after a behavioral episode, instead of why questions, which tend to evoke mentalistic explanations that are of little value in understanding the problem.

Asking the clients why they do something presumes they know the answer and is often frustrating to clients, because they probably do not know and it seems that they should (Kadushin, 1972).

“Why” questions encourage the offering of “motivational” reasons that are usually uninformative, such as “I’m just lazy.” Instead, the client could be asked “What happens when . . . ?” One looks closely at what actually happens in the natural environment. Attention is directed toward behavior by questions that focus on it, such as, “Can you give me an example of what [you do]?” When one example is gained, then another can be requested until it seems that the set of behaviors to which the client refers when he employs a given word have been identified. (Gambrill, 1977, p. 153)

Figure 1 provides examples of the kinds of what and when questions that can be applied during a behavioral assessment interview. This sequence of questions was developed by a behavioral consultant in response to a teacher who wanted to reduce the frequency of her negative reactions to acting out and disruptive students. Similar questions could be generated to address situations in homes or community settings (Sugai & Tindal, 1993).

The main purpose of the behavioral assessment interview questions in Figure 1 is to identify variables that occur before, during, and/or after the occurrence of negative teacher attending behavior. Identifying environmental events that correlate with the behavior provides valuable information for formulating hypotheses about the controlling function of these variables and for planning interventions. Hypothesis generation leads to experimental manipulation and the discovery of functional relations.

Figure 1 Sample behavioral interview questions.

Problem Identification Interview Form

Reason for referral: The teacher requested help reducing her negative attention to acting out and disruptive students who were yelling and noncompliant.

1. In your own words, can you define the problem behaviors that prompted your request?

2. Are there any other teacher-based behaviors that concern you at this time?

3. When you engage in negative teacher attention (i.e., when you attend to yelling or noncompliant behavior), what usually happens immediately before the negative teacher attending behavior occurs?

4. What usually happens after the negative teacher attention behavior occurs?

5. What are the students’ reactions when you yell or attend to their noncompliant behavior?

6. What behaviors would the students need to perform so that you would be less likely to attend to them in a negative way?

7. Have you tried other interventions? What has been their effect?

As an outgrowth of being interviewed, clients may be asked to complete questionnaires or so-called needs assessment surveys. Questionnaires and needs assessment surveys have been developed in many human services areas to refine or extend the interview process (Altschuld & Witkin, 2000). Sometimes, as a result of an initial interview, the client is asked to self-monitor her behavior in particular situations. Self-monitoring can entail written or tape-recorded accounts of specific events.4 Client-collected data can be useful in selecting and defining target behaviors for further assessment or for intervention. For example, a client seeking behavioral treatment to quit smoking might self-record the number of cigarettes he smokes each day and the conditions under which he smokes (e.g., morning coffee break, after dinner, stuck in a traffic jam). These client-collected data may shed light on antecedent conditions correlated with the target behavior.
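Client-collected data of this sort lend themselves to a very simple tabulation. The sketch below is a hypothetical illustration only (the log entries, field names, and helper functions are invented for this example, not taken from the text): it tallies cigarettes recorded per day and ranks the situations by how often smoking was recorded in them.

```python
from collections import Counter

# Hypothetical self-monitoring log: one entry per cigarette smoked,
# tagged with the day and the situation in which it occurred.
log = [
    {"day": 1, "situation": "morning coffee break"},
    {"day": 1, "situation": "after dinner"},
    {"day": 1, "situation": "stuck in a traffic jam"},
    {"day": 2, "situation": "morning coffee break"},
    {"day": 2, "situation": "morning coffee break"},
]

def cigarettes_per_day(entries):
    """Tally the number of cigarettes recorded on each day."""
    return Counter(e["day"] for e in entries)

def situations_by_frequency(entries):
    """Rank recorded situations from most to least frequent."""
    return Counter(e["situation"] for e in entries).most_common()
```

Ranking the situations points to antecedent conditions (here, the invented “morning coffee break”) that correlate with the target behavior and may merit closer assessment.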

Interviewing Significant Others

Sometimes the behavior analyst either cannot interview the client personally or needs information from others who are important in the client’s life (e.g., parents, teachers, coworkers). In such cases, the analyst will interview one or more of these significant others. When asked to describe a behavioral problem or deficit, significant others often begin with general terms that do not identify specific behaviors to change and often imply causal factors intrinsic to the client (e.g., she is afraid, aggressive, unmotivated or lazy, withdrawn). By asking variations of what, when, and how questions, the behavior analyst can help significant others describe the problem in terms of specific behaviors and environmental conditions and events associated with those behaviors. For example, the following questions could be used in interviewing parents who have asked the behavior analyst for help because their child is “noncompliant” and “immature.”

4Procedures for self-monitoring, a major component of many self-management interventions, are described in the chapter entitled “Self-Management.”

• What is Derek doing when you would be most likely to call him immature or noncompliant?

• During what time of day does Derek seem most immature (or noncompliant)? What does he do then?

• Are there certain situations or places where Derek is noncompliant or acts immature? If so, where, and what does he do?

• How many different ways does Derek act immature (or noncompliant)?

• What’s the most frequent noncompliant thing that Derek does?

• How do you and other family members respond when Derek does these things?

• If Derek were to be more mature and independent as you would like, what would he do differently than he does now?

Figure 2 shows a form that parents or significant others can use to begin to identify target behaviors.

In addition to seeking help from significant others in identifying target behaviors and possible controlling variables that can help inform an intervention plan, the behavior analyst can sometimes use the interview to determine the extent to which significant others are willing and able to help implement an intervention to change the behavior. Without the assistance of parents, siblings, teacher aides, and staff, many behavior change programs cannot be successful.


Figure 2 Form that parents or significant others in a person’s life can use to generate starter lists of possible target behaviors.

The 5 + 5 Behavior List

Child’s name: ____________________________________________________________

Person completing this list: __________________________________________________

Listmaker’s relationship to child: ____________________________________________

5 good things _____ does now 5 things I’d like to see _____ learn to do more (or less) often

1. ______________________________ 1. ________________________________

2. ______________________________ 2. ________________________________

3. ______________________________ 3. ________________________________

4. ______________________________ 4. ________________________________

5. ______________________________ 5. ________________________________

Directions: Begin by listing in the left-hand column 5 desirable behaviors your child (or student) does regularly now; things that you want him or her to continue doing. Next, list in the right-hand column 5 behaviors you would like to see your child do more often (things that your child does sometimes but should do with more regularity) and/or undesirable behaviors that you want him or her to do less often (or not at all). You may list more than 5 behaviors in either column, but try to identify at least 5 in each.

Checklists

Behavior checklists and rating scales can be used alone or in combination with interviews to identify potential target behaviors. A behavior checklist provides descriptions of specific behaviors (usually in hierarchical order) and the conditions under which each behavior should occur. Situation- or program-specific checklists can be created to assess one particular behavior (e.g., tooth brushing) or a specific skill area (e.g., a social skill), but most practitioners use published checklists to rate a wide range of areas (e.g., The Functional Assessment Checklist for Teachers and Staff [March et al., 2000]).

Usually a Likert scale is used that includes information about antecedent and consequence events that may affect the frequency, intensity, or duration of behaviors. For example, the Child Behavior Checklist (CBCL) comes in teacher report, parent report, and child report forms and can be used with children ages 5 through 18 (Achenbach & Edelbrock, 1991). The teacher’s form includes 112 behaviors (e.g., “cries a lot,” “not liked by other pupils”) that are rated on a 3-point scale: “not true,” “somewhat or sometimes true,” or “very true or often true.” The CBCL also includes items representing social competencies and adaptive functioning, such as getting along with others and acting happy.

The Adaptive Behavior Scale—School (ABS-S) (Lambert, Nihira, & Leland, 1993) is another frequently used checklist for assessing children’s adaptive behavior. Part 1 of the ABS-S contains 10 domains related to independent functioning and daily living skills (e.g., eating, toilet use, money handling, numbers, time); Part 2 assesses the child’s level of maladaptive (inappropriate) behavior in seven areas (e.g., trustworthiness, self-abusive behavior, social engagement). Another version of the Adaptive Behavior Scale, the ABS-RC, assesses adaptive behavior in residential and community settings (Nihira, Leland, & Lambert, 1993).

Information obtained from good behavior checklists (i.e., those with objectively stated items of relevance to the client’s life) can help identify behaviors worthy of more direct and intensive assessment.

Standardized Tests

Literally thousands of standardized tests and assessment devices have been developed to assess behavior (cf. Spies & Plake, 2005). Each time a standardized test is administered, the same questions and tasks are presented in a specified way and the same scoring criteria and procedures are used. Some standardized tests yield norm-referenced scores. When a norm-referenced test is being developed, it is administered to a large sample of people selected at random from the population for whom the test is intended. Test scores of people in the norming sample are then used to represent how scores on the test are generally distributed throughout the population.
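The logic of a norm-referenced score can be illustrated with a toy calculation. The sketch below assumes a tiny, made-up norming sample (actual norming uses large random samples and standardized administration) and reports the percentage of that sample scoring below a given raw score:

```python
from bisect import bisect_left

# Made-up norming sample of raw test scores, sorted for lookup.
norming_sample = sorted([12, 15, 18, 20, 21, 23, 25, 27, 30, 34])

def percentile_rank(score, norms):
    """Percentage of the (sorted) norming sample scoring below the given raw score."""
    return 100 * bisect_left(norms, score) / len(norms)
```

A raw score of 25 here falls at the 60th percentile of this invented sample. As the text goes on to note, such a relative standing says nothing about which specific skills the person has or has not mastered.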

The majority of standardized tests on the market, however, are not conducive to behavioral assessment because the results cannot be translated directly into target behaviors for instruction or treatment. For example, results from standardized tests commonly used in the schools, such as the Iowa Tests of Basic Skills (Hoover, Hieronymus, Dunbar, & Frisbie, 1996), the Peabody Individual Achievement Test–R/NU (Markwardt, 2005), and the Wide Range Achievement Test—3 (WRAT) (Wilkinson, 1994), might indicate that a fourth-grader is performing at the third-grade level in mathematics and at the first-grade level in reading. Such information might be useful in determining how the student performs in these subjects compared to students in general, but it neither indicates the specific math or reading skills the student has mastered nor provides sufficient direct context with which to launch an enrichment or remedial program. Further, behavior analysts may not be able to actually administer a given test because of licensing requirements. For instance, only a licensed psychologist can administer some types of intelligence tests and personality inventories.

Tests are most useful as behavioral assessment devices when they provide a direct measure of the person’s performance of the behaviors of interest. In recent years, an increasing number of behaviorally oriented teachers have recognized the value of criterion-referenced and curriculum-based assessments to indicate exactly which skills students need to learn and, equally important, which skills they have mastered (Browder, 2001; Howell, 1998). Curriculum-based assessments can be considered direct measures of student performance because the data that are obtained bear specifically on the daily tasks that the student performs (Overton, 2006).

Direct Observation

Direct and repeated observations of the client's behavior in the natural environment are the preferred method for determining which behaviors to target for change. A basic form of direct continuous observation, first described by Bijou, Peterson, and Ault (1968), is called anecdotal observation, or ABC recording. With anecdotal observation the observer records a descriptive, temporally sequenced account of all behavior(s) of interest and the antecedent conditions and consequences for those behaviors as those events occur in the client's natural environment (Cooper, 1981). This technique produces behavioral assessment data that can be used to identify potential target behaviors.

Rather than providing data on the frequency of a specific behavior, anecdotal observation yields an overall description of a client's behavior patterns. This detailed record of the client's behavior within its natural context provides accountability to the individual and to others involved in the behavior change plan and is extremely helpful in designing interventions (Hawkins et al., 1999).

Accurately describing behavioral episodes as they occur in real time is aided by using a form to record relevant antecedents, behaviors, and consequences in temporal sequence. For example, Lo (2003) used the form shown in Figure 3 to record anecdotal observations of a fourth-grade special education student whose teacher had complained that the boy's frequent talk-outs and out-of-seat behavior were impairing his learning and often disrupted the entire class. (ABC observations can also be recorded on a checklist of specific antecedents, behaviors, and consequent events individually created for the client based on information from interviews and/or initial observations.)

ABC recording requires the observer to commit full attention to the person being observed. A classroom teacher, for example, could not use this assessment procedure while engaging in other activities, such as managing a reading group, demonstrating a mathematics problem on the chalkboard, or grading papers. Anecdotal observation is usually conducted for continuous periods of 20 to 30 minutes, and when responsibilities can be shifted temporarily (e.g., during team teaching). Following are some additional guidelines and suggestions for conducting anecdotal direct observations:

• Write down everything the client does and says and everything that happens to the client.


Selecting and Defining Target Behaviors

Figure 3 Example of an anecdotal ABC recording form.

Student: Student 4    Date: 3/10/03    Setting: SED resource room (math period)
Observer: Experimenter    Starting time: 2:40 P.M.    Ending time: 3:00 P.M.

(A ✓ indicates the same antecedent condition as the entry above.)

Time | Antecedents (A) | Behavior (B) | Consequences (C)
2:40 | T tells students to work quietly on their math worksheets | Walks around the room and looks at other students | T says, "Everyone is working but you. I don't need to tell you what to do."
     | ✓ | Sits down, makes funny noises with mouth | A female peer says, "Would you please be quiet?"
     | ✓ | Says to the peer, "What? Me?" Stops making noises | Peer continues to work
2:41 | Math worksheet | Sits in his seat and works quietly | No one pays attention to him
     | Math worksheet | Pounds on desk with hands | SED aide asks him to stop
2:45 | Math worksheet | Makes vocal noises | Ignored by others
     | Math worksheet | Yells out T's name three times and walks to her with his worksheet | T helps him with the questions
2:47 | Everyone is working quietly | Gets up and leaves his seat | T asks him to sit down and work
     | ✓ | Sits down and works | Ignored by others
     | Everyone is working quietly | Gets up and talks to a peer | T asks him to sit down and work
     | ✓ | Comes back to his seat and works | Ignored by others
2:55 | Math worksheet, no one is attending to him | Grabs a male peer's hand and asks him to help on the worksheet | Peer refuses
     | ✓ | Asks another male peer to help him | Peer helps him
2:58 | ✓ | Tells T he's finished the work and it's his turn to work on computer | T asks him to turn in his work and tells him that it's not his turn on the computer
     | ✓ | Whines about why it's not his turn | T explains to him that other students are still working on the computer and he needs to find a book to read
     | ✓ | Stands behind a peer who is playing computer and watches him play the game | Ignored by T

Adapted from Functional Assessment and Individualized Intervention Plans: Increasing the Behavior Adjustment of Urban Learners in General and Special Education Settings (p. 317) by Y. Lo. Unpublished doctoral dissertation. Columbus, OH: The Ohio State University. Used by permission.

• Use homemade shorthand or abbreviations to make recording more efficient, but be sure the notes can be and are accurately expanded immediately after the observation session.

• Record only actions that are seen or heard, not interpretations of those actions.

• Record the temporal sequence of each response of interest by writing down what happened just before and just after it.

• Record the estimated duration of each instance of the client's behavior. Mark the beginning and ending time of each behavioral episode.

• Be aware that continuous anecdotal observation is often an obtrusive recording method. Most people behave differently when they see someone with a pencil and clipboard staring at them. Knowing this, observers should be as unobtrusive as possible (e.g., stay a reasonable distance away from the subject).


• Carry out the observations over a period of several days so that the novelty of having someone observe the client will lessen and the repeated observations can produce a valid picture of day-to-day behavior.
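For readers who keep ABC data electronically, the temporally sequenced record structure implied by the guidelines above can be sketched in a few lines of Python. This is a hypothetical illustration only; the field and function names are ours, not part of the published procedure.

```python
from dataclasses import dataclass

@dataclass
class ABCRecord:
    """One temporally sequenced entry from an anecdotal observation."""
    time: str          # clock time the episode began
    antecedent: str    # what happened just before the behavior
    behavior: str      # the observed action (seen or heard, not interpreted)
    consequence: str   # what happened just after the behavior

# A few illustrative entries from a single observation session
session = [
    ABCRecord("2:40", "T tells students to work quietly",
              "Walks around the room", "T reprimands him"),
    ABCRecord("2:41", "Math worksheet", "Works quietly", "No one attends to him"),
    ABCRecord("2:45", "Math worksheet", "Makes vocal noises", "Ignored by others"),
]

def consequences_for(records, behavior_keyword):
    """List the consequences that followed behaviors containing a keyword."""
    return [r.consequence for r in records
            if behavior_keyword in r.behavior.lower()]

print(consequences_for(session, "noises"))  # ['Ignored by others']
```

Grouping consequences by behavior in this way mirrors how an analyst scans a completed ABC form for recurring antecedent–behavior–consequence patterns before selecting target behaviors.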

Ecological Assessment

Behavior analysts understand that human behavior is a function of multiple events and that many events have multiple effects on behavior (cf. Michael, 1995). An ecological approach to assessment recognizes the complex interrelationships between environment and behavior. In an ecological assessment a great deal of information is gathered about the person and the various environments in which that person lives and works. Among the many factors that can affect a person's behavior are physiological conditions, physical aspects of the environment (e.g., lighting, seating arrangements, noise level), interactions with others, home environment, and past reinforcement history. Each of these factors represents a potential area for assessment.

Although a thorough ecological assessment will provide a tremendous amount of descriptive data, the basic purpose of assessment—to identify the most pressing behavior problem and possible ways to alleviate it—should not be forgotten. It is easy to go overboard with the ecological approach, gathering far more information than necessary. Ecological assessment can be costly in terms of professional and client time, and it may raise ethical and perhaps legal questions regarding confidentiality (Koocher & Keith-Spiegel, 1998). Ultimately, good judgment must be used in determining how much assessment information is necessary. Writing about the role of ecological assessment for special education teachers, Heron and Heward (1988) suggested that

The key to using an ecological assessment is to know when to use it. Full-scale ecological assessments for their own sake are not recommended for teachers charged with imparting a great number of important skills to many children in a limited amount of time. In most cases, the time and effort spent conducting an exhaustive ecological assessment would be better used in direct instruction. While the results of an ecological assessment might prove interesting, they do not always change the course of a planned intervention. Under what conditions then will an ecological assessment yield data that will significantly affect the course of treatment? Herein lies the challenge. Educators must strive to become keen discriminators of: (1) situations in which a planned intervention has the potential for affecting student behaviors other than the behavior of concern; and (2) situations in which an intervention, estimated to be effective if the target behavior were viewed in isolation, may be ineffective because other ecological variables

5Reactive effects of assessment are not necessarily negative. Self-monitoring has become as much a treatment procedure as it is an assessment procedure; see Chapter entitled "Self-Management."

come into play. Regardless of the amount and range of information available concerning the student, a teacher must still make instructional decisions based on an empirical analysis of the target behavior. Ultimately, this careful analysis (i.e., direct and daily measurement) of the behavior of interest may be ineffective because other ecological variables come into play. (p. 231)

Reactive Effects of Direct Assessment

Reactivity refers to the effects of an assessment procedure on the behavior being assessed (Kazdin, 1979). Reactivity is most likely when observation is obtrusive—that is, the person being observed is aware of the observer's presence and purpose (Kazdin, 2001). Numerous studies have demonstrated that the presence of observers in applied settings can influence a subject's behavior (Mercatoris & Craighead, 1974; Surratt, Ulrich, & Hawkins, 1969; White, 1977). Perhaps the most obtrusive assessment procedures are those that require the subject to monitor and record her own behavior. Research on self-monitoring shows that the procedure commonly affects the behavior under assessment (Kirby, Fowler, & Baer, 1991).5

Research suggests that even when the presence of an observer alters the behavior of the people being observed, the reactive effects are usually temporary (e.g., Haynes & Horn, 1982; Kazdin, 1982). Nevertheless, behavior analysts should use assessment methods that are as unobtrusive as possible, repeat observations until apparent reactive effects subside, and take possible reactive effects into account when interpreting the results of observations.

Assessing the Social Significance of Potential Target Behaviors

In the past, when a teacher, therapist, or other human services professional determined that a client's behavior should be changed, few questions were asked. It was assumed that the change would be beneficial to the person. This presumption of benevolence is no longer ethically acceptable (not that it ever was). Because behavior analysts possess an effective technology to change behavior in predetermined directions, accountability must be served. Both the goals and the rationale supporting


behavior change programs must be open to critical examination by the consumers (clients and their families) and by others who may be affected (society) by the behavior analyst's work. In selecting target behaviors, practitioners should consider whose behavior is being assessed—and changed—and why.

Target behaviors should not be selected for the primary benefit of others (e.g., "Be still, be quiet, be docile," in Winett & Winkler, 1972), to simply maintain the status quo (Budd & Baer, 1976; Holland, 1978), or because they pique the interest of someone in a position to change the behaviors, as illustrated in the following incident:

A bright, conscientious graduate student was interested in doing his thesis in a program for seriously maladjusted children. He wanted to teach cursive writing to a . . . child [who] could not read (except his name), print, or even reliably identify all the letters of the alphabet. I asked "Who decided that was the problem to work on next?" (Hawkins, 1975, p. 195)

Judgments about which behaviors to change are difficult to make. Still, practitioners are not without direction when choosing target behaviors. Numerous authors have suggested guidelines and criteria for selecting target behaviors (e.g., Ayllon & Azrin, 1968; Bailey & Lessen, 1984; Bosch & Fuqua, 2001; Hawkins, 1984; Komaki, 1998; Rosales-Ruiz & Baer, 1997), all of which revolve around the central question, To what extent will the proposed behavior change improve the person's life experience?

A Definition of Habilitation

Hawkins (1984) suggested that the potential meaningfulness of any behavior change should be judged within the context of habilitation, which he defined as follows:

Habilitation (adjustment) is the degree to which the person's repertoire maximizes short and long term reinforcers for that individual and for others, and minimizes short and long term punishers. (p. 284)

Hawkins (1986) cited several advantages of the definition in that it (a) is conceptually familiar to behavior analysts, (b) defines treatment using measurable outcomes, (c) is applicable to a wide range of habilitative activities, (d) deals with individual and societal needs in a nonjudgmental way, (e) treats adjustment along a continuum of adaptive behavior and is not deficit driven, and (f) is culturally and situationally relative.

Judgments about how much a particular behavior change will contribute to a person's overall habilitation (adjustment, competence) are difficult to make. In many cases we simply do not know how useful or functional a given behavior change will prove to be (Baer, 1981, 1982), even when its short-term utility can be predicted. Applied behavior analysts, however, must place the highest importance on the selection of target behaviors that are truly useful and habilitative (Hawkins, 1991). In effect, if a potential target behavior meets the habilitation standard, then the individual is much more likely to acquire additional reinforcement in the future and avoid punishment.

From both ethical and pragmatic perspectives, any behavior targeted for change must benefit the person either directly or indirectly. Examining potential target behaviors according to the 10 questions and considerations described in the following sections should help clarify their relative social significance and habilitative value. Figure 4 summarizes these considerations in a worksheet format that can be used in evaluating the social significance of potential target behaviors.

Is This Behavior Likely to Produce Reinforcement in the Client's Natural Environment After Treatment Ends?

To determine whether a particular target behavior is functional for the client, the behavior analyst, significant others, and whenever possible the client should ask whether the proposed behavior change will be reinforced in the person's daily life. Ayllon and Azrin (1968) called this the relevance of behavior rule; it means that a target behavior should be selected only when it can be determined that the behavior is likely to produce reinforcement in the person's natural environment. The likelihood that a new behavior will result in reinforcement after the behavior change program is terminated is the primary determinant of whether the new behavior will be maintained, thereby having the possibility of long-term benefits for that person.

Judging whether occurrences of the target behavior will be reinforced in the absence of intervention can also help to clarify whether the proposed behavior change is primarily for the individual's benefit or for someone else. For instance, despite parental wishes or pressure, it would be of little value to try to teach math skills to a student with severe developmental disabilities with pervasive deficits in communication and social skills. Teaching communication skills that would enable the student to have more effective interactions in her current environment should take precedence over skills that she might be able to use in the future (e.g., making change in the grocery store). Sometimes target behaviors are selected appropriately not because of their direct benefit to the person, but because of an important indirect benefit. Indirect benefits can occur in several different ways as described by the three questions that follow.


Figure 4 Worksheet for evaluating the social significance of potential target behaviors.

Client's/Student's name: __________  Date: __________
Person completing worksheet: __________
Rater's relationship to client/student: __________
Behavior: __________

For each consideration, mark Yes, No, or Not sure, and note a rationale/comment.

1. Is this behavior likely to produce reinforcement in the client's natural environment after intervention ends?  Yes / No / Not sure
2. Is this behavior a necessary prerequisite for a more complex and functional skill?  Yes / No / Not sure
3. Will this behavior increase the client's access to environments in which other important behaviors can be acquired and used?  Yes / No / Not sure
4. Will changing this behavior predispose others to interact with the client in a more appropriate and supportive manner?  Yes / No / Not sure
5. Is this behavior a pivotal behavior or behavioral cusp?  Yes / No / Not sure
6. Is this an age-appropriate behavior?  Yes / No / Not sure
7. If this behavior is to be reduced or eliminated from the client's repertoire, has an adaptive and functional behavior been selected to replace it?  Yes / No / Not sure
8. Does this behavior represent the actual problem/goal, or is it only indirectly related?  Yes / No / Not sure
9. Is this "just talk," or is it the real behavior of interest?  Yes / No / Not sure
10. If the goal itself is not a specific behavior (e.g., losing 20 lbs.), will this behavior help achieve it?  Yes / No / Not sure

Summary notes/comments: __________


Is This Behavior a Necessary Prerequisite for a Useful Skill?

Some behaviors that, in and of themselves, are not important are targeted for instruction because they are necessary prerequisites to learning other functional behaviors. For example, advances in reading research have demonstrated that teaching phonemic awareness skills (e.g., sound isolation: What is the first sound in nose?; phoneme segmentation: What sounds do you hear in the word fat?; odd word out: What word starts with a different sound: cat, couch, fine, cake?) to nonreaders has positive effects on their acquisition of reading skills (National Reading Panel, 2000).6

Will This Behavior Increase the Client's Access to Environments in Which Other Important Behaviors Can Be Learned and Used?

Hawkins (1986) described the targeting of "access behaviors" as a means of producing indirect benefits to clients. For example, special education students are sometimes taught to complete their workbook pages neatly, interact politely with the general education classroom teacher, and stay in their seats during the teacher's presentation. These behaviors are taught with the expectation that they will increase acceptance into a general education classroom, thereby increasing access to general education and instructional programs.

Will Changing This Behavior Predispose Others to Interact with the Client in a More Appropriate and Supportive Manner?

Another type of indirect benefit occurs when a behavior change is of primary interest to a significant other in the person's life. The behavior change may enable the significant other to behave in a manner more beneficial to the person. For example, suppose a teacher wants the parents of his students to implement a home-based instruction program, believing that the students' language skills would improve considerably if their parents spent just 10 minutes per night playing a vocabulary game with them. In meeting with one student's parents, however, the

6A target behavior's indirect benefit as a necessary prerequisite for another important behavior should not be confused with indirect teaching. Indirect teaching involves selecting a target behavior different from the true purpose of the behavior because of a belief that they are related (e.g., having students with poor reading skills practice shape discrimination or balance beam walking). The importance of directness in target behavior selection is discussed later in this section.

teacher realizes that although the parents are also concerned about poor language skills, they have other and, in their opinion, more pressing needs—the parents want their child to clean her room and help with the dinner dishes. Even though the teacher believes that straightening up a bedroom and washing dishes are not as important to the child's ultimate welfare as language development, these tasks may indeed be important target behaviors if a sloppy room and a sink full of dirty dishes impede positive parent–child interactions (including playing the teacher's vocabulary-building games). In this case, the daily chores might be selected as the target behaviors for the direct, immediate benefit of the parents, with the expectation that the parents will be more likely to help their daughter with school-related activities if they are happier with her because she straightens her bedroom and helps with the dishes.

Is This Behavior a Behavioral Cusp or a Pivotal Behavior?

Behavior analysts often use a building block method to develop repertoires for clients. For example, in teaching a complex skill (e.g., two-digit multiplication), simpler and more easily attainable skills are taught first (e.g., addition, regrouping, single-digit multiplication); or, with shoe tying, crossing laces, making bows, and tying knots are taught systematically. As skill elements are mastered, they are linked (i.e., chained) into increasingly complex repertoires. At any point along this developmental skill continuum when the person performs the skill to criterion, reinforcement follows, and the practitioner makes the determination to advance to the next skill level. As systematic and methodical as this approach has proven to be, analysts are researching ways to improve the efficiency of developing new behavior. Choosing target behaviors that are behavioral cusps and pivotal behaviors may increase this efficiency.

Behavioral Cusps.

Rosales-Ruiz and Baer (1997) defined a behavioral cusp as:

a behavior that has consequences beyond the change itself, some of which may be considered important. . . . What makes a behavior change a cusp is that it exposes the individual's repertoire to new environments, especially new reinforcers and punishers, new contingencies, new responses, new stimulus controls, and new communities of maintaining or destructive contingencies. When some or all of those events happen, the individual's repertoire expands; it encounters a differentially selective maintenance of the new as well as some


old repertoires, and perhaps that leads to some further cusps. (p. 534)

Rosales-Ruiz and Baer (1997) cited as examples of possible behavioral cusps behaviors such as crawling, reading, and generalized imitation, because they "suddenly open the child's world to new contingencies that will develop many new, important behaviors" (p. 535, emphasis added). Cusps differ from component or prerequisite behaviors. For an infant, specific arm, head, leg, or positional movements would be component behaviors for crawling, but crawling is the cusp because it enables the infant to contact new environments and stimuli as sources of motivation and reinforcement (e.g., toys, parents), which in turn opens a new world of contingencies that can further shape and select other adaptive behaviors.

The importance of cusps is judged by (a) the extent of the behavior changes they systematically enable, (b) whether they systematically expose behavior to new cusps, and (c) the audience's view of whether these changes are important for the organism, which in turn is often controlled by societal norms and expectations of what behaviors should develop in children and when that development should happen. (Rosales-Ruiz & Baer, 1997, p. 537)

Bosch and Fuqua (2001) suggested that clarifying "a priori dimensions for determining if a behavior might be a cusp is an important step in realizing the potential of the cusp concept" (p. 125). They stated that a behavior might be a cusp if it meets one or more of five criteria: "(a) access to new reinforcers, contingencies, and environments; (b) social validity; (c) generativeness; (d) competition with inappropriate responses; and (e) number and the relative importance of people affected" (p. 123). The more criteria that a behavior satisfies, the stronger the case that the behavior is a cusp. By identifying and assessing target behaviors based on their cusp value, practitioners may indeed open "a new world" of far-reaching potential for those they serve.
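Because Bosch and Fuqua's criteria work as a simple additive checklist (more criteria met, stronger case), the tally can be sketched in Python. This is a hypothetical illustration of the checklist logic, not a procedure from the source; the example rating of "crawling" is ours.

```python
# Bosch and Fuqua's (2001) five criteria for identifying a behavioral cusp
CUSP_CRITERIA = [
    "access to new reinforcers, contingencies, and environments",
    "social validity",
    "generativeness",
    "competition with inappropriate responses",
    "number and relative importance of people affected",
]

def cusp_case_strength(criteria_met):
    """Count how many of the five criteria a candidate behavior satisfies.
    Per Bosch and Fuqua, the more criteria met, the stronger the case
    that the behavior is a cusp."""
    return sum(1 for criterion in CUSP_CRITERIA if criterion in criteria_met)

# Hypothetical rating of 'crawling' as a candidate cusp for an infant
met = {
    "access to new reinforcers, contingencies, and environments",
    "generativeness",
}
print(cusp_case_strength(met))  # 2
```

The checklist yields a count, not a verdict: the text is clear that judging cusp status ultimately rests on the practitioner's analysis of what new contingencies the behavior opens up.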

Pivotal Behavior

Pivotal behavior has emerged as an interesting and promising concept in behavioral research, especially as it relates to the treatment of people with autism and developmental disabilities. R. L. Koegel and L. K. Koegel and their colleagues have examined pivotal behavior assessment and treatment approaches across a wide range of areas (e.g., social skills, communication ability, disruptive behaviors) (Koegel & Frea, 1993; Koegel & Koegel, 1988; Koegel, Koegel, & Schreibman, 1991). Briefly stated, a pivotal behavior is a behavior that, once

learned, produces corresponding modifications or covariations in other adaptive untrained behaviors. For instance, Koegel, Carter, and Koegel (2003) indicated that teaching children with autism to "self-initiate" (e.g., approach others) may be a pivotal behavior. The "longitudinal outcome data from children with autism suggest that the presence of initiations may be a prognostic indicator of more favorable long-term outcomes and therefore may be 'pivotal' in that they appear to result in widespread positive changes in a number of areas" (p. 134). That is, improvement in self-initiations may be pivotal for the emergence of untrained response classes, such as asking questions and increased production and diversity of talking.

Assessing and targeting pivotal behaviors can be advantageous for both the practitioner and the client. From the practitioner's perspective, it might be possible to assess and then train pivotal behaviors within relatively few sessions that would later be emitted in untrained settings or across untrained responses (Koegel et al., 2003). From the client's perspective, learning a pivotal behavior would shorten intervention, provide the person with a new repertoire with which to interact with his environment, improve the efficiency of learning, and increase the chances of coming into contact with reinforcers. As Koegel and colleagues concluded: "The use of procedures that teach the child with disabilities to evoke language learning opportunities in the natural environment may be particularly useful for speech and language specialists or other special educators who desire ongoing learning outside of language teaching sessions" (p. 143).

Is This an Age-Appropriate Behavior?

A number of years ago it was common to see adults with developmental disabilities being taught behaviors that a nondisabled adult would seldom, if ever, do. It was thought—perhaps as a by-product of the concept of mental age—that a 35-year-old woman with the verbal skills of a 10-year-old should play with dolls. Not only is the selection of such target behaviors demeaning, but their occurrence lessens the probability that other people in the person's environment will set the occasion for and reinforce more desirable, adaptive behaviors, which could lead to a more normal and rewarding life.

The principle of normalization refers to the use of progressively more typical environments, expectations, and procedures "to establish and/or maintain personal behaviors which are as culturally normal as possible" (Wolfensberger, 1972, p. 28). Normalization is not a single technique, but a philosophical position that holds the goal of achieving the greatest possible physical and social integration of people with disabilities into the mainstream of society.


In addition to the philosophical and ethical reasons for selecting age- and setting-appropriate target behaviors, it should be reemphasized that adaptive, independent, and social behaviors that come into contact with reinforcement are more likely to be maintained than are behaviors that do not. For example, instruction in leisure-time skills such as sports, hobbies, and music-related activities would be more functional for a 17-year-old boy than teaching him to play with toy trucks and building blocks. An adolescent with those behaviors—even in an adapted way—has a better chance of interacting in a typical fashion with his peer group, which may help to ensure the maintenance of his newly learned skills and provide opportunities for learning other adaptive behaviors.

If the Proposed Target Behavior Is to Be Reduced or Eliminated, What Adaptive Behavior Will Replace It?

A practitioner should never plan to reduce or eliminate a behavior from a person's repertoire without (a) determining an adaptive behavior that will take its place and (b) designing the intervention plan to ensure that the replacement behavior is learned. Teachers and other human services professionals should be in the business of building positive, adaptive repertoires, not merely reacting to and eliminating behaviors they find troublesome (Snell & Brown, 2006). Even though a child's maladaptive behaviors may be exceedingly annoying to others, or even damaging physically, those undesirable responses have proven functional for the child. That is, the maladaptive behavior has worked for the child in the past by producing reinforcers and/or helping the child avoid or escape punishers. A program that only denies that avenue of reinforcement is a nonconstructive approach. It does not teach adaptive behaviors to replace the inappropriate behavior.

Some of the most effective and recommended methods for eliminating unwanted behavior focus primarily on the development of desirable replacement behaviors. Goldiamond (1974) recommended that a "constructional" approach—as opposed to an eliminative approach—be used for the analysis of and intervention into behavioral problems. Under the constructional approach the "solution to problems is the construction of repertoires (or their reinstatement or transfer to new situations) rather than the elimination of repertoires" (Goldiamond, 1974, p. 14).

If a strong case cannot be made for specific, positive replacement behaviors, then a compelling case has not been made for eliminating the undesirable target behavior. The classroom teacher, for example, who wants a behavior change program to maintain students staying in their seats during reading period must go beyond the simple notion that "they need to be in their seats to do the work." The teacher must select materials and design contingencies that facilitate that goal and motivate the students to accomplish their work.

Does This Behavior Represent the Actual Problem or Goal, or Is It Only Indirectly Related?

An all-too-common error in education is teaching a related behavior rather than the behavior of interest. Many behavior change programs have been designed to increase on-task behaviors when the primary objective should have been to increase production or work output. On-task behaviors are chosen because people who are productive also tend to be on task. However, as on task is usually defined, it is quite possible for a student to be on task (i.e., in her seat, quiet, and oriented toward or handling academic materials) yet produce little or no work.

Targeting needed prerequisite skills should not be confused with selecting target behaviors that do not directly represent or fulfill the primary reasons for the behavior analysis effort. Prerequisite skills are not taught as terminal behaviors for their own sake, but as necessary elements of the desired terminal behavior. Related, but indirect, behaviors are not necessary to perform the true objective of the program, nor are they really intended outcomes of the program by themselves. In attempting to detect indirectness, behavior analysts should ask two questions: Is this behavior a necessary prerequisite to the intended terminal behavior? Is this behavior what the instructional program is really all about? If either question can be answered affirmatively, the behavior is eligible for target behavior status.

Is This Just Talk, or Is It the Real Behavior of Interest?

Many nonbehavioral therapies rely heavily on what people say about what they do and why they do it. The client’s verbal behavior is considered important because it is believed to reflect the client’s inner state and the mental processes that govern the client’s behavior. Therefore, getting a person to talk differently about himself (e.g., in a more healthful, positive, and less self-effacing way) is viewed as a significant step in solving the person’s problem. Indeed, this change in attitude is considered by some to be the primary goal of therapy.

Behavior analysts, on the other hand, distinguish between what people say and what they do (Skinner, 1953). Knowing and doing are not the same. Getting someone to understand his maladaptive behavior by being able to talk logically about it does not necessarily mean that his behavior will change in more constructive directions. The gambler may know that compulsive betting is ruining his life and that his losses would cease if he simply stopped placing bets. He may even be able to verbalize these facts to a therapist and state quite convincingly that he will not gamble in the future. Still, he may continue to bet.

Because verbal behavior can be descriptive of what people do, it is sometimes confused with the performance itself. A teacher at a school for juvenile offenders introduced a new math program that included instructional games, group drills, timed tests, and self-graphing. The students responded with many negative comments: “This is stupid,” “Man, I’m not writin’ down what I do,” “I’m not even going to try on these tests.” If the teacher had attended only to the students’ talk about the program, it would probably have been discarded on the first day. But the teacher was aware that negative comments about school and work were expected in the peer group of adolescent delinquents and that many of her students’ negative remarks had enabled them in the past to avoid tasks they thought they would not enjoy. Consequently, the teacher ignored the negative comments and attended to and rewarded her students for accuracy and rate of math computation when they participated in the program. In one week’s time the negative talk had virtually ceased, and the students’ math production was at an all-time high.

There are, of course, situations in which the behavior of interest is what the client says. Helping a person reduce the number of self-effacing comments he makes and increase the frequency of positive self-descriptions is an example of a program in which talk should be the target behavior—not because the self-effacing comments are indicative of a poor self-concept, but because the client’s verbal behavior is the problem.

In every case, a determination must be made of exactly which behavior is the desired functional outcome of the program: Is it a skill or motor performance, or is it verbal behavior? In some instances, both doing and talking behaviors might be important. A trainee applying for a lawn mower repair position may be more likely to get a job if he can describe verbally how he would fix a cranky starter on a mower. Once hired, however, he can hold his job if he is skilled and efficient in repairing lawn mowers, even if he does not talk about what he does. By contrast, it is highly unlikely that a person will last very long on the job if he talks about how he would fix a lawn mower but is not able to do so. Target behaviors must be functional.

What If the Goal of the Behavior Change Program Is Not a Behavior?

Some of the important changes people want to make in their lives are not behaviors, but are instead the result or product of certain other behaviors. Weight loss is an example. On the surface it might appear that target behavior selection is obvious and straightforward—losing weight. The number of pounds can be measured accurately; but weight, or more precisely losing weight, is not a behavior. Losing weight is not a specific response that can be defined and performed; it is the product or result of other behaviors—most notably reduced food consumption and/or increased exercise. Eating and exercise are behaviors and can be specifically defined and measured in precise units.

Some otherwise well-designed weight loss programs have not been successful because behavior change contingencies were placed on the goal (reduced weight) and not on the behaviors necessary to produce the goal. Target behaviors in a weight loss program should be measures of food consumption and exercise level, with intervention strategies designed to address those behaviors (e.g., De Luca & Holborn, 1992; McGuire, Wing, Klem, & Hill, 1999). Weight should be measured and charted during a weight loss program, not because it is the target behavior of interest, but because weight loss shows the positive effects of increased exercise or decreased food consumption.

There are numerous other examples of important goals that are not behaviors, but are the end products of behavior. Earning good grades, for example, is a goal that must be analyzed to determine what behaviors produce better grades (e.g., solving math problems via guided and independent practice). Behavior analysts can better help clients achieve their goals by selecting target behaviors that are the most directly and functionally related to those goals.

Some goals expressed by and for clients are not the direct product of a specific target behavior, but broader, more general goals: to be more successful, to have more friends, to be creative, to learn good sportsmanship, to develop an improved self-concept. Clearly, none of these goals are defined by specific behaviors, and all are more complex in terms of their behavioral components than losing weight or getting a better grade in math. Goals such as being successful represent a class of related behaviors or a general pattern of responding. They are labels that are used to describe people who behave in certain ways. Selecting target behaviors that will help clients or students attain these kinds of goals is even more difficult than their complexity suggests because the goals themselves often mean different things to different people. Being a success can entail a wide variety of behaviors. One person may view success in terms of income and job title. For another, success might mean job satisfaction and good use of leisure time. An important role of the behavior analyst during assessment and target behavior identification is to help the client select and define personal behaviors, the sum of which will result in the client and others evaluating her repertoire in the intended fashion.


Prioritizing Target Behaviors

Once a “pool” of eligible target behaviors has been identified, decisions must be made about their relative priority. Sometimes the information obtained from behavioral assessment points to one particular aspect of the person’s repertoire in need of improvement more so than another. More often, though, assessment reveals a constellation of related, and sometimes not-so-related, behaviors in need of change. Direct observations, along with a behavioral interview and needs assessment, may produce a long list of important behaviors to change. When more than one eligible target behavior remains after careful evaluation of the considerations described in the previous section, the question becomes, Which behavior should be changed first? Judging each potential target behavior in light of the following nine questions may help determine which behavior deserves attention first, and the relative order in which the remaining behaviors will be addressed.

1. Does this behavior pose any danger to the client or to others? Behaviors that cause harm or pose a serious threat to the client’s or to others’ personal safety or health must receive first priority.

2. How many opportunities will the person have to use this new behavior? or How often does this problem behavior occur? A student who consistently writes reversed letters presents more of a problem than does a child who reverses letters only occasionally. If the choice is between first teaching a prevocational student to pack his lunch or to learn how to plan his two-week vacation each year, the former skill takes precedence because the employee-to-be may need to pack his lunch every workday.

3. How long-standing is the problem or skill deficit? A chronic behavior problem (e.g., bullying) or skill deficit (e.g., lack of social interaction skills) should take precedence over problems that appear sporadically or that have just recently surfaced.

4. Will changing this behavior produce higher rates of reinforcement for the person? If all other considerations are equal, a behavior that results in higher, sustained levels of reinforcement should take precedence over a behavior that produces little additional reinforcement for the client.

5. What will be the relative importance of this target behavior to future skill development and independent functioning? Each target behavior should be judged in terms of its relation (i.e., prerequisite or supportive) to other critical behaviors needed for optimal learning and development and maximum levels of independent functioning in the future.

6. Will changing this behavior reduce negative or unwanted attention from others? Some behaviors are not maladaptive because of anything inherent in the behavior itself, but because of the unnecessary problems the behavior causes the client. Some people with developmental and motoric disabilities may have difficulty at mealtimes with using utensils and napkins appropriately, thus reducing opportunities for positive interaction in public. Granted, public education and awareness are warranted as well, but it would be naive not to consider the negative effects of public reaction. Also, not teaching more appropriate mealtime skills may be a disservice to the person. Idiosyncratic public displays or mannerisms may be high-priority target behaviors if their modification is likely to provide access to more normalized settings or important learning environments.

7. Will this new behavior produce reinforcement for significant others? Even though a person’s behavior should seldom, if ever, be changed simply for the convenience of others or for maintenance of the status quo, neither should the effect of a person’s behavior change on the significant others in his life be overlooked. This question is usually answered best by the significant others themselves because people not directly involved in the person’s life would often have

no idea how rewarding it is to see your retarded 19-year-old acquire the skill of toilet flushing on command or pointing to food when she wants a second helping. I suspect that the average taxpayer would not consider it “meaningful” to him or her for Karrie to acquire such skills. And, although we cannot readily say how much Karrie’s being able to flush the toilet enhances her personal reinforcement/punishment ratio, I can testify that it enhances mine as a parent. (Hawkins, 1984, p. 285)

8. How likely is success in changing this target behavior? Some behaviors are more difficult to change than others. At least four sources of information can help assess the level of difficulty or, more precisely, predict the ease or degree of success in changing a particular behavior. First, what does the literature say about attempts to change this behavior? Many of the target behaviors that confront applied behavior analysts have been studied. Practitioners should stay abreast of published research reports in their areas of application. Not only is such knowledge likely to improve the selection of proven and efficient techniques for behavior change, but also it may help to predict the level of difficulty or chance of success.

Second, how experienced is the practitioner? The practitioner’s own competencies and experiences with the target behavior in question should be considered. A teacher who has worked successfully for many years with acting-out, aggressive children may have an array of effective behavior management strategies ready to employ and might predict success with even the most challenging child. However, that same teacher might decide that he is less able to improve a student’s written language skills.

Third, to what extent can important variables in the client’s environment be controlled effectively? Whether a certain behavior can be changed is not the question. In an applied setting, however, identifying and then consistently manipulating the controlling variables for a given target behavior will determine whether the behavior will be changed.

Fourth, are resources available to implement and maintain the intervention with the fidelity and intensity, and for the duration, needed to achieve the desired outcomes? No matter how expertly designed a treatment plan, implementing it without the personnel and other resources needed to carry out the intervention properly is likely to yield disappointing results.

9. How much will it cost to change this behavior? Cost should be considered before implementing any systematic behavior change program. However, a cost–benefit analysis of several potential target behaviors does not mean that if a teaching program is expensive, it should not be implemented. Major courts have ruled that the lack of public funds may not be used as an excuse for not providing an appropriate education to all children regardless of the severity of their disability (cf., Yell & Drasgow, 2000). The cost of a behavior change program cannot be determined by simply adding dollar amounts that might be expended on equipment, materials, transportation, staff salaries, and the like. Consideration should also be given to how much of the client’s time the behavior change program will demand. If, for example, teaching a fine motor skill to a child with severe disabilities would consume so much of the child’s day that there would be little time remaining for her to learn other important behaviors—such as communication, leisure, and self-help skills—or simply to have some free time, the fine motor skill objective may be too costly.

Developing and Using a Target Behavior Ranking Matrix

Assigning a numerical rating to each of a list of potential target behaviors can produce a priority ranking of those behaviors. One such ranking matrix is shown in Figure 5; it is an adaptation of a system described by Dardig and Heward (1981) for prioritizing and selecting learning goals for students with disabilities. Each behavior is given a number representing the behavior’s value on each of the prioritizing variables (e.g., 0 to 4, with 0 representing no value or contribution and 4 representing maximum value or benefit).
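The arithmetic behind such a ranking matrix (sum each behavior's 0–4 ratings, optionally weighted, and order by total) is simple enough to sketch. The following is an illustrative example only; the criterion labels, behaviors, ratings, and weights below are hypothetical and are not drawn from the Dardig and Heward system:

```python
# Illustrative sketch of the ranking-matrix arithmetic described above.
# Criterion labels, behaviors, ratings, and weights are hypothetical examples.

def rank_target_behaviors(ratings, weights=None):
    """Sum each behavior's 0-4 criterion ratings (optionally weighted)
    and return (behavior, total) pairs from highest to lowest total."""
    totals = {}
    for behavior, scores in ratings.items():
        totals[behavior] = sum(
            score * (weights.get(criterion, 1) if weights else 1)
            for criterion, score in scores.items()
        )
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

# Two hypothetical candidate behaviors rated on three of the nine criteria:
ratings = {
    "requesting help": {"danger": 0, "opportunities": 4, "reinforcement": 4},
    "lining up toys":  {"danger": 0, "opportunities": 2, "reinforcement": 1},
}
# As the text notes, a team may weight some variables differentially,
# e.g., doubling the weight of the reinforcement criterion:
weights = {"reinforcement": 2}

print(rank_target_behaviors(ratings, weights))
# -> [('requesting help', 12), ('lining up toys', 4)]
```

As with the worksheet itself, the highest total only nominates a behavior for discussion; the team, not the arithmetic, makes the final priority decision.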

Professionals involved in planning behavior change programs for certain student or client populations will usually want to weigh some of the variables differentially, require a maximum rating on certain selection variables, and/or add other variables that are of particular importance to their overall goals. For example, professionals planning behavior change programs for senior citizens would probably insist that target behaviors with immediate benefits receive high priority. Educators serving secondary students with disabilities would likely advocate for factors such as the relative importance of a target behavior for future skill development and independent functioning.

Sometimes the behavior analyst, the client, and/or significant others have conflicting goals. Parents may want their teenage daughter in the house by 10:30 P.M. on weekends, but the daughter may want to stay out later. The school may want a behavior analyst to develop a program to increase students’ adherence to dress and social codes. The behavior analyst may believe that these codes are outdated and are not in the purview of the school. Who decides what is best for whom?

One way to minimize and work through conflicts is to obtain client, parent, and staff/administration participation in the goal determination process. For example, the active participation of parents and, when possible, the student in the selection of short- and long-term goals and treatment procedures is required by law in planning special education services for students with disabilities (Individuals with Disabilities Education Improvement Act of 2004). Such participation by all of the significant parties can avoid and resolve goal conflicts, not to mention the invaluable information the participants can provide relative to other aspects of program planning (e.g., identification of likely reinforcers). Reviewing the results of assessment efforts and allowing each participant to provide input on the relative merits of each proposed goal or target behavior can often produce consensus on the best direction. Program planners should not commit a priori that whatever behavior is ranked first will necessarily be considered the highest priority target behavior. However,

83

Selecting and Defining Target Behaviors

Figure 5 Worksheet for prioritizing potential target behaviors.

Client’s/Student’s name:________________________________________ Date:__________

Person completing worksheet:__________________________________________________

Rater’s relationship to client/student: ____________________________________________

Directions: Use the key below to rank each potential target behavior by the extent to which it meets or fulfills each prioritization criterion. Add each team member’s ranking of each potential target behavior. The behavior(s) with the highest total scores would presumably be the highest priority for intervention. Other criteria relevant to a particular program or individual’s situation can be added, and the criteria can be differentially weighted.

Key: 0 = No or Never; 1 = Rarely; 2 = Maybe or Sometimes; 3 = Probably or Usually; 4 = Yes or Always

Potential Target Behaviors

(1) _______ (2) _______ (3) _______ (4) _______

Prioritization Criteria

Does this behavior pose danger to the person or to others?
  0 1 2 3 4     0 1 2 3 4     0 1 2 3 4     0 1 2 3 4

How many opportunities will the person have to use this new skill in the natural environment? or How often does the problem behavior occur?
  0 1 2 3 4     0 1 2 3 4     0 1 2 3 4     0 1 2 3 4

How long-standing is the problem or skill deficit?
  0 1 2 3 4     0 1 2 3 4     0 1 2 3 4     0 1 2 3 4

Will changing this behavior produce a higher rate of reinforcement for the person?
  0 1 2 3 4     0 1 2 3 4     0 1 2 3 4     0 1 2 3 4

What is the relative importance of this target behavior to future skill development and independent functioning?
  0 1 2 3 4     0 1 2 3 4     0 1 2 3 4     0 1 2 3 4

Will changing this behavior reduce negative or unwanted attention from others?
  0 1 2 3 4     0 1 2 3 4     0 1 2 3 4     0 1 2 3 4

Will changing this behavior produce reinforcement for significant others?
  0 1 2 3 4     0 1 2 3 4     0 1 2 3 4     0 1 2 3 4

How likely is success in changing this behavior?
  0 1 2 3 4     0 1 2 3 4     0 1 2 3 4     0 1 2 3 4

How much will it cost to change this behavior?
  0 1 2 3 4     0 1 2 3 4     0 1 2 3 4     0 1 2 3 4

Totals _____ _____ _____ _____


if the important people involved in a person’s life go through a ranking process such as the one shown in Figure 5, they are likely to identify areas of agreement and disagreement, which can lead to further discussions of target behavior selection and concentration on the critical concerns of those involved.

Defining Target Behaviors

Before a behavior can undergo analysis, it must be defined in a clear, objective, and concise manner. In constructing target behavior definitions, applied behavior analysts must consider the functional and topographical implications of their definitions.

Role and Importance of Target Behavior Definitions in Applied Behavior Analysis

Applied behavior analysis derives its validity from its systematic approach to seeking and organizing knowledge about human behavior. Validity of scientific knowledge in its most basic form implies replication. When predicted behavioral effects can be reproduced, principles of behavior are confirmed and methods of practice developed. If applied behavior analysts employ definitions of behavior not available to other scientists, replication is less likely. Without replication, the usefulness or meaningfulness of data cannot be determined beyond the specific participants themselves, thereby limiting the orderly development of the discipline as a useful technology (Baer, Wolf, & Risley, 1968). Without explicit, well-written definitions of target behaviors, researchers would be unable to accurately and reliably measure the same response classes within and across studies, or to aggregate, compare, and interpret their data.7

Explicit, well-written definitions of target behavior are also necessary for the practitioner, who may not be so concerned with replication by others or development of the field. Most behavior analysis programs are not conducted primarily for the advancement of the field; they are implemented by educators, clinicians, and other human services professionals to improve the lives of their clients. However, implicit in the application of behavior analysis is an accurate, ongoing evaluation of the target behavior, for which an explicit definition of behavior is a must.

A practitioner concerned only with evaluating his efforts in order to provide optimum service to his clients might ask, “As long as I know what I mean by [name of target behavior], why must I write down a specific definition?” First, a good behavioral definition is operational. It provides the opportunity to obtain complete information about the behavior’s occurrence and nonoccurrence, and it enables the practitioner to apply procedures in a consistently accurate and timely fashion. Second, a good definition increases the likelihood of an accurate and believable evaluation of the program’s effectiveness. Not only does an evaluation need to be accurate to guide ongoing program decisions, but also the data must be believable to those with a vested interest in the program’s effectiveness. Thus, even though the practitioner may not be interested in demonstrating an analysis to the field at large, she must always be concerned with demonstrating effectiveness (i.e., accountability) to clients, parents, and administrators.

7 Procedures for measuring behavior accurately and reliably are discussed in the chapter entitled “Measuring Behavior.”

Two Types of Target Behavior Definitions

Target behaviors can be defined functionally or topographically.

Function-Based Definitions

A function-based definition designates responses as members of the targeted response class solely by their common effect on the environment. For example, Irvin, Thompson, Turner, and Williams (1998) defined hand mouthing as any behavior that resulted in “contact of the fingers, hand, or wrist with the mouth, lips, or tongue” (p. 377). Figure 6 shows several examples of function-based definitions.

Applied behavior analysts should use function-based definitions of target behaviors whenever possible for the following reasons:

• A function-based definition encompasses all relevant forms of the response class. However, target behavior definitions based on a list of specific topographies might omit some relevant members of the response class and/or include irrelevant response topographies. For example, defining children’s offers to play with peers in terms of specific things the children say and do might omit responses to which peers respond with reciprocal play and/or include behaviors that peers reject.

• The outcome, or function, of behavior is most important. This holds true even for target behaviors for which form or aesthetics is central to their being valued as socially significant. For example, the flowing strokes of the calligrapher’s pen and the gymnast’s elegant movements during a floor


Figure 6 Function-based definitions of various target behaviors.

Creativity in Children’s Blockbuilding
The child behaviors of blockbuilding were defined according to their products, block forms. The researchers created a list of 20 arbitrary, but frequently seen forms, including:

Arch—any placement of a block atop two lower blocks not in contiguity.

Ramp—a block leaned against another, or a triangular block placed contiguous to another, to simulate a ramp.

Story—two or more blocks placed one atop another, the upper block(s) resting solely upon the lower.

Tower—any story of two or more blocks in which the lowest block is at least twice as tall as it is wide. (Goetz & Baer, 1973, pp. 210–211)

Exercise by Obese Boys
Riding a stationary bicycle—each wheel revolution constituted a response, which was automatically recorded by magnetic counters (De Luca & Holborn, 1992, p. 672).

Compliance at Stop Signs by Motorists
Coming to a complete stop—observers scored a vehicle as coming to a complete stop if the tires stopped rolling prior to the vehicle entering the intersection (Van Houten & Retting, 2001, p. 187).

Recycling by Office Employees
Recycling office paper—number of pounds and ounces of recyclable office paper found in recycling and trash containers. All types of paper accepted as recyclable were identified as well as examples of nonrecyclable paper (Brothers, Krantz, & McClannahan, 1994, p. 155).

Safety Skills to Prevent Gun Play by Children
Touching the firearm—the child making contact with the firearm with any part of his or her body or with any object (e.g., a toy), resulting in the displacement of the firearm.
Leaving the area—the child removing himself or herself from the room in which the firearm was located within 10 seconds of seeing the firearm (Himle, Miltenberger, Flessner, & Gatheridge, 2004, p. 3).

routine are important (i.e., have been selected) because of their effects or function on others (e.g., praise from the calligraphy teacher, high scores from gymnastics judges).

• Functional definitions are often simpler and more concise than topography-based definitions, which leads to easier and more accurate and reliable measurement and sets the occasion for the consistent application of intervention. For example, in their study on skill execution by college football players, Ward and Carnes (2002) recorded a correct tackle according to the clear and simple definition, “if the offensive ball carrier was stopped” (p. 3).

Function-based definitions can also be used in some situations in which the behavior analyst does not have direct and reliable access to the natural outcome of the target behavior, or cannot use the natural outcome of the target behavior for ethical or safety reasons. In such cases, a function-based definition by proxy can be considered.

For example, the natural outcome of elopement (i.e., running or walking away from a caregiver without consent) is a lost child. By defining elopement as “any movement away from the therapist more than 1.5 m without permission” (p. 240), Tarbox, Wallace, and Williams (2003) were able to measure and treat this socially significant target behavior in a safe and meaningful manner.

Topography-Based Definitions

A topography-based definition identifies instances of the target behavior by the shape or form of the behavior. Topography-based definitions should be used when the behavior analyst (a) does not have direct, reliable, or easy access to the functional outcome of the target behavior, and/or (b) cannot rely on the function of the behavior because each instance of the target behavior does not produce the relevant outcome in the natural environment or the outcome might be produced by other events. For example, Silvestri (2004) defined and measured two classes


of positive teacher statements according to the words that made up the statements, not according to whether the comments produced specific outcomes (see Figure 7).

Topography-based definitions can also be used for target behaviors for which the relevant outcome is sometimes produced in the natural environment by undesirable variations of the response class. For example, because a duffer’s very poor swing of a golf club sometimes produces a good outcome (i.e., the ball lands on the green), it is better to define a correct swing by the position and movement of the golf club and the golfer’s feet, hips, head, and hands.

A topography-based definition should encompass all response forms that would typically produce the relevant outcome in the natural environment. Although topography provides an important element for defining target behaviors, the applied behavior analyst must be especially careful not to select target behaviors solely on the basis of topography (see Box 1).

Writing Target Behavior Definitions

A good definition of a target behavior provides an accurate, complete, and concise description of the behavior to be changed (and therefore measured). It also states what is not included in the behavioral definition. Asking to be excused from the dinner table is an observable and measurable behavior that can be counted. By comparison, “exercising good manners” is not a description of any particular behavior; it merely implies a general response class of polite and socially acceptable behaviors. Hawkins and Dobes (1977) described three characteristics of a good definition:

1. The definition should be objective, referring only to observable characteristics of the behavior (and environment, if needed) or translating any inferential terms (such as “expressing hostile feelings,” “intended to help,” or “showing interest in”) into more objective ones.

Figure 7 Topography-based definitions for two types of teacher statements.

Generic Positive Statements
Generic positive statements were defined as audible statements by the teacher that referred to one or more student’s behavior or work products as desirable or commendable (e.g., “I’m proud of you!”, “Great job, everyone.”). Statements made to other adults in the room were recorded if they were loud enough to be heard by the students and made direct reference to student behavior or work products (e.g., “Aren’t you impressed at how quietly my students are working today?”). A series of positive comments that specified neither student names nor behaviors, with less than 2 seconds between comments, was recorded as one statement. For example, if the teacher said, “Good, good, good. I’m so impressed” when reviewing three or four students’ work, it was recorded as one statement.

Teacher utterances not recorded as generic positive statements included (a) statements that referred to specific behavior or student names, (b) neutral statements indicating only that an academic response was correct (e.g., “Okay”, “Correct”), (c) positive statements not related to student behavior (e.g., saying “Thanks for dropping off my attendance forms” to a colleague), and (d) incomprehensible or inaudible statements.

Behavior-Specific Positive Statements
Behavior-specific positive statements made explicit reference to an observable behavior (e.g., “Thank you for putting your pencil away”). Specific positive statements could refer to general classroom behavior (e.g., “You did a great job walking back to your seat quietly”) or academic performance (e.g., “That was a super smart answer!”). To be recorded as separate responses, specific positive statements were separated from one another by 2 seconds or by differentiation of the behavior praised. In other words, if a teacher named a desirable behavior and then listed multiple students who were demonstrating the behavior, this would be recorded as one statement (e.g., “Marissa, Tony, and Mark, you did a great job of returning your materials when you were finished with them”). However, a teacher’s positive comment noting several different behaviors would be recorded as multiple statements regardless of the interval between the end of one comment and the start of the next. For example, “Jade, you did a great job cleaning up so quickly; Charles, thanks for putting the workbooks away; and class, I appreciate that you lined up quietly” would be recorded as three positive statements.

Adapted from The Effects of Self-Scoring on Teachers’ Positive Statements during Classroom Instruction (pp.48–49) by S. M. Silvestri. Unpublished doctoral dissertation. Columbus, OH: The Ohio State University. Usedby permission.


Box 1: How Serious Are These Behavior Problems?

Suppose you are a behavior analyst in a position to design and help implement an intervention to change the following four behaviors:

1. A child repeatedly raises her arm, extending and retracting her fingers toward her palm in a gripping/releasing type of motion.

2. An adult with developmental disabilities pushes his hand hard against his eye, making a fist and rubbing his eye rapidly with his knuckles.

3. Several times each day a high school student rhythmically drums her fingers up and down, sometimes in bursts of 10 to 15 minutes in duration.

4. A person repeatedly grabs at and squeezes another person's arms and legs so hard that the other person winces and says "Ouch!"

How much of a problem does the behavior pose for the person or for others who share his or her current and future environments? Would you rate each behavior as a mild, moderate, or serious problem? How important do you think it would be to target each of these behaviors for reduction or elimination from the repertoires of the four individuals?

Appropriate answers to these questions cannot be found in topographical descriptions alone. The meaning and relative importance of any operant behavior can be determined only in the context of the environmental antecedents and consequences that define the behavior. Here is what each of the four people in the previous examples was actually doing:

1. An infant learning to wave "bye-bye."

2. A man with allergies rubbing his eye to relieve the itching.

3. A student typing unpredictable text to increase her keyboarding fluency and endurance.

4. A massage therapist giving a relaxing, deep-muscle massage to a grateful and happy customer.

Applied behavior analysts must remember that the meaning of any behavior is determined by its function, not its form. Behaviors should not be targeted for change on the basis of topography alone.

Note: Examples 1 and 2 are adapted from Meyer and Evans, 1989, p. 53.

2. The definition should be clear in that it should be readable and unambiguous so that experienced observers could read it and readily paraphrase it accurately.

3. The definition should be complete, delineating the "boundaries" of what is to be included as an instance of the response and what is to be excluded, thereby directing the observers in all situations that are likely to occur and leaving little to their judgment. (p. 169)

Stated succinctly, a good definition must be objective, assuring that specific instances of the defined target behavior can be observed and recorded reliably. An objective definition increases the likelihood of an accurate and believable evaluation of program effectiveness. Second, a clear definition is technological, meaning that it enables others to use and replicate it (Baer et al., 1968). A clear definition therefore becomes operational for present and future purposes. Finally, a complete definition discriminates between what is and what is not an instance of the target behavior. A complete definition allows others to record an occurrence of a target behavior, but not record instances of nonoccurrence, in a standard fashion. A complete definition is a precise and concise description of the behavior of interest. Note how the target behavior definitions in Figures 6 and 7 meet the standard for being objective, clear, and complete.

Morris (1985) suggested testing the definition of a target behavior by asking three questions:

1. Can you count the number of times that the behavior occurs in, for example, a 15-minute period, a 1-hour period, or one day? Or, can you count the number of minutes that it takes for the child to perform the behavior? That is, can you tell someone that the behavior occurred "x" number of times or "x" number of minutes today? (Your answer should be "yes.")

2. Will a stranger know exactly what to look for when you tell him/her the target behavior you are planning to modify? That is, can you actually see the child performing the behavior when it occurs? (Your answer should be "yes.")

Selecting and Defining Target Behaviors


3. Can you break down the target behavior into smaller behavioral components, each of which is more specific and observable than the original target behavior? (Your answer should be "no.")

In responding to the suggestion that perhaps a sourcebook of standard target behavior definitions be developed because it would increase the likelihood of exact replications among applied researchers and would save the considerable time spent in developing and testing situation-specific definitions, Baer (1985) offered the following perspectives. Applied behavior analysis programs are implemented because someone (e.g., teacher, parent, individual himself) has "complained" that a behavior needs to be changed. A behavioral definition has validity in applied behavior analysis only if it enables observers to capture every aspect of the behavior that the complainer is concerned with and none other. Thus, to be valid from an applied perspective, definitions of target behaviors should be situation specific. Attempts to standardize behavior definitions assume an unlikely similarity across all situations.

Setting Criteria for Behavior Change

Target behaviors are selected for study in applied behavior analysis because of their importance to the people involved. Applied behavior analysts attempt to increase, maintain, and generalize adaptive, desirable behaviors and decrease the occurrence of maladaptive, undesirable behaviors. Behavior analysis efforts that not only target important behaviors but also change those behaviors to an extent that a person's life is changed in a positive and meaningful way are said to have social validity.8 But how much does a target behavior need to change before it makes a meaningful difference in the person's life?

Van Houten (1979) made a case for specifying the desired outcome criteria before efforts to modify the target behavior begin.

This step [specifying outcome criteria] becomes as important as the previous step [selecting socially important target behaviors] if one considers that for most behaviors there exists a range of responding within which performance is most adaptive. When the limits of this range are unknown for a particular behavior, it is possible that one could terminate treatment when performance is above or below these limits. Hence, the behavior would not be occurring within its optimal range. . . .

In order to know when to initiate and terminate a treatment, practitioners require socially validated standards for which they can aim. (pp. 582, 583)

Van Houten (1979) suggested two basic approaches to determining socially valid goals: (a) assess the performance of people judged to be highly competent, and (b) experimentally manipulate different levels of performance to determine empirically which produces optimal results.

Regardless of the method used, specifying treatment goals before intervention begins provides a guideline for continuing or terminating a treatment. Further, setting objective, predetermined goals helps to eliminate disagreements or biases among those involved in evaluating a program's effectiveness.

Summary

Role of Assessment in Behavior Analysis

1. Behavioral assessment involves a full range of inquiry methods including direct observations, interviews, checklists, and tests to identify and define targets for behavior change.

2. Behavioral assessment can be conceptualized as funnel shaped, with an initial broad scope leading to an eventual narrow and constant focus.

3. Behavioral assessment consists of five phases or functions: (a) screening, (b) defining and quantifying problems or goals, (c) pinpointing the target behavior(s) to be treated, (d) monitoring progress, and (e) following up.

4. Before conducting a behavioral assessment, the behavior analyst must determine whether he has the authority and permission, resources, and skills to assess and change the behavior.

8A third component of social validity concerns the social acceptability of the treatment methods and procedures employed to change the behavior. The importance of social validity in evaluating applied behavior analysis will be discussed in the chapter entitled "Planning and Evaluating Applied Behavior Analysis Research."

Assessment Methods Used by Behavior Analysts

5. Four major methods for obtaining assessment information are (a) interviews, (b) checklists, (c) tests, and (d) direct observations.

6. The client interview is used to determine the client's description of problem behaviors or achievement goals. What, when, and where questions are emphasized, focusing on the actual behavior of the client and the responses of significant others to that behavior.


7. Questionnaires and needs assessment surveys are sometimes completed by the client to supplement the information gathered in the interview.

8. Clients are sometimes asked to self-monitor certain situations or behaviors. Self-collected data may be useful in selecting and defining target behaviors.

9. Significant others can also be interviewed to gather assessment information and, in some cases, to find out whether they will be willing and able to assist in an intervention.

10. Direct observation with a behavior checklist that contains specific descriptions of various skills can indicate possible target behaviors.

11. Anecdotal observation, also called ABC recording, yields a descriptive, temporally sequenced account of all behavior(s) of interest and the antecedent conditions and consequences for those behaviors as those events occur in the client's natural environment.

12. Ecological assessment entails gathering a large amount of information about the person and the environments in which that person lives and works (e.g., physiological conditions, physical aspects of the environment, interactions with others, past reinforcement history). A complete ecological assessment is neither necessary nor warranted for most applied behavior analysis programs.

13. Reactivity, the effects of an assessment procedure on the behavior being assessed, is most likely when the person being observed is aware of the observer's presence and purpose. Behavior analysts should use assessment methods that are as unobtrusive as possible, repeat observations until apparent reactive effects subside, and take possible reactive effects into account when interpreting the results of observations.

Assessing the Social Significance of Potential Target Behaviors

14. Target behaviors in applied behavior analysis must be socially significant behaviors that will increase a person's habilitation (adjustment, competence).

15. The relative social significance and habilitative value of a potential target behavior can be clarified by viewing it in light of the following considerations:

• Will the behavior be reinforced in the person's daily life? The relevance of behavior rule requires that a target behavior produce reinforcement for the person in the postintervention environment.

• Is the behavior a necessary prerequisite for a useful skill?

• Will the behavior increase the person's access to environments in which other important behaviors can be learned or used?

• Will the behavior predispose others to interact with the person in a more appropriate and supportive manner?

• Is the behavior a cusp or pivotal behavior? Behavioral cusps have sudden and dramatic consequences that extend well beyond the idiosyncratic change itself because they expose the person to new environments, reinforcers, contingencies, responses, and stimulus controls. Learning a pivotal behavior produces corresponding modifications or covariations in other untrained behaviors.

• Is the behavior age appropriate?

• Whenever a behavior is targeted for reduction or elimination, a desirable, adaptive behavior must be selected to replace it.

• Does the behavior represent the actual problem or achievement goal, or is it only indirectly related?

• A person's verbal behavior should not be confused with the actual behavior of interest. However, in some situations the client's verbal behavior should be selected as the target behavior because it is the behavior of interest.

• If a person's goal is not a specific behavior, a target behavior(s) must be selected that will produce the desired results or state.

Prioritizing Target Behaviors

16. Assessment often reveals more than one possible behavior or skill area for targeting. Prioritization can be accomplished by rating potential target behaviors against key questions related to their relative danger, frequency, longstanding existence, potential for reinforcement, relevance for future skill development and independent functioning, reduced negative attention from others, likelihood of success, and cost.

17. Participation by the person whose behavior is to be changed, parents and/or other important family members, staff, and administration in identifying and prioritizing target behaviors can help reduce goal conflicts.

Defining Target Behaviors

18. Explicit, well-written target behavior definitions are necessary for researchers to accurately and reliably measure the same response classes within and across studies or to aggregate, compare, and interpret their data.

19. Good target behavior definitions are necessary for practitioners to collect accurate and believable data to guide ongoing program decisions, apply procedures consistently, and provide accountability to clients, parents, and administrators.

20. Function-based definitions designate responses as members of the targeted response class solely by their common effect on the environment.

21. Topography-based definitions define instances of the targeted response class by the shape or form of the behavior.


22. A good definition must be objective, clear, and complete, and must discriminate between what is and what is not an instance of the target behavior.

23. A target behavior definition is valid if it enables observers to capture every aspect of the behavior that the "complainer" is concerned with and none other.

Setting Criteria for Behavior Change

24. A behavior change has social validity if it changes some aspect of the person's life in an important way.

25. Outcome criteria specifying the extent of behavior change desired or needed should be determined before efforts to modify the target behavior begin.

26. Two approaches to determining socially validated performance criteria are (a) assessing the performance of people judged to be highly competent and (b) experimentally manipulating different levels of performance to determine which produces optimal results.

Measuring Behavior

From Chapter 4 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.

Key Terms

artifact, celeration, celeration time period, celeration trend line, count, discrete trial, duration, event recording, free operant, frequency, interresponse time (IRT), magnitude, measurement, measurement by permanent product, momentary time sampling, partial-interval recording, percentage, planned activity check (PLACHECK), rate, repeatability, response latency, temporal extent, temporal locus, time sampling, topography, trials-to-criterion, whole-interval recording

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List©, Third Edition

Content Area 6: Measurement of Behavior

6-1 Identify the measurable dimensions of behavior (e.g., rate, duration, latency, or interresponse times).

6-3 State the advantages and disadvantages of using continuous measurement procedures and sampling techniques (e.g., partial- and whole-interval recording, momentary time sampling).

6-4 Select the appropriate measurement procedure given the dimensions of the behavior and the logistics of observing and recording.

6-6 Use frequency (i.e., count).

6-7 Use rate (i.e., count per unit of time).

6-8 Use duration.

6-9 Use latency.

6-10 Use interresponse time (IRT).

6-11 Use percentage of occurrence.

6-12 Use trials-to-criterion.

6-13 Use interval recording methods.

© 2006 The Behavior Analyst Certification Board, Inc.,® (BACB®) all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

Lord Kelvin, the British mathematician and physicist, supposedly said, "Until you can express what you are talking about in numbers and can measure it, your knowledge is meager and unsatisfactory." Measurement (applying quantitative labels to describe and differentiate natural events) provides the basis for all scientific discoveries and for the development and successful application of technologies derived from those discoveries. Direct and frequent measurement provides the foundation for applied behavior analysis. Applied behavior analysts use measurement to detect and compare the effects of various environmental arrangements on the acquisition, maintenance, and generalization of socially significant behaviors.

But what is it about behavior that applied behavior analysts can and should measure? How should those measures be obtained? And what should we do with these measures once we have obtained them? This chapter identifies the dimensions by which behavior can be measured and describes the methods behavior analysts commonly use to measure them. But first, we examine the definition and functions of measurement in applied behavior analysis.

Definition and Functions of Measurement in Applied Behavior Analysis

Measurement is "the process of assigning numbers and units to particular features of objects or events. . . . [It] involves attaching a number representing the observed extent of a dimensional quantity to an appropriate unit. The number and the unit together constitute the measure of the object or event" (Johnston & Pennypacker, 1993a, pp. 91, 95). A dimensional quantity is the particular feature of an object or event that is measured. For example, a single occurrence of a phenomenon—say, the response of "c-a-t" to the question, "How do you spell cat?"—would be assigned the number 1 and the unit label correct. Observing another instance of the same response class would change the label to 2 correct. Other labels could also be applied based on accepted nomenclature. For example, if 8 of 10 observed responses met a standard definition of "correct," accuracy of responding could be described with the label 80% correct. If the 8 correct responses were emitted in 1 minute, a rate or frequency label of 8 per minute would apply.
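The unit-label arithmetic in the spelling example reduces to two simple computations. The sketch below is illustrative only; the function names are not part of Johnston and Pennypacker's terminology:

```python
# Sketch of attaching numbers and unit labels to observed responses,
# following the spelling example above. Names are illustrative only.

def percent_correct(num_correct: int, num_observed: int) -> float:
    """Accuracy label: percentage of observed responses scored correct."""
    return 100 * num_correct / num_observed

def rate_per_minute(num_correct: int, minutes: float) -> float:
    """Rate (frequency) label: count per unit of time."""
    return num_correct / minutes

# 8 of 10 observed responses met the standard definition of "correct" ...
print(percent_correct(8, 10))   # 80.0 (% correct)
# ... and the 8 correct responses were emitted in 1 minute.
print(rate_per_minute(8, 1))    # 8.0 (per minute)
```

Either way, the number and its unit label together constitute the measure.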

Bloom, Fischer, and Orme (2003)—who described measurement as the act or process of applying quantitative or qualitative labels to events, phenomena, or observed properties using a standard set of consensus-based rules by which to apply labels to those occurrences—pointed out that the concept of measurement also includes the characteristics of what is being measured, the quality and appropriateness of the measurement tools, the technical skill of the measurer, and how the measures obtained are used. In the end, measurement gives researchers, practitioners, and consumers a common means for describing and comparing behavior with a set of labels that convey a common meaning.

Researchers Need Measurement

Measurement is how scientists operationalize empiricism. Objective measurement enables (indeed, it requires) scientists to describe the phenomena they observe in precise, consistent, and publicly verifiable ways. Without measurement, all three levels of scientific knowledge—description, prediction, and control—would be relegated to guesswork subject to the "individual prejudices, tastes, and private opinions of the scientist" (Zuriff, 1985, p. 9). We would live in a world in which the alchemist's suppositions about a life-prolonging elixir would prevail over the chemist's propositions derived from experimentation.

Applied behavior analysts measure behavior to obtain answers to questions about the existence and nature of functional relations between socially significant behavior and environmental variables. Measurement enables comparisons of a person's behavior within and between different environmental conditions, thereby affording the possibility of drawing empirically based conclusions about the effects of those conditions on behavior. For example, to learn whether allowing students with emotional and behavioral challenges to choose the academic tasks they will work on would influence their engagement with those tasks and disruptive behavior, Dunlap and colleagues (1994) measured students' task engagement and disruptive behaviors during choice and no-choice conditions. Measurement revealed the level of both target behaviors during each condition, whether and how much the behaviors changed when choice was introduced or withdrawn, and how variable or stable the behaviors were during each condition.
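A minimal sketch of the kind of within- and between-condition comparison such measurement affords; the session values below are invented for illustration and are not data from Dunlap and colleagues (1994):

```python
# Hypothetical session-by-session task-engagement measures (percentage of
# observed intervals with engagement), grouped by condition. Values are
# invented; they are not data from the Dunlap et al. (1994) study.
sessions = {
    "choice":    [85, 90, 88, 92],
    "no_choice": [55, 60, 50, 65],
}

for condition, values in sessions.items():
    level = sum(values) / len(values)     # mean level within the condition
    spread = max(values) - min(values)    # range, a rough index of variability
    print(f"{condition}: level = {level:.2f}, range = {spread}")
```

Comparing level and variability across the two conditions is what supports an empirically based conclusion about the effect of offering choice.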

The researcher's ability to achieve a scientific understanding of behavior depends on her ability to measure it. Measurement makes possible the detection and verification of virtually everything that has been discovered about the selective effects of the environment on behavior. The empirical databases of the basic and applied branches of behavior analysis consist of organized collections of behavioral measurements. Virtually every graph in the Journal of the Experimental Analysis of Behavior and the Journal of Applied Behavior Analysis displays an ongoing record or summary of behavioral measurement. In short, measurement provides the very basis for learning and talking about behavior in scientifically meaningful ways.1

Practitioners Need Measurement

Behavioral practitioners are dedicated to improving the lives of the clients they serve by changing socially significant behaviors. Practitioners measure behavior initially to determine the current level of a target behavior and whether that level meets a threshold for further intervention. If intervention is warranted, the practitioner measures the extent to which his efforts are successful. Practitioners measure behavior to find out whether and when it has changed; the extent and duration of behavior changes; the variability or stability of behavior before, during, and after treatment; and whether important behavior changes have occurred in other settings or situations and spread to other behaviors.

Practitioners compare measurements of the target behavior before and after treatment (sometimes including pre- and posttreatment measures obtained in nontreatment settings or situations) to evaluate the overall effects of behavior change programs (summative evaluation). Frequent measures of behavior during treatment (formative assessment) enable dynamic, data-based decision making concerning the continuation, modification, or termination of treatment.

The practitioner who does not obtain and attend to frequent measures of the behavior targeted for intervention is vulnerable to committing two kinds of preventable mistakes: (a) continuing an ineffective treatment when no real behavior change has occurred, or (b) discontinuing an effective treatment because subjective judgment detects no improvement (e.g., without measurement, a teacher would be unlikely to know that a student's oral reading has increased from 70 words per minute to 80 words per minute). Thus, direct and frequent measurement enables practitioners to detect their successes and, equally important, their failures so they can make changes to change failure to success (Bushell & Baer, 1994; Greenwood & Maheady, 1997).

Our technology of behavior change is also a technology of behavior measurement and of experimental design; it developed as that package, and as long as it stays in that package, it is a self-evaluating enterprise. Its successes are successes of known magnitude; its failures are almost immediately detected as failures; and whatever its outcomes, they are attributable to known inputs and procedures rather than to chance events or coincidences. (D. M. Baer, personal communication, October 21, 1982)

1Measurement is necessary but not sufficient to the attainment of scientific understanding. See the chapter entitled "Analyzing Behavior Change: Basic Assumptions and Strategies."

In addition to enabling ongoing program monitoring and data-based decision making, frequent measurement provides other important benefits to practitioners and the clients they serve:

• Measurement helps practitioners optimize their effectiveness. To be optimally effective, a practitioner must maximize behavior change efficiency in terms of time and resources. Only by maintaining close, continual contact with relevant outcome data can a practitioner hope to achieve optimal effectiveness and efficiency (Bushell & Baer, 1994). Commenting on the critical role of direct and frequent measurement in maximizing the effectiveness of classroom practice, Sidman (2000) noted that teachers "must remain attuned to the pupil's messages and be ready to try and to evaluate modifications [in instructional methods]. Teaching, then, is not just a matter of changing the behavior of pupils; it is an interactive social process" (p. 23, words in brackets added). Direct and frequent measurement is the process by which practitioners hear their clients' messages.

• Measurement enables practitioners to verify the legitimacy of treatments touted as "evidence based." Practitioners are increasingly expected, and in some fields mandated by law, to use evidence-based interventions. An evidence-based practice is a treatment or intervention method that has been demonstrated to be effective through substantial, high-quality scientific research. For example, the federal No Child Left Behind Act of 2001 requires that all public school districts ensure that all children are taught by "highly qualified" teachers using curriculum and instructional methods validated by rigorous scientific research. Although guidelines and quality indicators regarding the type (e.g., randomized clinical trials, single-subject studies) and quantity (e.g., number of studies published in peer-reviewed journals, minimum number of participants) of research needed to qualify a treatment method as an evidence-based practice have been proposed, the likelihood of complete consensus is slim (e.g., Horner, Carr, Halle, McGee, Odom, & Wolery, 2005; U.S. Department of Education, 2003). When implementing any treatment, regardless of the type or amount of research evidence to support it, practitioners can and should use direct and frequent measurement to verify its effectiveness with the students or clients they serve.

• Measurement helps practitioners identify and end the use of treatments based on pseudoscience, fad, fashion, or ideology. Numerous educational methods and treatments have been claimed by their advocates as breakthroughs. Many controversial treatments and proposed cures for people with developmental disabilities and autism (e.g., facilitated communication, holding therapy, megadoses of vitamins, strange diets, weighted vests, dolphin-assisted therapy) have been promoted in the absence of sound scientific evidence of effectiveness (Heflin & Simpson, 2002; Jacobson, Foxx, & Mulick, 2005). The use of some of these so-called breakthrough therapies has led to disappointment and loss of precious time at best for many people and their families, and in some cases, to disastrous consequences (Maurice, 1993). Even though well-controlled studies have shown many of these methods to be ineffective, and even though these programs lack sound scientific evidence of effects, risks, and benefits, parents and practitioners are still bombarded with sincere and well-meaning testimonials. In the quest to find and verify effective treatments and root out those whose strongest support is in the form of testimonials and slick advertisements on the Internet, measurement is the practitioner's best ally. Practitioners should maintain a healthy skepticism regarding claims for effectiveness.

Using Plato's "Allegory of the Cave" as a metaphor for teachers and practitioners who use untested and pseudo-instructional ideas, Heron, Tincani, Peterson, and Miller (2002) argued that practitioners would be better served if they adopted a scientific approach and cast aside pseudo-educational theories and philosophies. Practitioners who insist on direct and frequent measurement of all intervention and treatment programs will have empirical support to defend against political or social pressures to adopt unproven treatments. In a real sense they will be armed with what Carl Sagan (1996) called a "baloney detection kit."

• Measurement enables practitioners to be accountable to clients, consumers, employers, and society. Measuring the outcomes of their efforts directly and frequently helps practitioners answer confidently questions from parents and other caregivers about the effects of their efforts.

• Measurement helps practitioners achieve ethical standards. Ethical codes of conduct for behavior analytic practitioners require direct and frequent measurement of client behavior. Determining whether a client's right to effective treatment or effective education is being honored requires measurement of the behavior(s) for which treatment was sought or intended (Nakano, 2004; Van Houten et al., 1988). A behavioral practitioner who does not measure the nature and extent of relevant behavior changes of the clients she serves borders on malpractice. Writing in the context of educators, Kauffman (2005) offered this perspective on the relationship between measurement and ethical practice:

[T]he teacher who cannot or will not pinpoint and measure the relevant behaviors of the students he or she is teaching is probably not going to be very effective. . . . Not to define precisely and to measure these behavioral excesses and deficiencies, then, is a fundamental error; it is akin to the malpractice of a nurse who decides not to measure vital signs (heart rate, respiration rate, temperature, and blood pressure), perhaps arguing that he or she is too busy, that subjective estimates of vital signs are quite adequate, that vital signs are only superficial estimates of the patient's health, or that vital signs do not signify the nature of the underlying pathology. The teaching profession is dedicated to the task of changing behavior—changing behavior demonstrably for the better. What can one say, then, of educational practice that does not include precise definition and reliable measurement of the behavioral change induced by the teacher's methodology? It is indefensible. (p. 439)

Measurable Dimensions of Behavior

If a friend asked you to measure a coffee table, you would probably ask why he wants the table measured. In other words, what does he want measurement to tell him about the table? Does he need to know its height, width, and depth? Does he want to know how much the table weighs? Perhaps he is interested in the color of the table? Each of these reasons for measuring the table requires measuring a different dimensional quantity of the table (i.e., length, mass, and light reflection).

Behavior, like coffee tables and all entities in the physical world, also has features that can be measured (though length, weight, and color are not among them). Because behavior occurs within and across time, it has three fundamental properties, or dimensional quantities, that behavior analysts can measure. Johnston and Pennypacker (1993a) described these properties as follows:

• Repeatability (also called countability): Instances of a response class can occur repeatedly through time (i.e., behavior can be counted).

• Temporal extent: Every instance of behavior occurs during some amount of time (i.e., the duration of behavior can be measured).

• Temporal locus: Every instance of behavior occurs at a certain point in time with respect to other events (i.e., when behavior occurs can be measured).

Figure 1 shows a schematic representation of repeatability, temporal extent, and temporal locus. Alone and in combination, these dimensional quantities provide the basic and derivative measures used by applied behavior analysts. In the following pages these and two other measurable dimensions of behavior—its form and strength—will be discussed.

Measuring Behavior

Figure 1 Schematic representation of the dimensional quantities of repeatability, temporal extent, and temporal locus. Repeatability is shown by a count of four instances of a given response class (R1, R2, R3, and R4) within the observation period. The temporal extent (i.e., duration) of each response is shown by the raised and shaded portions of the time line. One aspect of the temporal locus (response latency) of two responses is shown by the elapsed time (L) between the onset of two antecedent stimulus events (S1 and S2) and the initiation of the responses that follow (R2 and R4).

Measures Based on Repeatability

Count

Count is a simple tally of the number of occurrences of a behavior—for example, the number of correct and incorrect answers written on an arithmetic practice sheet, the number of words written correctly during a spelling test, the number of times an employee carpools to work, the number of class periods a student is tardy, the number of widgets produced.

Although how often a behavior occurs is often of primary interest, measures of count alone may not provide enough information to allow practitioners to make useful program decisions or analyses. For example, data showing that Katie wrote correct answers to 5, 10, and 15 long division problems over three consecutive math class periods suggest improving performance. However, if the three measures of count were obtained in observation periods of 5 minutes, 20 minutes, and 60 minutes, respectively, a much different interpretation of Katie’s performance is suggested. Therefore, the observation period, or counting time, should always be noted when reporting measures of count.

Rate/Frequency

Combining observation time with count yields one of the most widely used measures in applied behavior analysis, rate (or frequency) of responding, defined as the number of responses per unit of time.2 A rate or frequency measure is a ratio consisting of the dimensional quantities of count (number of responses) and time (observation period in which the count was obtained).

2Although some technical distinctions exist between rate and frequency, the two terms are often used interchangeably in the behavior analysis literature. For discussion of various meanings of the two terms and examples of different methods for calculating ratios combining count and observation time, see Johnston and Pennypacker (1993b, Reading 4: Describing Behavior with Ratios of Count and Time).

Converting count to rate or frequency makes measurement more meaningful. For example, knowing that Yumi read 95 words correctly and 4 words incorrectly in 1 minute, that Lee wrote 250 words in 10 minutes, and that Joan’s self-injurious behavior occurred 17 times in 1 hour provides important information and context. Expressing the three previously reported measures of Katie’s performance in math class as rate reveals that she correctly answered long division problems at rates of 1.0, 0.5, and 0.25 per minute over three consecutive class periods.

Researchers and practitioners typically report rate data as a count per 10 seconds, count per minute, count per day, count per week, count per month, or count per year. As long as the unit of time is standard within or across experiments, rate measures can be compared. It is possible to compare rates of responding from counts obtained during observation periods of different lengths. For example, a student who, over four daily class activities of different durations, had 12 talk-outs in 20 minutes, 8 talk-outs in 12 minutes, 9 talk-outs in 15 minutes, and 12 talk-outs in 18 minutes had response rates of 0.60, 0.67, 0.60, and 0.67 per minute.
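The count-to-rate conversion described above can be sketched in a few lines of Python. This is an illustrative sketch, not part of the original text; the function name is hypothetical, and the data are the talk-out counts from the example.

```python
def response_rate(count, minutes):
    """Rate of responding: count divided by observation time (responses per minute)."""
    return count / minutes

# Talk-outs recorded across four class activities of different durations.
observations = [(12, 20), (8, 12), (9, 15), (12, 18)]  # (count, minutes)
rates = [round(response_rate(c, t), 2) for c, t in observations]
print(rates)  # [0.6, 0.67, 0.6, 0.67]
```

Dividing each count by its own counting time is what makes the four sessions comparable despite their different durations.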

The six rules and guidelines described below will help researchers and practitioners obtain, describe, and interpret count and rate data most appropriately.

Always Reference the Counting Time. When reporting rate-of-response data, researchers and practitioners must always include the duration of the observation time (i.e., the counting time). Comparing rate measures without reference to the counting time can lead to faulty interpretations of data. For example, if two students read from the same text at equal rates of 100 correct words per minute with no incorrect responses, it would appear that they exhibited equal performances. Without knowing the counting time, however, an evaluation of these two performances cannot be conducted. Consider, for example, that Sally and Lillian each ran at a rate of 7 minutes per mile. We cannot compare their performances without reference to the distances they ran. Running 1 mile at a rate of 7 minutes per mile is a different class of behavior than running at a rate of 7 minutes per mile over a marathon distance.

The counting time used for each session needs to accompany each rate measure when the counting time changes from session to session. For instance, rather than having a set counting time to answer arithmetic facts (e.g., a 1-minute timing), the teacher records the total time required for the student to complete an assigned set of arithmetic problems during each class session. In this situation, the teacher could report the student’s correct and incorrect answers per minute for each session, and also report the counting times for each session because they changed from session to session.

Calculate Correct and Incorrect Rates of Response When Assessing Skill Development. When a participant has an opportunity to make a correct or an incorrect response, a rate of response for each behavior should be reported. Calculating rate correct and rate incorrect is crucial for evaluating skill development because an improving performance cannot be assessed by knowing only the correct rate. The rate of correct responses alone could show an improving performance, but if incorrect responding is also increasing, the improvement may be illusory. Correct and incorrect rate measures together provide important information to help the teacher evaluate how well the student is progressing. Ideally, the correct response rate accelerates toward a performance criterion or goal, and the incorrect response rate decelerates to a low, stable level. Also, reporting rate correct and rate incorrect provides for an assessment of proportional accuracy while maintaining the dimensional quantities of the measurement (e.g., 20 correct and 5 incorrect responses per minute = 80% accuracy, or a multiple of X4 proportional accuracy).

Correct and incorrect response rates provide essential data for the assessment of fluent performance (i.e., proficiency) (Kubina, 2005). The assessment of fluency requires measurement of the number of correct and incorrect responses per unit of time (i.e., proportional accuracy). Analysts cannot assess fluency using only the correct rate because a fluent performance must be accurate also.
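As a minimal sketch of the computation described above (the function name is hypothetical; the 20-correct, 5-incorrect example is from the text):

```python
def rates_and_accuracy(correct, incorrect, minutes):
    """Correct rate, incorrect rate (per minute), percent accuracy, and
    proportional accuracy (the multiple of corrects to incorrects)."""
    correct_rate = correct / minutes
    incorrect_rate = incorrect / minutes
    accuracy_pct = 100 * correct / (correct + incorrect)
    multiple = correct / incorrect  # X4 means four corrects per incorrect
    return correct_rate, incorrect_rate, accuracy_pct, multiple

# 20 correct and 5 incorrect responses in 1 minute.
print(rates_and_accuracy(20, 5, 1))  # (20.0, 5.0, 80.0, 4.0)
```

Reporting both rates preserves the dimensional quantities (count per unit time) that a bare accuracy percentage discards.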

Take Into Account the Varied Complexity of Responses. Rate of responding is a sensitive and appropriate measure of skill acquisition and the development of fluent performances only when the level of difficulty and complexity from one response to the next remains constant within and across observations. The rate-of-response measures previously discussed have been with whole units in which the response requirements are essentially the same from one response to the next. Many important behaviors, however, are composites of two or more component behaviors, and different situations call for varied sequences or combinations of the component behaviors.

One method for measuring rate of responding that takes varied complexity into account for multiple-component behaviors is to count the operations necessary to achieve a correct response. For example, in measuring students’ math calculation performance, instead of counting a two-digit plus three-digit addition problem with regrouping as correct or incorrect, a researcher might consider the number of steps that were completed in correct sequence within each problem. Helwig (1973) used the number of operations needed to produce the answers to mathematics problems to calculate response rates. In each session the student was given 20 multiplication and division problems selected at random from a set of 120 problems. The teacher recorded the duration of time for each session. All the problems were of two types: a × b = c and a ÷ b = c. For each problem the student was asked to find one of the factors: the product, the dividend, the divisor, or the quotient. Depending on the problem, finding the missing factor required from one to five operations. For example, writing the answer 275 in response to the problem 55 × 5 = ? would be scored as four correct responses because finding the missing factor requires four operations:

1. Multiply the ones: 5 × 5 = 25.

2. Record the 5 ones and carry the 2 tens.

3. Multiply the tens: 5 × 5(0) = 25(0).

4. Add the 2 tens carried and record the sum (27).

When more than one way to find the answer was possible, the mean number of operations was figured for that problem. For example, the answer to the problem 4 × ? = 164 can be obtained by multiplication with two operations and by division with four operations. The mean number of operations is three. Helwig counted the number of operations completed correctly and incorrectly per set of 20 problems and reported correct and incorrect rates of response.

Use Rate of Responding to Measure Free Operants. Rate of response is a useful measure for all behaviors characterized as free operants. The term free operant refers to behaviors that have discrete beginning and ending points, require minimal displacement of the organism in time and space, can be emitted at nearly any time, do not require much time for completion, and can be emitted over a wide range of response rates (Baer & Fowler, 1984). Skinner (1966) used rate of response of free operants as the primary dependent variable when developing the experimental analysis of behavior. The bar press and key peck are typical free operant responses used in nonhuman animal laboratory studies. Human subjects in a basic laboratory experiment might depress keys on a keyboard. Many socially significant behaviors meet the definition of free operants: number of words read during a 1-minute counting period, number of head slaps per minute, number of letter strokes written in 3 minutes. A person can make these responses at almost any time, each discrete response does not use much time, and each response class can produce a wide range of rates.

Rate of response is a preferred measure for free operants because it is sensitive to changes in behavior values (e.g., oral reading may occur at rates ranging from 0 to 250 or more correct words per minute), and because it offers clarity and precision by defining a count per unit of time.

Do Not Use Rate to Measure Behaviors that Occur within Discrete Trials. Rate of response is not a useful measure for behaviors that can occur only within limited or restricted situations. For example, response rates of behaviors that occur within discrete trials are controlled by a given opportunity to emit the response. Typical discrete trials used in nonhuman animal laboratory studies include moving from one end of a maze or shuttle box to another. Typical applied examples include responses to a series of teacher-presented flash cards; answering a question prompted by the teacher; and, when presented with a sample color, pointing to a color from an array of three colors that matches the sample color. In each of these examples, the rate of response is controlled by the presentation of the antecedent stimulus. Because behaviors that occur within discrete trials are opportunity bound, measures such as percentage of response opportunities in which a response was emitted or trials-to-criterion should be employed, but not rate measures.

Do Not Use Rate to Measure Continuous Behaviors that Occur for Extended Periods of Time. Rate is also a poor measure for continuous behaviors that occur for extended periods of time, such as participating in playground games or working at a play or activity center. Such behaviors are best measured by whether they are “on” or “off” at any given time, yielding data on duration or estimates of duration obtained by interval recording.

Celeration

Just like a car that accelerates when the driver presses the gas pedal and decelerates when the driver releases the pedal or steps on the brake, rates of response accelerate and decelerate. Celeration is a measure of how rates of response change over time. Rate of response accelerates when a participant responds faster over successive counting periods and decelerates when responding slows over successive observations. Celeration incorporates three dimensional quantities: count per unit time/per unit of time; or expressed another way, rate/per unit of time (Graf & Lindsley, 2002; Johnston & Pennypacker, 1993a). Celeration provides researchers and practitioners with a direct measure of dynamic patterns of behavior change such as transitions from one steady state of responding to another and the acquisition of fluent levels of performance (Cooper, 2005).

3Celeration, the root word of acceleration and deceleration, is a generic term without specific reference to accelerating or decelerating response rates. Practitioners and researchers should use acceleration or deceleration when describing increasing or decreasing rates of response.

4Methods for calculating and drawing trend lines are described in the chapter entitled “Constructing and Interpreting Graphic Displays of Behavioral Data.”

The Standard Celeration Chart provides a standard format for displaying measures of celeration (Pennypacker, Gutierrez, & Lindsley, 2003).3 There are four Standard Celeration Charts, showing rate as count (a) per minute, (b) per week, (c) per month, and (d) per year. These four charts provide different levels of magnification for viewing and interpreting celeration. Response rate is displayed on the vertical, or y, axis of the charts, and successive calendar time in days, weeks, months, or years is presented on the horizontal, or x, axis. Teachers and other behavioral practitioners use the count per minute–successive calendar days chart most often.

Celeration is displayed on all Standard Celeration Charts with a celeration trend line. A trend line, a straight line drawn through a series of graphed data points, visually represents the direction and degree of trend in the data.4 The celeration trend line shows a factor by which rate of response is multiplying (accelerating) or dividing (decelerating) across the celeration time period (e.g., rate per week, rate per month, rate per year, rate per decade). A celeration time period is 1/20th of the horizontal axis of all Standard Celeration Charts. For example, the celeration period for the successive calendar days chart is per week. The celeration period for the successive calendar weeks chart is per month.

A trend line drawn from the bottom left corner to the top right corner on all Standard Celeration Charts has a slope of 34°, and has an acceleration value of X2 (read as “times-2”; celerations are expressed as multiples or divisors). It is this 34° angle of celeration, a linear measure of behavior change across time, that makes a chart standard. For example, if the response rate were 20 per minute on Monday, 40 per minute the next Monday, and 80 per minute on the third Monday, the celeration line (i.e., the trend line) on the successive calendar days chart would show a X2 acceleration, a doubling in rate per week. A X2 acceleration on the successive calendar weeks chart is a doubling in rate every week (e.g., 20 in Week 1, 40 in Week 2, and 80 in Week 3). In most cases, celeration values should not be calculated with fewer than seven data points.
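The X2-per-week example above can be checked with a short calculation. This sketch assumes the rate grows by a constant factor each celeration period (the function name is hypothetical):

```python
def celeration_factor(rate_start, rate_end, periods):
    """Factor by which rate multiplies per celeration period (e.g., per week),
    assuming constant multiplicative growth across `periods` periods."""
    return (rate_end / rate_start) ** (1 / periods)

# 20 per minute on the first Monday, 80 per minute two weeks later:
# a X2 acceleration (doubling in rate per week).
print(celeration_factor(20, 80, 2))  # 2.0
```

The exponent reflects that celeration is multiplicative: equal weekly factors compound, which is why celeration lines are straight on the Standard Celeration Chart’s logarithmic vertical axis.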


Measures Based on Temporal Extent

Duration

Duration, the amount of time in which behavior occurs, is the basic measure of temporal extent. Researchers and practitioners measure duration in standard units of time (e.g., Enrique worked cooperatively with his peer tutor for 12 minutes and 24 seconds today).

Duration is important when measuring the amount of time a person engages in the target behavior. Applied behavior analysts measure the duration of target behaviors that a person has been engaging in for too long or too short a time, such as a child with developmental disabilities who tantrums for more than an hour at a time, or a student who sticks with an academic task for no more than 30 seconds at a time.

Duration is also an appropriate measure for behaviors that occur at very high rates (e.g., rocking; rapid jerks of the head, hands, legs) or task-oriented continuous behaviors that occur for an extended time (e.g., cooperative play, on-task behavior, off-task behavior).

Behavioral researchers and practitioners commonly measure one or both of two kinds of duration measures: total duration per session or observation period and duration per occurrence.

Total Duration per Session. Total duration is a measure of the cumulative amount of time in which a person engages in the target behavior. Applied behavior analysts use two procedures for measuring and reporting total duration. One method involves recording the cumulative amount of time a behavior occurs within a specified observation period. For example, a teacher concerned that a kindergarten child is spending too much time in solitary play could record the total time that the child is observed engaged in solitary play during daily 30-minute free-play periods. Procedurally, when the child engages in solitary play, the teacher activates a stopwatch. When solitary play ceases, the teacher stops the stopwatch but does not reset it. When the child drifts into solitary play again, the teacher starts the stopwatch again. Over the course of a 30-minute free-play period, the child might have engaged in a total of 18 minutes of solitary play. If the duration of the free-play periods varied from day to day, the teacher would report the total duration of solitary behavior as a percentage of total time observed (i.e., total duration of solitary behavior ÷ duration of free-play period × 100 = % of solitary behavior in one free-play period). In this case, 18 minutes of cumulative solitary play within a 30-minute session yields 60%.
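The percentage-of-total-time formula above reduces to a one-line calculation (an illustrative sketch; the function name is hypothetical, and the data are the solitary-play example):

```python
def percent_duration(total_duration_min, observation_min):
    """Total duration of the behavior as a percentage of the observation period."""
    return 100 * total_duration_min / observation_min

# 18 minutes of cumulative solitary play within a 30-minute free-play period.
print(percent_duration(18, 30))  # 60.0
```

Converting to a percentage makes sessions of unequal length comparable, just as dividing count by counting time does for rate.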

Zhou, Iwata, Goff, and Shore (2001) used total duration measurement to assess leisure-item preferences of people with profound developmental disabilities. They used a stopwatch to record physical engagement with an item (i.e., contact with both hand and the item) during 2-minute trials. They reported total contact in seconds by summing the duration values across three 2-minute trials of each assessment. McCord, Iwata, Galensky, Ellingson, and Thomson (2001) started and stopped a stopwatch to measure the total duration in seconds that two adults with severe or profound mental retardation engaged in problem behavior.

The other measure of total duration recording is the amount of time a person spends engaged in an activity, or the time a person needs to complete a specific task, without specifying a minimum or maximum observation period. For example, a community planner concerned with the amount of time senior citizens attended a new recreation center could report the total number of minutes per day that each senior spends at the center.

Duration per Occurrence. Duration per occurrence is a measure of the duration of time that each instance of the target behavior occurs. For example, assume that a student leaves his seat frequently and for varying amounts of time. Each time the student leaves his seat, the duration of his out-of-seat behavior could be recorded with a stopwatch. When he leaves his seat, the stopwatch is engaged. When he returns, the stopwatch is disengaged, the total time recorded, and the stopwatch reset to zero. When the student leaves his seat again, the process is repeated. The resulting measures yield data on the duration of occurrences of each out-of-seat behavior over the observation period.

In a study evaluating an intervention for decreasing noisy disruptions by children on a school bus, Greene, Bailey, and Barber (1981) used a sound-recording device that automatically recorded both the number of times that outbursts of sound exceeded a specified threshold and the duration in seconds that each outburst remained above that threshold. The researchers used the mean duration per occurrence of noisy disruptions as one measure for evaluating the intervention’s effects.

Selecting and Combining Measures of Count and Duration. Measurements of count, total duration, and duration per occurrence provide different views of behavior. Count and duration measure different dimensional quantities of behavior, and these differences provide the basis for selecting which dimension to measure. Event recording measures repeatability, whereas duration recording measures temporal extent. For instance, a teacher concerned about a student who is out of her seat “too much” could tally each time the student leaves her seat. The behavior is discrete and is unlikely


to occur at such a high rate that counting the number of occurrences would be difficult. Because any instance of out-of-seat behavior has the potential to occur for an extended time and the amount of time the student is out of her seat is a socially significant aspect of the behavior, the teacher could also use total duration recording.

Using count to measure out-of-seat behavior provides the number of times the student left her seat. A measure of total duration will indicate the amount and proportion of time that the student was out of her seat during the observation period. Because of the relevance of temporal extent in this case, duration would provide a better measurement than count would. The teacher might observe that the student left her seat once in a 30-minute observation period. One occurrence of the behavior in 30 minutes might not be viewed as a problem. However, if the student remained out of her seat for 29 of the observation period’s 30 minutes, a very different view of the behavior is obtained.

In this situation, duration per occurrence would make an even better measurement selection than either count or total duration recording would. That is because duration per occurrence measures the repeatability and the temporal extent of the behavior. A duration-per-occurrence measure would give the teacher information on the number of times the student was out of her seat and the duration of each occurrence. Duration per occurrence is often preferable to total duration because it is sensitive to the number of instances of the target behavior. Further, if a total duration measure is needed for other purposes, the individual durations of each of the counted and timed occurrences could be summed. However, if behavior endurance (e.g., academic responding, motor movements) is the major consideration, then total duration recording may be sufficient (e.g., oral reading for 3 minutes, free writing for 10 minutes, running for 10 kilometers).

Measures Based on Temporal Locus

As stated previously, temporal locus refers to when an instance of behavior occurs with respect to other events of interest. The two types of events most often used by researchers as points of reference for measuring temporal locus are the onset of antecedent stimulus events and the cessation of the previous response. These two points of reference provide the context for measuring response latency and interresponse time, the two measures of temporal locus most frequently reported in the behavior analysis literature.

Response Latency

Response latency (or, more commonly, latency) is a measure of the elapsed time between the onset of a stimulus and the initiation of a subsequent response.5 Latency is an appropriate measure when the researcher or practitioner is interested in how much time occurs between an opportunity to emit a behavior and when the behavior is initiated. For example, a student might exhibit excessive delays in complying with teacher directions. Response latency would be the elapsed time between the end of the teacher’s direction and the moment the student begins to comply with the direction. Interest can also focus on latencies that are too short. A student may give incorrect answers because she does not wait for the teacher to complete the questions. An adolescent who, at the slightest provocation from a peer, immediately retaliates has no time to consider alternative behaviors that could defuse the situation and lead to improved interactions.

5Latency is most often used to describe the time between the onset of an antecedent stimulus change and the initiation of a response. However, the term can be used to refer to any measure of the temporal locus of a response with respect to any type of antecedent event. See Johnston and Pennypacker (1993b).

Researchers typically report response latency data as the mean, median, and range of individual latency measures per observation period. For example, Lerman, Kelley, Vorndran, Kuhn, and LaRue (2002) used a latency measure to assess the effects of different reinforcement magnitudes (i.e., 20 seconds, 60 seconds, or 300 seconds of access to a reinforcer) on postreinforcement pause. The researchers measured the number of seconds from the end of each reinforcer-access interval to the first instance of the target behavior (a communication response). They then calculated and graphed the mean, median, and range of response latencies measured during each session (see Lerman et al., 2002, p. 41).
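Summarizing a session's latencies by mean, median, and range is a small computation. This sketch uses Python's standard library; the function name and the latency values are illustrative, not from any cited study:

```python
from statistics import mean, median

def latency_summary(latencies_s):
    """Mean, median, and range of response latencies (in seconds)
    for one observation period."""
    return mean(latencies_s), median(latencies_s), (min(latencies_s), max(latencies_s))

# Hypothetical latencies between a teacher's direction and the start of compliance.
print(latency_summary([4, 7, 6, 20, 9]))  # (9.2, 7, (4, 20))
```

Reporting the median and range alongside the mean guards against a single long latency (here, 20 s) masking otherwise prompt responding.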

Interresponse Time

Interresponse time (IRT) is the amount of time that elapses between two consecutive instances of a response class. Like response latency, IRT is a measure of temporal locus because it identifies when a specific instance of behavior occurs with respect to another event (i.e., the previous response). Figure 2 shows a schematic representation of interresponse time.

Although it is a direct measure of temporal locus, IRT is functionally related to rate of response. Shorter IRTs coexist with higher rates of response, and longer IRTs are found within lower response rates. Applied behavior analysts measure IRT when the time between instances of a response class is important. IRT provides a basic measure for implementing and evaluating interventions using differential reinforcement of low rates (DRL), a procedure for using reinforcement to reduce the rate of responding. Like latency data, IRT


measures are typically reported and displayed graphically by mean (or median) and range per observation period.
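Because each IRT runs from the termination of one response to the initiation of the next, IRTs can be derived from a chronological record of response onsets and offsets. A minimal sketch (the function name and timeline are hypothetical):

```python
def interresponse_times(responses):
    """IRTs: elapsed time from the offset of each response to the onset of the next.

    `responses` is a chronological list of (onset, offset) times in seconds.
    """
    return [next_onset - prev_offset
            for (_, prev_offset), (next_onset, _) in zip(responses, responses[1:])]

# Four hypothetical responses on a session timeline (seconds).
responses = [(0, 2), (5, 6), (9, 12), (20, 21)]
print(interresponse_times(responses))  # [3, 3, 8]
```

Note that n responses yield n − 1 IRTs, and that shorter IRTs correspond to the higher response rates the text describes.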

Derivative Measures

Percentage and trials-to-criterion, two forms of data derived from direct measures of dimensional quantities of behavior, are frequently used in applied behavior analysis.

Percentage

A percentage is a ratio (i.e., a proportion) formed by combining the same dimensional quantities, such as count (i.e., number ÷ number) or time (i.e., duration ÷ duration; latency ÷ latency). A percentage expresses the proportional quantity of some event in terms of the number of times the event occurred per 100 opportunities that the event could have occurred. For example, if a student answered correctly 39 of 50 items on an exam, an accuracy percentage can be calculated by dividing the number of correct answers by the total number of test items and multiplying that quotient by 100 (39 ÷ 50 = .78; .78 × 100 = 78%).

Percentage is frequently used in applied behavior analysis to report the proportion of total correct responses. For example, Ward and Carnes (2002) used a percentage-of-correct-performances measure in their study evaluating the effects of goal setting and public posting on skill execution of three defensive skills by linebackers on a college football team. The researchers recorded counts of correct and incorrect reads, drops, and tackles by each player and calculated accuracy percentages based on the number of opportunities for each type of play.

Percentage is also used frequently in applied behavior analysis to report the proportion of observation intervals in which the target behavior occurred. These measures are typically reported as a proportion of intervals within a session. Percentage can also be calculated for an entire observation session. In a study analyzing the differential effects of reinforcer rate, quality, immediacy, and response effort on the impulsive behavior of students with attention-deficit/hyperactivity disorder, Neef, Bicard, and Endo (2001) reported the percentage of time each student allocated to two sets of concurrently available math problems (e.g., time allocated to math problems yielding high-quality delayed reinforcers ÷ total time possible × 100 = %).

Percentages are used widely in education, psychology, and the popular media, and most people understand proportional relationships expressed as percentages. However, percentages are often used improperly and frequently misunderstood. Thus, we offer several notes of caution on the use and interpretation of percentages.

Percentages most accurately reflect the level of and changes in behavior when calculated with a divisor (or denominator) of 100 or more. However, most percentages used by behavioral researchers and practitioners are calculated with divisors much smaller than 100. Percentage measures based on small divisors are unduly affected by small changes in behavior. For example, a change in count of just 1 response per 10 opportunities changes the percentage by 10%. Guilford (1965) cautioned that it is unwise to compute percentages with divisors smaller than 20. For research purposes, we recommend that whenever possible, applied behavior analysts design measurement systems in which resultant percentages are based on no fewer than 30 response opportunities or observation intervals.

Sometimes changes in percentage can erroneously suggest improving performance. For example, an accuracy percentage could increase even though the frequency of incorrect responses remains the same or worsens. Consider a student whose accuracy in answering math problems on Monday is 50% (5 of 10 problems answered correctly) and on Tuesday it is 60% (12 of 20 problems answered correctly). Even with the improved proportional accuracy, the number of errors increased (from 5 on Monday to 8 on Tuesday).
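The pitfall described above is easy to demonstrate numerically (an illustrative sketch using the Monday/Tuesday figures from the text; the function name is hypothetical):

```python
def accuracy(correct, attempted):
    """Percent correct: correct responses per 100 response opportunities."""
    return 100 * correct / attempted

# Monday: 5 of 10 correct. Tuesday: 12 of 20 correct.
print(accuracy(5, 10), accuracy(12, 20))  # 50.0 60.0
# Accuracy rose by 10 percentage points, yet the error count grew from 5 to 8.
print(10 - 5, 20 - 12)                    # 5 8
```

Because the divisor changed from 10 to 20, the percentage improved while incorrect responding worsened, which is exactly why the text recommends reporting correct and incorrect counts (or rates) alongside any percentage.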

Although no other measure communicates proportional relationships better than percentage does, its use as a behavioral quantity is limited because a percentage has no dimensional quantities.6 For example, percentage cannot be used to assess the development of proficient or fluent behavior because an assessment of proficiency cannot occur without reference to count and time, but it can show the proportional accuracy of a targeted behavior during the development of proficiency.

Figure 2 Schematic representation of three interresponse times (IRT). IRT, the elapsed time between the termination of one response and the initiation of the next response, is a commonly used measure of temporal locus.

6Because percentages are ratios based on the same dimensional quantity, the dimensional quantity is canceled out and no longer exists in the percentage. For example, an accuracy percentage created by dividing the number of correct responses by the number of response opportunities removes the actual count. However, ratios created from different dimensional quantities retain the dimensional quantities of each component. For example, rate retains a count per unit of time. See Johnston and Pennypacker (1993a) for further explication.

Measuring Behavior

Another limitation of percentage as a measure of behavior change is that lower and upper limits are imposed on the data. For example, using percent correct to assess a student's reading ability imposes an artificial ceiling on the measurement of performance. A learner who correctly reads 100% of the words she is presented cannot improve in terms of the measure used.

Different percentages can be reported from the same data set, with each percentage suggesting significantly different interpretations. For example, consider a student who scores 4 correct (20%) on a 20-item pretest and 16 correct (80%) when the same 20 items are given as a posttest. The most straightforward description of the student's improvement from pretest to posttest (60%) compares the two measures using the original basis or divisor (20 items). Because the student scored 12 more items correct on the posttest than he did on the pretest, his performance on the posttest could be reported as an increase (gain score) over his pretest performance of 60%. And given that the student's posttest score represented a fourfold improvement in correct responses, some might report the posttest score as a 300% improvement over the pretest, a completely different interpretation from an improvement of 60%.
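The competing descriptions of the pretest/posttest example above reduce to simple arithmetic, which can be sketched as:

```python
# Pretest/posttest example from the text: 4 of 20 correct on the
# pretest, 16 of 20 correct on the posttest.
pre_correct, post_correct, items = 4, 16, 20

pre_pct = 100 * pre_correct / items     # 20.0
post_pct = 100 * post_correct / items   # 80.0

# Gain score: improvement in percentage points on the original divisor.
gain_points = post_pct - pre_pct        # 60.0

# Relative change: the fourfold increase in correct responses,
# reported as a percentage of the pretest count.
relative_gain = 100 * (post_correct - pre_correct) / pre_correct  # 300.0

print(gain_points, relative_gain)
```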

Although percentages greater than 100% are sometimes reported, strictly speaking, doing so is incorrect. Although a behavior change greater than 100% may seem impressive, it is a mathematical impossibility. A percentage is a proportional measure of a total set, where x (the proportion) of y (the total) is expressed as 1 part in 100. A proportion of something cannot exceed the total of that something or be less than zero (i.e., there is no such thing as a negative percentage). Every coach's favorite athlete who "always gives 110%" simply does not exist.7

Trials-to-Criterion

Trials-to-criterion is a measure of the number of response opportunities needed to achieve a predetermined level of performance. What constitutes a trial depends on the nature of the target behavior and the desired performance level. For a skill such as shoe tying, each opportunity to tie a shoe could be considered a trial, and trials-to-criterion data are reported as the number of trials required for the learner to tie a shoe correctly without prompts or assistance. For behaviors involving problem solving or discriminations that must be applied across a large number of examples to be useful, a trial might consist of a block, or series of response opportunities, in which each response opportunity involves the presentation of a different exemplar of the problem or discrimination. For example, a trial for discriminating between the short and long vowel sounds of the letter o could be a block of 10 consecutive opportunities to respond, in which each response opportunity is the presentation of a word containing the letter o, with short-vowel and long-vowel o words (e.g., hot, boat) presented in random order. Trials-to-criterion data could be reported as the number of blocks of 10 trials required for the learner to correctly pronounce the o sound in all 10 words. Count would be the basic measure from which the trials-to-criterion data would be derived.

7When a person reports a percentage in excess of 100% (e.g., "Our mutual fund grew by 120% during the recent bear market"), he is probably using the misstated percentage in comparison to the previous base unit, not as a proportion of it. In this example, the mutual fund's 20% increase makes its value 1.2 times greater than its value when the bear market began.
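The block-based trials-to-criterion measure just described can be derived directly from count data. A minimal sketch, using hypothetical per-block scores:

```python
# Each entry is the number of correct responses in one block of
# 10 discrimination trials (hypothetical data).
block_scores = [6, 8, 9, 10]

def blocks_to_criterion(scores, criterion=10):
    """Return the 1-based index of the first block meeting the
    criterion (i.e., the number of 10-trial blocks needed), or
    None if the criterion was never met."""
    for block_number, correct in enumerate(scores, start=1):
        if correct >= criterion:
            return block_number
    return None

print(blocks_to_criterion(block_scores))  # 4 blocks of 10 trials
```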

Other basic measures, such as rate, duration, and latency, can also be used to determine trials-to-criterion data. For example, a trials-to-criterion measure for solving two-digit minus two-digit subtraction problems requiring borrowing could be the number of practice sheets of 20 randomly generated and sequenced problems a learner completes before she is able to solve all 20 problems on a single sheet in 3 minutes or less.

Trials-to-criterion data are often calculated and reported as an ex post facto measure of one important aspect of the "cost" of a treatment or instructional method. For example, Trask-Tyler, Grossi, and Heward (1994) reported the number of instructional trials needed by each of three students with developmental disabilities and visual impairments to prepare three food items from recipes without assistance two consecutive times over two sessions. Each recipe entailed from 10 to 21 task-analyzed steps.

Trials-to-criterion data are used frequently to compare the relative efficiency of two or more treatments or instructional methods. For example, by comparing the number of practice trials needed for a student to master weekly sets of spelling words practiced in two different ways, a teacher could determine whether the student learns spelling words more efficiently with one method than with another. Sometimes trials-to-criterion data are supplemented by information on the number of minutes of instruction needed to reach predetermined performance criteria (e.g., Holcombe, Wolery, Werts, & Hrenkevich, 1993; Repp, Karsh, Johnson, & VanLaarhoven, 1994).

Trials-to-criterion measures can also be collected and analyzed as a dependent variable throughout a study. For example, R. Baer (1987) recorded and graphed trials-to-criterion on a paired-associates memory task as a dependent variable in a study assessing the effects of caffeine on the behavior of preschool children.

Trials-to-criterion data can also be useful for assessing a learner's increasing competence in acquiring a related class of concepts. For instance, teaching a concept such as the color red to a child could consist of presenting "red" and "not red" items to the child and providing differential reinforcement for correct responses. Trials-to-criterion data could be collected of the number of "red" and "not red" exemplars required before the child achieves a specified level of performance with the discrimination. The same instructional and data collection procedures could then be used in teaching other colors to the child. Data showing that the child achieves mastery of each newly introduced color in fewer instructional trials than it took to learn previous colors might be evidence of the child's increasing agility in learning color concepts.

Definitional Measures

In addition to the basic and derived dimensions already discussed, behavior can also be defined and measured by its form and intensity. Neither form (i.e., topography) nor intensity (i.e., magnitude) of responding is a fundamental dimensional quantity of behavior, but each is an important quantitative parameter for defining and verifying the occurrence of many response classes. When behavioral researchers and practitioners measure the topography or magnitude of a response, they do so to determine whether the response represents an occurrence of the target behavior. Occurrences of the target behavior that are verified on the basis of topography or magnitude are then measured by one or more aspects of count, temporal extent, or temporal locus. In other words, measuring topography or magnitude is sometimes necessary to determine whether instances of the targeted response class have occurred, but the subsequent quantification of those responses is recorded, reported, and analyzed in terms of the fundamental and derivative measures of count, rate, duration, latency, IRT, percentage, and trials-to-criterion.

Topography

Topography, which refers to the physical form or shape of a behavior, is both a measurable and malleable dimension of behavior. Topography is a measurable quantity of behavior because responses of varying form can be differentiated from one another. That topography is a malleable aspect of behavior is evidenced by the fact that responses of varying form are shaped and selected by their consequences.

A group of responses with widely different topographies may serve the same function (i.e., form a response class). For example, each of the different ways of writing the word topography, shown in Figure 3, would produce the same effect on most readers. Membership in some response classes, however, is limited to responses within a narrow range of topographies. Although each of the response topographies in Figure 3 would meet the functional requirements of most written communications, none would meet the standards expected of an advanced calligraphy student.

Topography is of obvious and primary importance in performance areas in which form, style, or artfulness of behavior is valued in its own right (e.g., painting, sculpting, dancing, gymnastics). Measuring and providing differential consequences for responses of varied topographies is also important when the functional outcomes of the behavior correlate highly with specific topographies. A student who sits with good posture and looks at the teacher is more likely to receive positive attention and opportunities to participate academically than is a student who slouches with her head on the desk (Schwarz & Hawkins, 1970). Basketball players who execute foul shots with a certain form make a higher percentage of shots than they do when they shoot idiosyncratically (Kladopoulos & McComas, 2001).

Figure 3 Topography, the physical form or shape of behavior, is a measurable dimension of behavior.

Trap, Milner-Davis, Joseph, and Cooper (1978) measured the topography of cursive handwriting by first-grade students. Plastic transparent overlays were used to detect deviations from model letters in lower- and uppercase letters written by the children (see Figure 4). The researchers counted the number of correct letter strokes (those that met all of the specified topographical criteria, e.g., all letter strokes contained within the 2-mm parameters of the overlay, connected, complete, of sufficient length) and used the percentage correct of all strokes written by each student to assess the effects of visual and verbal feedback and a certificate of achievement on the children's acquisition of cursive handwriting skills.

Figure 4 Examples of outlines on a transparent overlay used to measure inside and outside boundaries of manuscript letters and an illustration of using the transparent overlay to measure the letter m. Because the vertical stroke of the letter m extended beyond the confines of the outline, it did not meet the topographical criteria for a correct response.

From "The Measurement of Manuscript Letter Strokes" by J. J. Helwig, J. C. Johns, J. E. Norman, and J. O. Cooper, 1976, Journal of Applied Behavior Analysis, 9, p. 231. Copyright 1976 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.

Magnitude

Magnitude refers to the force or intensity with which a response is emitted. The desired outcomes of some behaviors are contingent on responding at or above (or below) a certain intensity or force. A screwdriver must be turned with sufficient force to insert or remove screws; a pencil must be applied to paper with enough force to leave legible marks. On the other hand, applying too much torque to a misaligned screw or bolt is likely to strip the threads, and pushing too hard with a pencil will break its point.

The magnitude of speech or other vocalizations that were considered too loud or too soft has been measured in several studies. Schwarz and Hawkins (1970) measured the voice loudness of Karen, a sixth-grade girl who spoke so softly in class that her voice was usually inaudible to others. Karen's voice was recorded on videotape during two class periods each day. (The videotape was also used to obtain data on two other behaviors: face touching and the amount of time Karen sat in a slouched position.) The researchers then played the videotape into an audiotape recorder with a loudness indicator and counted the number of times the needle went above a specified level on the loudness meter. Schwarz and Hawkins used the number (proportion) of needle inflections per 100 words spoken by Karen as the primary measure for evaluating the effects of an intervention on increasing her voice volume during class.

Greene and colleagues (1981) used an automated sound-recording device to measure the magnitude of noisy disruptions by middle school students on a school bus. The recording device could be adjusted so that only sound levels above a predetermined threshold activated it. The device automatically recorded both the number of times outbursts of sound exceeded a specified threshold (93 dB) and the total duration in seconds that the sound remained above that threshold. When the noise level exceeded the specified threshold, a light on a panel that all students could see was activated automatically. When the light was off, students listened to music during the bus ride; when the number of noisy disruptions was below a criterion, they participated in a raffle for prizes. This intervention drastically reduced the outbursts and other problem behaviors as well. Greene and colleagues reported both the number and the mean duration per occurrence of noisy disruptions as measures of evaluating the intervention's effects.8
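The logic of the automated device just described can be sketched in a few lines; the one-sample-per-second representation and the sample values below are assumptions for illustration, not details of the actual apparatus:

```python
THRESHOLD_DB = 93  # threshold reported by Greene and colleagues

def noisy_disruptions(samples, threshold=THRESHOLD_DB):
    """Given sound-level samples (dB, one per second), return
    (number of outbursts, total seconds above threshold)."""
    outbursts, seconds_above, above = 0, 0, False
    for db in samples:
        if db > threshold:
            seconds_above += 1
            if not above:      # a new outburst begins
                outbursts += 1
            above = True
        else:
            above = False      # the current outburst has ended
    return outbursts, seconds_above

samples = [88, 95, 96, 90, 94, 89, 97, 97, 97, 85]
print(noisy_disruptions(samples))  # (3, 6): 3 outbursts, 6 seconds above
```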

Table 1 summarizes the measurable dimensions of behavior and considerations for their use.

8Researchers sometimes manipulate and control response topography and magnitude to assess their possible effects as independent variables. Piazza, Roane, Kenney, Boney, and Abt (2002) analyzed the effects of different response topographies on the frequency of pica (e.g., ingesting nonnutritive matter that may threaten one's life) by three females. Pica items were located in various places, which required the subjects to respond in different ways (e.g., reach, bend over, get on the floor, open a container) to obtain them. Pica decreased when more elaborate response topographies were required to obtain pica items. Van Houten (1993) reported that when a boy with a long history of intense and high-frequency face slapping wore 1.5-pound wrist weights, face slapping immediately dropped to zero. Studies such as these suggest that problem behaviors may be reduced when engaging in those behaviors requires more effortful responses in terms of topography or magnitude.

Table 1 Fundamental, derived, and definitional dimensions by which behavior can be measured and described.

Fundamental measures

Count: The number of responses emitted during an observation period.
How calculated: Simple tally of the number of responses observed. (Example: Judah contributed 5 comments during the 10-minute class discussion.)
Considerations:
• Observation time in which count was recorded should be referenced.
• Most useful for comparison when observation time (counting time) is constant across observations.
• Used in calculating rate/frequency, celeration, percentages, and trials-to-criterion measures.

Rate/frequency: A ratio of count per observation time; often expressed as count per standard unit of time (e.g., per minute, per hour, per day).
How calculated: Report the number of responses recorded per time in which observations were conducted; often calculated by dividing the number of responses recorded by the number of standard units of time in which observations were conducted. (Example: If Judah's comments were counted during a 10-minute class discussion, his rate of responding would be 5 comments per 10 minutes, or 0.5 comments per minute.)
Considerations:
• If observation time varies across measurements, calculate with standard units of time.
• Minimize faulty interpretations by reporting counting time.
• Evaluating skill development and fluency requires measurement of correct and incorrect response rates.
• Account for varied complexity and difficulty when calculating response rates.
• Rate is the most sensitive measure of changes in behavior repeatability.
• Preferred measure for free operants.
• Poor measure for behaviors that occur within discrete trials or for behaviors that occur for extended durations.

Duration: The amount of time that a behavior occurs.
How calculated: Total duration, by two methods: (a) add the individual amounts of time for each response during an observation period; or (b) record the total time the individual is engaged in an activity, or needs to complete a task, without a minimum or maximum observation period. (Example: Judah spent 1.5 minutes commenting in class today.) Duration per occurrence: record the duration of each instance of the behavior; often reported by mean or median and range of durations per session. (Example: Judah's 6 comments today had a mean duration of 11 seconds, with a range of 3 to 24 seconds.)
Considerations:
• Important measure when target behavior is problematic because it occurs for durations that are too long or too short.
• Useful measure for behaviors that occur at very high rates and for which accurate event recording is difficult (e.g., finger flicking).
• Useful measure for behaviors that do not have discrete beginnings and for which event recording is difficult (e.g., humming).
• Useful measure for task-oriented or continuous behaviors (e.g., cooperative play).
• Duration per occurrence often preferred over total duration because it includes data on count and total duration.
• Use total duration when increasing the endurance of behavior is the goal.
• Measuring duration per occurrence entails counting responses, which can be used to calculate rate of responding.

Response latency: The point in time when a response occurs with respect to the occurrence of an antecedent stimulus.
How calculated: Record time elapsed from the onset of the antecedent stimulus event to the beginning of the response; often reported by mean or median and range of latencies per session. (Example: Judah's comments today had a mean latency of 30 seconds following a peer's comment [range, 5 to 90 seconds].)
Considerations:
• Important measure when target behavior is a problem because it is emitted with latencies that are too long or too short.
• Decreasing latencies can reveal a person's increasing mastery of some skills.

Interresponse time (IRT): The point in time when a response occurs with respect to the occurrence of the previous response.
How calculated: Record time elapsed from the end of the previous response to the beginning of the next response; often reported by mean or median and range of IRTs per session. (Example: Judah's comments today had a median IRT of 2 minutes and a range of 10 seconds to 5 minutes.)
Considerations:
• Important measure when the time between responses, or pacing of behavior, is the focus.
• Although a measure of temporal locus, IRT is correlated with rate of responding.
• An important measure when implementing and evaluating DRL.

Derived measures

Celeration: The change (acceleration or deceleration) in rate of responding over time.
How calculated: Based on count per unit of time (rate); expressed as the factor by which responding is accelerating or decelerating (multiplying or dividing). (Example: A trend line connecting Judah's mean rates of commenting over 4 weeks of 0.1, 0.2, 0.4, and 0.8 comments per minute, respectively, would show an acceleration by a factor of ×2 per week.)
Considerations:
• Reveals dynamic patterns of behavior change such as transitions from one steady state to another and acquisition of fluency.
• Displayed with a trend line on a Standard Celeration Chart.
• Minimum of seven measures of rate recommended for calculating.

Percentage: A proportion, expressed as a number of parts per 100; typically expressed as a ratio of the number of responses of a certain type per total number of responses (or opportunities or intervals in which such a response could have occurred).
How calculated: Divide the number of responses meeting specified criteria (e.g., correct responses, responses with minimum IRT, responses of particular topography) by the total number of responses emitted (or response opportunities) and multiply by 100. (Example: Seventy percent of Judah's comments today were relevant to the discussion topic.)
Considerations:
• Percentages based on divisors smaller than 20 are unduly influenced by small changes in behavior; a minimum of 30 observation intervals or response opportunities is recommended for research.
• Change in percentage may erroneously suggest improved performance.
• Always report the divisor on which percentage measures are based.
• Cannot be used to assess proficiency or fluency.
• Imposes upper and lower limits on performance (i.e., cannot exceed 100%).
• Widely different percentages can be reported from the same data set.
• To calculate an overall percentage from percentages based on different denominators (e.g., 90% [9/10], 87.5% [7/8], 33% [1/3], 100% [1/1]), divide the total of the component numerators (e.g., 18) by the total of the denominators (e.g., 18/22 = 81.8%). A mean of the percentages themselves yields a different outcome (e.g., [90% + 87.5% + 33% + 100%] / 4 = 77.6%).

Trials-to-criterion: Number of responses, instructional trials, or practice opportunities needed to reach a predetermined performance criterion.
How calculated: Add the number of responses or practice trials necessary for the learner to achieve the specified criterion. (Example: During class discussion training sessions conducted in the resource room, 14 blocks of 10 opportunities to comment were needed for Judah to achieve the criterion of 8 on-topic comments per 10 opportunities.)
Considerations:
• Provides an ex post facto description of the "cost" of a treatment or instructional method.
• Useful for comparing the relative efficiency of different methods of instruction or training.
• Useful in assessing changes in the rate at which a learner masters new skills (agility).

Definitional measures

Topography: The form or shape of behavior.
How calculated: Used to determine whether responses meet topographical criteria; responses meeting those criteria are measured and reported by one or more fundamental or derivative measures (e.g., percentage of responses meeting topographical criteria). (Example: The plane of the golf club remained within plus or minus 2 degrees from backswing to follow-through on 85% of Amanda's swings.)
Considerations:
• Important measure when desired outcomes of behavior are contingent on responses meeting certain topographies.
• Important measure for performance areas in which form, style, or artfulness is valued.

Magnitude: The strength, intensity, or force of behavior.
How calculated: Used to determine whether responses meet magnitude criteria; responses meeting those criteria are measured and reported by one or more fundamental or derivative measures (e.g., count of responses meeting magnitude criteria). (Example: Jill bench pressed a 60-pound bar 20 times.)
Considerations:
• Important measure when desired outcomes of behavior are contingent on responses within a certain range of magnitudes.
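Table 1's note on combining percentages with different denominators can be checked with a few lines of Python: summing numerators and denominators gives a different answer than averaging the percentages, which weights a 1-of-3 score the same as a 9-of-10 score. (The table's 77.6% figure uses the rounded 33% component; with unrounded components the naive mean is about 77.7%.)

```python
# (correct, opportunities) pairs with different denominators.
scores = [(9, 10), (7, 8), (1, 3), (1, 1)]

# Pooled percentage: total numerators over total denominators.
pooled = 100 * sum(n for n, d in scores) / sum(d for n, d in scores)

# Naive mean of the component percentages (a different, and usually
# misleading, figure).
naive_mean = sum(100 * n / d for n, d in scores) / len(scores)

print(round(pooled, 1))      # 81.8  (18 / 22)
print(round(naive_mean, 1))  # 77.7 with unrounded components
```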


Procedures for Measuring Behavior

Procedures for measuring behavior used most often by applied behavior analysts involve one or a combination of the following: event recording, timing, and various time sampling methods.

Event Recording

Event recording encompasses a wide variety of procedures for detecting and recording the number of times a behavior of interest occurs. For example, Cuvo, Lerch, Leurquin, Gaffaney, and Poppen (1998) used event recording while analyzing the effects of work requirements and reinforcement schedules on the choice behavior of adults with mental retardation and preschool children while they were engaged in age-appropriate tasks (e.g., adults sorting silverware, children tossing beanbags or jumping hurdles). The researchers recorded each piece of silverware sorted, each beanbag tossed, and each hurdle jumped.

Devices for Event Recording

Although pencil and paper are sufficient for making event recordings, the following devices and procedures may facilitate the counting process.

• Wrist counters. Wrist counters are useful for tallying student behaviors. Golfers use these counters to tally strokes. Most wrist counters record from 0 to 99 responses. These counters can be purchased from sporting goods stores or large department stores.

• Hand-tally digital counters. These digital counters are similar to wrist counters. Hand-tally counters are frequently used in grocery chain stores, cafeterias, military mess halls, and tollgates to tally the number of people served. These mechanical counters are available in single or multiple channels and fit comfortably in the palm of the hand. With practice, practitioners can operate the multiple-channel counters rapidly and reliably with just one hand. These digital counters can be obtained from office supply stores.

• Abacus wrist and shoestring counters. Landry and McGreevy (1984) described two types of abacus counters for measuring behavior. The abacus wrist counter is made from pipe cleaners and beads attached to a leather wristband to form an abacus with rows designated as ones and tens. An observer can tally from 1 to 99 occurrences of behavior by sliding the beads in abacus fashion. Responses are tallied in the same way on an abacus shoestring counter except the beads slide on shoestrings attached to a key ring, which is attached to the observer's belt, belt loop, or some other piece of clothing such as a buttonhole.

• Masking tape. Teachers can mark tallies on masking tape attached to their wrists or desk.

• Pennies, buttons, paper clips. One item can be moved from one pocket to another each time the target behavior occurs.

• Pocket calculators. Pocket calculators can be used to tally events.

Event recording also applies to the measurement of discrete trial behaviors, in which the count for each trial or opportunity to respond is either 1 or 0, representing the occurrence or nonoccurrence of the behavior. Figure 5 shows a form used to record the occurrence of imitation responses by a preschooler with disabilities and his typically developing peer partner within a series of instructional trials embedded into ongoing classroom activities (Valk, 2003). For each trial, the observer recorded the occurrence of a correct response, no response, an approximation, or an inappropriate response by the target child and the peer by circling or marking a slash through letters representing each behavior. The form also allowed the observer to record whether the teacher prompted or praised the target child's imitative behavior.
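A session recorded on a form like Figure 5 can be summarized directly from the per-trial codes. A minimal sketch, using a hypothetical trial sequence:

```python
# One code per trial, as on the Figure 5 form:
# C = correct, N = no response, A = approximation, I = inappropriate.
target_child_trials = ["C", "C", "N", "C", "C", "A", "C", "C", "C", "C"]

# Count of correct trials and percentage correct for the session.
correct_count = target_child_trials.count("C")                   # 8
percent_correct = 100 * correct_count / len(target_child_trials)  # 80.0

print(correct_count, percent_correct)
```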

Considerations with Event Recording

Event recording is easy to do. Most people can tally discrete behaviors accurately, often on the first attempt. If the response rate is not too high, event recording does not interfere with other activities. A teacher can continue with instruction while tallying occurrences of the target behavior.

Event recording provides a useful measurement for most behaviors. However, each instance of the target behavior must have discrete beginning and ending points. Event recording is applicable for target behaviors such as students' oral responses to questions, students' written answers to math problems, and a parent praising his son or daughter. Behaviors such as humming are hard to measure with event recording because an observer would have difficulty determining when one hum ends and another begins. Event recording is also difficult for behaviors defined without specific discrete action or object relations, such as engagement with materials during free-play activity. Because engagement with materials does not present a specific discrete action or object relation, an observer may have difficulty judging when one engagement starts and ends, and when another engagement begins.


Figure 5 Data collection form for recording the behavior of two children and a teacher during a series of discrete trials. For each of 10 trials, the observer scores the target child's behavior (C = Correct, N = No response, A = Approximation, I = Inappropriate), the teacher's behavior toward the target child (Prompt, Praise), and the peer's behavior, then totals the number of correct responses for each child.

Adapted from The Effects of Embedded Instruction within the Context of a Small Group on the Acquisition of Imitation Skills of Young Children with Disabilities by J. E. Valk (2003), p. 167. Unpublished doctoral dissertation, The Ohio State University. Used by permission.

Another consideration with event recording is that the target behaviors should not occur at such high rates that an observer would have difficulty counting each discrete occurrence accurately. High-frequency behaviors that may be difficult to measure with event recording include rapid talking, body rocks, and tapping objects.

Also, event recording does not produce accurate measures for target behaviors that occur for extended time periods, such as staying on task, listening, playing quietly alone, being out of one's seat, or thumb sucking. Task-oriented or continuous behaviors (e.g., being "on task") are examples of target behaviors for which event recording would not be indicated. Classes of continuous behaviors occurring across time are usually not a prime concern of applied behavior analysts. For example, reading per se is of less concern than the number of words read correctly and incorrectly, or the number of reading comprehension questions answered correctly and incorrectly. Similarly, behaviors that demonstrate understanding are more important to measure than "listening behavior," and the number of academic responses a student emits during an independent seatwork period is more important than being on task.

Timing

Researchers and practitioners use a variety of timing de-vices and procedures to measure duration, response la-tency, and interresponse time.

Timing the Duration of Behavior

Applied researchers often use semi-automated computer-driven systems for recording durations. Practitioners, however, will most likely use nonautomated instruments for recording duration. The most precise nonautomated instrument is a digital stopwatch. Practitioners can use wall clocks or wristwatches to measure duration, but the measures obtained will be less precise than those obtained with a stopwatch.

The procedure for recording the total duration of a target behavior per session with a stopwatch is to (a) activate the stopwatch as the behavior starts and (b) stop the watch at the end of the episode. Then, without resetting the stopwatch, the observer starts the stopwatch again at the beginning of the second occurrence of the behavior and stops the watch at the end of the second episode. The observer continues to accumulate the durations of time in this fashion until the end of the observation period, and then transfers the total duration of time showing on the stopwatch to a data sheet.
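The accumulation procedure just described can be sketched as a simple timer that is started and stopped repeatedly without being reset. The following minimal Python sketch is illustrative only; the class and its episode times are hypothetical, not part of the original text:

```python
class DurationTimer:
    """Accumulates total duration across episodes, like a stopwatch that is
    started and stopped repeatedly without being reset (hypothetical helper)."""

    def __init__(self):
        self.total = 0.0   # accumulated duration, in seconds
        self._start = None

    def start(self, t):
        """Behavior begins at time t (seconds into the session)."""
        self._start = t

    def stop(self, t):
        """Behavior ends at time t; add this episode's duration to the total."""
        self.total += t - self._start
        self._start = None

timer = DurationTimer()
timer.start(0); timer.stop(12)    # first episode: 12 s
timer.start(30); timer.stop(35)   # second episode: 5 s
print(timer.total)                # 17.0 s total duration for the session
```

At the end of the observation period, `timer.total` plays the role of the final stopwatch reading that the observer transfers to the data sheet.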

Gast and colleagues (2000) used the following procedure for measuring duration with a foot switch and tape recorder that enables a practitioner to use both hands to present stimulus materials or interact with the participant in other ways throughout the session.

The duration of a student's target behavior was recorded by using a cassette audio tape recorder with a blank tape in it that was activated by the instructor using her foot to activate a jellybean switch connected to the tape recorder. When the student engaged in the target behavior, the teacher activated the switch that started the tape running. When the student stopped engaging in the behavior, the teacher stopped the tape by activating the switch again. At the end of the session the teacher said, "end" and stopped the tape recorder and rewound the tape. [After the session the teacher] then played the tape from the beginning until the "end" and timed the duration with a stopwatch. (p. 398)

McEntee and Saunders (1997) recorded duration per occurrence using a bar code data collection system to measure (a) functional and stereotypic engagements with materials and (b) stereotypy without interaction with materials and other aberrant behaviors. They created and arranged bar codes on a data sheet to record the behaviors of four adolescents with severe to profound mental retardation. A computer with a bar code font and software read the bar code, recording the year, month, day, and time of particular events. The bar code data collection system provided real-time duration measurements of engagements with materials and aberrant behaviors.

Timing Response Latency and Interresponse Time

Procedures for measuring latency and interresponse time (IRT) are similar to procedures used to measure duration. Measuring latency requires the precise detection and recording of the time that elapses from the onset of each occurrence of the antecedent stimulus event of interest to the onset of the target behavior. Measuring IRTs requires recording the precise time that elapses from the termination of each occurrence of the target behavior to the onset of the next. Wehby and Hollahan (2000) measured the latency of compliance by an elementary school student with learning disabilities to instructions to begin math assignments. Using a laptop computer with software designed to detect latency (see Tapp, Wehby, & Ellis, 1995, for the MOOSES observation system), they measured latency from the request to the onset of compliance.
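Given recorded onset and offset times, latency and IRT reduce to simple subtractions. The short sketch below is an illustration only; the function names and sample times are hypothetical, not drawn from the studies cited:

```python
def latencies(stimulus_onsets, response_onsets):
    """Latency: time from each antecedent stimulus onset to the onset of the
    response it occasioned (assumes one response per stimulus)."""
    return [r - s for s, r in zip(stimulus_onsets, response_onsets)]

def irts(response_offsets, response_onsets):
    """IRT: time from the termination of each response to the onset of the next."""
    return [on - off for off, on in zip(response_offsets[:-1], response_onsets[1:])]

# Requests delivered at 0 s and 60 s; compliance began at 8 s and 75 s.
print(latencies([0, 60], [8, 75]))   # [8, 15]
# Responses spanning 5-20 s and 80-90 s yield one IRT of 60 s.
print(irts([20, 90], [5, 80]))       # [60]
```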

9 A variety of terms are used in the applied behavior analysis literature to describe measurement procedures that involve observing and recording behavior within or at the end of scheduled intervals. Some authors use the term time sampling to refer only to momentary time sampling. We include whole-interval and partial-interval recording under the rubric of time sampling because each is often conducted as a discontinuous measurement method to provide a representative "sample" of a person's behavior during the observation period.

Time Sampling

Time sampling refers to a variety of methods for observing and recording behavior during intervals or at specific moments in time. The basic procedure involves dividing the observation period into time intervals and then recording the presence or absence of behavior within or at the end of each interval.

Time sampling was developed originally by ethologists studying the behavior of animals in the field (Charlesworth & Spiker, 1975). Because it was not possible or feasible to observe the animals continuously, these scientists arranged systematic schedules of relatively brief but frequent observation intervals. The measures obtained from these "samples" are considered to be representative of the behavior during the entire time period from which they were collected. For example, much of our knowledge about the behavior of chimpanzees is based on data collected by researchers such as Jane Goodall using time sampling observation methods.

Three forms of time sampling used often by applied behavior analysts are whole-interval recording, partial-interval recording, and momentary time sampling.9

Whole-Interval Recording

Whole-interval recording is often used to measure continuous behaviors (e.g., cooperative play) or behaviors that occur at such high rates that observers have difficulty distinguishing one response from another (e.g., rocking, humming) but can detect whether the behavior is occurring at any given time. With whole-interval recording, the observation period is divided into a series of brief time intervals (typically from 5 to 10 seconds). At the end of each interval, the observer records whether the target behavior occurred throughout the interval. If a student's on-task behavior is being measured via 10-second whole-interval recording, the student would need to meet the definition of being on task during an entire interval for the behavior to be recorded as occurring in that interval. The student who was on task for 9 of an interval's 10 seconds would be scored as not being on task for that interval. Hence, data obtained with whole-interval recording usually underestimate the overall percentage of the observation period in which the behavior actually occurred. The longer the observation intervals, the greater the degree to which whole-interval recording will underestimate the actual occurrence of the behavior.


[Figure 6: On-Task Recording Form used for whole-interval recording. Header fields record date (May 7), group no. (1), session no. (17), observer (Robin), experimental condition (baseline), observation start time (9:42), stop time (10:12), and IOA session (marked yes), with sections for on-task and productivity data. The form lists thirty 10-second intervals; for each interval and each of four students the observer circles YES (on task) or NO (off task). In this example the four students were scored as on task in 26, 28, 18, and 22 of the 30 intervals (86.6%, 93.3%, 60.0%, and 73.3%, respectively).]

Figure 6 Observation form used for whole-interval recording of four students being on task during independent seatwork time. Adapted from Smiley Faces and Spinners: Effects of Self-Monitoring of Productivity with an Indiscriminable Contingency of Reinforcement on the On-Task Behavior and Academic Productivity by Kindergarteners during Independent Seatwork by R. L. Ludwig, 2004, p. 101. Unpublished master's thesis, The Ohio State University. Used by permission.

Data collected with whole-interval recording are reported as the percentage of total intervals in which the target behavior was recorded as occurring. Because they represent the proportion of the entire observation period that the person was engaged in the target behavior, whole-interval recording data yield an estimate of total duration. For example, assume a whole-interval observation period consisting of six 10-second intervals (a 1-minute time frame). If the target behavior was recorded as occurring for four of these whole intervals and not occurring for the remaining two intervals, it would yield a total duration estimate of 40 seconds.
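The percentage-of-intervals and duration-estimate arithmetic above can be expressed in a few lines. This sketch is illustrative only; the helper name is hypothetical:

```python
def whole_interval_summary(scored, interval_s):
    """Given one 0/1 score per whole interval and the interval length in
    seconds, return the percent of intervals scored and the implied
    total-duration estimate (per the arithmetic in the text)."""
    occurred = sum(scored)
    pct = 100 * occurred / len(scored)
    return pct, occurred * interval_s   # (% of intervals, estimated seconds)

# Six 10-second intervals; behavior recorded as occurring in four of them.
pct, est_duration = whole_interval_summary([1, 1, 0, 1, 1, 0], 10)
print(round(pct, 1), est_duration)   # 66.7 40  -> a 40-second duration estimate
```

Because whole-interval scoring requires the behavior to occur throughout an interval, this estimate is a lower bound on the true total duration.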

Figure 6 shows an example of a whole-interval recording form used to measure the on-task behavior of four students during academic seatwork time (Ludwig, 2004). Each minute was divided into four 10-second observation intervals; each observation interval was followed by 5 seconds in which the observer recorded the occurrence or nonoccurrence of target behavior during the preceding 10 seconds. The observer first watched Student 1 continuously for 10 seconds, and then she looked away during the next 5 seconds and recorded whether Student 1 had been on task throughout the previous 10 seconds by circling YES or NO on the recording form. After the 5-second interval for recording Student 1's behavior, the observer looked up and watched Student 2 continuously for 10 seconds, after which she recorded Student 2's behavior on the form. The same procedure for observing and recording was used for Students 3 and 4. In this way, the on-task behavior of each student was observed and recorded for one 10-second interval per minute.

Continuing the sequence of observation and recording intervals over a 30-minute observation period provided thirty 10-second measures (i.e., samples) of each student's on-task behavior. The data in Figure 6 show that the four students were judged by the observer to have been on task during Session 17 for 87%, 93%, 60%, and 73% of the intervals, respectively. Although the data are intended to represent the level of each student's behavior throughout the observation period, it is important to remember that each student was observed for a total of only 5 of the observation period's 30 minutes.


[Figure 7: partial-interval recording form. For each numbered interval (intervals 1 through 4 shown) and each of three students, the observer marks one or more of the codes A, T, S, D, or N. Key: A = academic response; T = talk-out; S = out-of-seat; D = other disruptive behavior; N = no occurrences of target behaviors.]

Figure 7 Portion of a form used for partial-interval recording of four response classes by three students.

Observers using any form of time sampling should always make a recording response of some sort in every interval. For example, an observer using a form such as the one in Figure 6 would record the occurrence or nonoccurrence of the target behavior in each interval by circling YES or NO. Leaving unmarked intervals increases the likelihood of losing one's place on the recording form and marking the result of an observation in the wrong interval space.

All time sampling methods require a timing device to signal the beginning and end of each observation and recording interval. Observers using pencil, paper, clipboard, and timers for interval measurement will often attach a stopwatch to a clipboard. However, observing and recording behavior while having to look simultaneously at a stopwatch is likely to have a negative impact on the accuracy of measurement. An effective solution to this problem is for the observer to listen by earphone to prerecorded audio cues signaling the observation and recording intervals. For example, observers using a whole-interval recording procedure such as the one just described could listen to an audio recording with a sequence of prerecorded statements such as the following: "Observe Student 1," 10 seconds later, "Record Student 1," 5 seconds later, "Observe Student 2," 10 seconds later, "Record Student 2," and so on.

Tactile prompting devices can also be used to signal observation intervals. For example, the Gentle Reminder ([email protected]) and MotivAider (www.habitchange.com) are small timing instruments that vibrate at the time intervals programmed by the user.

Partial-Interval Recording

When using partial-interval recording, the observer records whether the behavior occurred at any time during the interval. Partial-interval time sampling is not concerned with how many times the behavior occurred during the interval or how long the behavior was present, just that it occurred at some point during the interval. If the target behavior occurs multiple times during the interval, it is still scored as occurring only once. An observer using partial-interval recording to measure a student's disruptive behavior would mark an interval if disruptive behavior of any form meeting the target behavior definition occurred for any amount of time during the interval. That is, an interval would be scored as disruptive behavior even if the student was disruptive for only 1 second of a 6-second interval. Because of this, data obtained via partial-interval recording often overestimate the overall percentage of the observation period (i.e., total duration) that the behavior actually occurred.

Partial-interval data, like whole-interval data, are most often reported as a percentage of total intervals in which the target behavior was scored. Partial-interval data are used to represent the proportion of the entire observation period in which the target behavior occurred, but unlike whole-interval recording, the results of partial-interval recording do not provide any information on duration per occurrence. That is because any instance of the target behavior, regardless of how brief its duration, will cause an interval to be scored.

If partial-interval recording with brief observation intervals is used to measure discrete responses of short duration per occurrence, the data obtained provide a crude estimate of the minimum rate of responding. For example, data showing that a behavior measured by partial-interval recording consisting of 6-second contiguous intervals (i.e., successive intervals are not separated by time in which the behavior is not observed) occurred in 50% of the total intervals indicates a minimum response rate of five responses per minute (on the average, at least one response occurred in 5 of the 10 intervals per minute). Although partial-interval recording often overestimates the total duration, it is likely to underestimate the rate of a high-frequency behavior. This is because an interval in which a person made eight nonverbal sounds would be scored the same as an interval in which the person made only one sound. When the evaluation and understanding of a target behavior requires an accurate and sensitive measure of response rate, event recording should be used.
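The minimum-rate arithmetic in the worked example above (6-second contiguous intervals, 50% scored, at least five responses per minute) can be sketched as follows. The helper name is hypothetical, added here only to illustrate the calculation:

```python
def min_rate_per_minute(pct_intervals_scored, interval_s):
    """Minimum response rate implied by partial-interval data collected in
    contiguous intervals: at least one response occurred in each scored
    interval, so the rate is at least (intervals per minute) x (fraction
    of intervals scored)."""
    intervals_per_minute = 60 / interval_s
    return intervals_per_minute * pct_intervals_scored / 100

# 6-s contiguous intervals (10 per minute); behavior scored in 50% of them.
print(min_rate_per_minute(50, 6))   # 5.0 responses per minute, at minimum
```

The result is only a floor: an interval scored once may have contained many responses, which is why partial-interval data underestimate the rate of high-frequency behavior.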

Because an observer using partial-interval recording needs to record only that a behavior has occurred at any point during each interval (compared to having to watch the behavior throughout the entire interval with whole-interval recording), it is possible to measure multiple behaviors concurrently. Figure 7 shows a portion of a form for measuring four response classes by three students using partial-interval recording with 20-second intervals. The observer watches Student 1 throughout the first 20-second interval, Student 2 for the next 20 seconds, and Student 3 for the next 20 seconds. Each student is observed for 20 seconds out of each minute of the observation period. If a student engages in any of the behaviors being measured at any time during an observation interval, the observer marks the letter(s) corresponding to those behaviors. If a student engages in none of the behaviors being measured during an interval, the observer marks N to indicate no occurrences of the target behaviors. For example, during the first interval in which Student 1 was observed, he said, "Pacific Ocean" (an academic response). During the first interval in which Student 2 was observed, she left her seat and threw a pencil (a behavior within the response class "other disruptive behavior"). Student 3 emitted none of the four target behaviors during the first interval that he was observed.

Momentary Time Sampling

An observer using momentary time sampling records whether the target behavior is occurring at the moment that each time interval ends. If conducting momentary time sampling with 1-minute intervals, an observer would look at the person at the 1-minute mark of the observation period, determine immediately whether the target behavior was occurring, and indicate that decision on the recording form. One minute later (i.e., 2 minutes into the observation period), the observer would look again at the person and then score the presence or absence of the target behavior. This procedure would continue until the end of the observation period.

As with interval recording methods, data from momentary time sampling are typically reported as percentages of the total intervals in which the behavior occurred and are used to estimate the proportion of the total observation period that the behavior occurred.

A major advantage of momentary time sampling is that the observer does not have to attend continuously to measurement, whereas interval recording methods demand the undivided attention of the observer.

Because the person is observed for only a brief moment, much behavior will be missed with momentary time sampling. Momentary time sampling is used primarily to measure continuous activity behaviors such as engagement with a task or activity, because such behaviors are easily identified. Momentary time sampling is not recommended for measuring low-frequency, short-duration behaviors (Saudargas & Zanolli, 1990).

A number of studies have compared measures obtained by momentary time sampling using intervals of varying duration with measures of the same behavior obtained by continuous duration recording (e.g., Gunter, Venn, Patrick, Miller, & Kelly, 2003; Powell, Martindale, Kulp, Martindale, & Bauman, 1977; Powell, Martindale, & Kulp, 1975; Saudargas & Zanolli, 1990; Simpson & Simpson, 1977; Test & Heward, 1984). In general, this research has found that momentary time sampling both overestimates and underestimates the continuous duration measure when time intervals are greater than 2 minutes. With intervals of less than 2 minutes, the data obtained using momentary time sampling more closely matched those obtained using the continuous duration measure.

Results of a study by Gunter and colleagues (2003) are representative of this research. These researchers compared measures of on-task behavior of three elementary students with emotional/behavioral disorders over seven sessions obtained by momentary time sampling conducted at 2-, 4-, and 6-minute intervals with measures of the same behavior obtained by continuous duration recording. Measures obtained using the 4-minute and 6-minute time sampling methods produced data paths that were highly discrepant from the data obtained by continuous measurement, but data obtained using the 2-minute interval had a high degree of correspondence with the measures obtained using the continuous duration method. Figure 8 shows the results for one of the students.

Planned Activity Check

A variation of momentary time sampling, planned activity check (PLACHECK) uses head counts to measure "group behavior." A teacher using PLACHECK observes a group of students at the end of each time interval, counts the number of students engaged in the targeted activity, and records the tally with the total number of students in the group. For example, Doke and Risley (1972) used data obtained by PLACHECK measurement to compare group participation in required and optional before-school activities. Observers tallied the number of students in either the required or the optional activity area at the end of 3-minute intervals, and then the number of children actually participating in an activity in either area.


[Figure 8: line graph for one student (Harry) plotting percent of time/intervals (0 to 100) across Sessions 1 through 7 for four data paths: MTS 2, MTS 4, MTS 6, and continuous duration recording.]

Figure 8 Comparison of measures of the on-task behavior of an elementary student obtained by 2-minute, 4-minute, and 6-minute momentary time sampling with measures of the same behavior obtained by continuous duration recording. From "Efficacy of using momentary time samples to determine on-task behavior of students with emotional/behavioral disorders" by P. L. Gunter, M. L. Venn, J. Patrick, K. A. Miller, and L. Kelly, 2003, Education and Treatment of Children, 26, p. 406. Used by permission.

[Figure 9: a behavior's actual occurrence (shaded bars measured by continuous duration recording) across 10 consecutive observed intervals, compared with the intervals scored by each time sampling method.]

Consecutive observed intervals:           1  2  3  4  5  6  7  8  9  10   Measure
Whole-interval recording (WI):            -  -  +  +  -  -  +  -  -  -    30%
Partial-interval recording (PI):          -  +  +  +  -  +  +  +  +  -    70%
Momentary time sampling (MTS):            -  +  +  +  -  -  +  -  +  -    50%
Continuous duration recording (actual occurrence of behavior): 55%

+ = behavior recorded as occurring during the interval; - = behavior recorded as not occurring during the interval

Figure 9 Comparing measures of the same behavior obtained by three different time sampling methods with the measure obtained by continuous duration recording.

They reported these data as separate percentages of children participating in required or optional activities.
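PLACHECK scoring reduces to dividing each head count by the group size. The following sketch uses hypothetical numbers and a hypothetical helper name, purely to illustrate the calculation:

```python
def placheck(head_counts, group_size):
    """Percent of the group engaged in the targeted activity at each planned
    activity check (head count divided by group size)."""
    return [100 * count / group_size for count in head_counts]

# Head counts taken at the end of successive 3-minute intervals, group of 20.
print(placheck([12, 15, 18, 10], 20))   # [60.0, 75.0, 90.0, 50.0]
```

Each value estimates "group behavior" at one moment; plotting the series across checks shows how participation changes over the observation period.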

Dyer, Schwartz, and Luce (1984) used a variation of PLACHECK to measure the percentage of students with disabilities living in a residential facility engaged in age-appropriate and functional activities. As students entered the observation area, they were observed individually for as long as it took to determine the activity in which they were engaged. The students were observed in a predetermined order, not exceeding 10 seconds per student.

Other variations of PLACHECK measurement can be found in the literature, though they usually are called time sampling or momentary time sampling. For example, in a study examining the effects of response cards on the disruptive behavior of third-graders during daily math lessons, Armendariz and Umbreit (1999) recorded at 1-minute intervals whether each student in the class was being disruptive. By combining the PLACHECK data obtained across all of the no-response-cards (baseline) sessions and graphing those results as the percentage of students who were disruptive at each 1-minute mark, and doing the same with the PLACHECK data from all response-cards sessions, Armendariz and Umbreit created a clear and powerful picture of the differences in "group behavior" from the beginning to the end of a typical lesson in which response cards were used or not used.

Recognizing Artifactual Variability in Time Sampling Measures

As stated previously, all time sampling methods provide only an estimate of the actual occurrence of the behavior. Different measurement sampling procedures produce different results, which can influence decisions and interpretations. Figure 9 illustrates just how different the results obtained by measuring the same behavior with different time sampling methods can be. The shaded bars indicate when the behavior was occurring within an observation period divided into 10 contiguous intervals. The shaded bars reveal all three dimensional quantities of behavior: repeatability (seven instances of the behavior), temporal extent (the duration of each response), and temporal locus (interresponse time is depicted by the space between the shaded bars).

Because the time sampling methods used in applied behavior analysis are most often viewed and interpreted as measures of the proportion of the total observation period in which the behavior occurred, it is important to compare the results of time sampling methods with those obtained by continuous measurement of duration. Continuous measurement reveals that the behavior depicted in Figure 9 occurred 55% of the time during the observation period. When the same behavior during the same observation period was recorded using whole-interval recording, the measure obtained grossly underestimated the actual occurrence of the behavior (i.e., 30% versus 55%), partial-interval recording grossly overestimated the actual occurrence (i.e., 70% versus 55%), and momentary time sampling yielded a fairly close estimate of the actual occurrence of the behavior (50% versus 55%).
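These systematic discrepancies can be reproduced by scoring one second-by-second record of a behavior with all three methods. The sketch below uses an illustrative timeline (not the Figure 9 data) and hypothetical names:

```python
def time_sample(on, interval_s):
    """Score a per-second record (True = behavior occurring) with whole-interval
    (WI), partial-interval (PI), and momentary time sampling (MTS); return the
    percent of intervals each method scores the behavior as occurring."""
    n = len(on) // interval_s
    wi = pi = mts = 0
    for i in range(n):
        chunk = on[i * interval_s:(i + 1) * interval_s]
        wi += all(chunk)    # WI: behavior occurred throughout the interval
        pi += any(chunk)    # PI: behavior occurred at any time in the interval
        mts += chunk[-1]    # MTS: behavior occurring as the interval ends
    return tuple(100 * x / n for x in (wi, pi, mts))

# 60-s observation; behavior on from second 5 through 34 (true duration 50%).
on = [5 <= t < 35 for t in range(60)]
wi, pi, mts = time_sample(on, 10)
print(wi, pi, mts)   # WI falls below, and PI above, the 50% true duration
```

With this timeline the whole-interval measure underestimates and the partial-interval measure overestimates the 50% true duration, mirroring the pattern in Figure 9.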

The fact that momentary time sampling resulted in a measure that most closely approximated the actual behavior does not mean that it is always the preferred method. Different distributions of the behavior (i.e., temporal locus) during the observation period, even at the same overall frequency and duration as the session shown in Figure 9, would result in widely different outcomes from each of the three time sampling methods.

Discrepancies between measures obtained by different measurement methods are usually described in terms of the relative accuracy or inaccuracy of each method. However, accuracy is not the issue here. If the shaded bars in Figure 9 represent the true value of the behavior, then each of the time sampling methods was conducted with complete accuracy and the resulting data are what should be obtained by applying each method. An example of the inaccurate use of one of the measurement methods would be if the observer using whole-interval recording had marked the behavior as occurring in Interval 2, when the behavior did not occur according to the rules of whole-interval recording.

But if the behavior actually occurred for 55% of the observation period, what should we call the wrong and misleading measures of 30% and 70% if not inaccurate? In this case, the misleading data are artifacts of the measurement procedures used to obtain them. An artifact is something that appears to exist because of the way it is examined or measured. The 30% measure obtained by whole-interval recording and the 70% measure obtained by partial-interval recording are artifacts of the way those measures were conducted. The fact that data obtained from whole-interval and partial-interval recording consistently underestimate and overestimate, respectively, the actual occurrence of behavior as measured by continuous duration recording is an example of well-known artifacts.

It is clear that interval measurement and momentary time sampling result in some artifactual variability in the data, which must be considered carefully when interpreting results obtained with these measurement methods.

Measuring Behavior by Permanent Products

Behavior can be measured in real time by observing a person's actions and recording responses of interest as they occur. For example, a teacher can keep a tally of the number of times a student raises her hand during a class discussion. Some behaviors can be measured in real time by recording their effects on the environment as those effects are produced. For example, a hitting instructor advances a handheld counter each time a batter hits a pitch to the right-field side of second base.

Some behaviors can be measured after they have occurred. A behavior that produces consistent effects on the environment can be measured after it has occurred if the effects, or products left behind by the behavior, remain unaltered until measurement is conducted. For example, if the flight of the baseballs hit by a batter during batting practice were not impeded, and the balls were left lying on the ground, a hitting instructor could collect data on the batter's performance after the batter's turn was completed by counting each ball found lying in fair territory on the right-field side of second base.

Measuring behavior after it has occurred by measuring the effects that the behavior produced on the environment is known as measurement by permanent product. Measuring permanent products is an ex post facto method of data collection because measurement takes place after the behavior has occurred. A permanent product is a change in the environment produced by a behavior that lasts long enough for measurement to take place.

Although often described erroneously as a method for measuring behavior, measurement by permanent product does not refer to any particular measurement procedure or method. Instead, measurement by permanent product refers to the time of measurement (i.e., after the behavior has occurred) and the medium (i.e., the effect of the behavior, not the behavior itself) by which the measurer comes in contact with (i.e., observes) the behavior. All of the methods for measuring behavior described in this chapter—event recording, timing, and time sampling—can be applied to the measurement of permanent products.

Permanent products can be natural or contrived outcomes of the behavior. Permanent products are natural and important outcomes of a wide range of socially significant behaviors in educational, vocational, domestic, and community environments. Examples in education include compositions written (Dorow & Boyle, 1998), computations of math problems written (Skinner, Fletcher, Wildmon, & Belfiore, 1996), spelling words written (McGuffin, Martz, & Heron, 1997), worksheets completed (Alber, Heward, & Hippler, 1999), homework assignments turned in (Alber, Nelson, & Brennan, 2002), and test questions answered (e.g., Gardner, Heward, & Grossi, 1994). Behaviors such as mopping floors and dishwashing (Grossi & Heward, 1998), incontinence (Adkins & Matthews, 1997), drawing bathroom graffiti (Mueller, Moore, Doggett, & Tingstrom, 2000), recycling (Brothers, Krantz, & McClannahan, 1994), and picking up litter (Powers, Osborne, & Anderson, 1973) can also be measured by the natural and important changes they make on the environment.

Many socially significant behaviors have no direct effects on the physical environment. Reading orally, sitting with good posture, and repetitive hand flapping leave no natural products in typical environments. Nevertheless, ex post facto measurement of such behaviors can often be accomplished via contrived permanent products. For example, by audiotaping students as they read out loud (Eckert, Ardoin, Daly, & Martens, 2002), videotaping a girl sitting in class (Schwarz & Hawkins, 1970), and videotaping a boy flapping his hands (Ahearn, Clark, Gardenier, Chung, & Dube, 2003), researchers obtained contrived permanent products for measuring these behaviors.

Contrived permanent products are sometimes useful in measuring behaviors that have natural permanent products that are only temporary. For example, Goetz and Baer (1973) measured variations in the form of children's block building from photographs taken of the constructions made by the children, and Twohig and Woods (2001) measured the length of fingernails from photographs of nail-biters' hands.

Advantages of Measurement by Permanent Product

Measurement by permanent product offers numerous advantages to practitioners and researchers.

Practitioner Is Free to Do Other Tasks

Not having to observe and record behavior as it occurs enables the practitioner to do something else during the observation period. For example, a teacher who uses an audiotape recorder to record students' questions, comments, and talk-outs during a class discussion can concentrate on what her students are saying, provide individual help, and so on.

Makes Possible Measurement of Some Behaviors That Occur at Inconvenient or Inaccessible Times and Places

Many socially significant behaviors occur at times and places that are inconvenient or inaccessible to the researcher or practitioner. Measuring permanent products is indicated when observing the target behavior as it occurs would be difficult because the behavior occurs infrequently, in various environments, or for extended periods of time. For example, a music teacher can have his guitar student make audio recordings of portions of his daily practice sessions at home.

Measurement May Be More Accurate, Complete, and Continuous

Although measuring behavior as it occurs provides the most immediate access to the data, it does not necessarily yield the most accurate, complete, and representative data. An observer measuring behavior from permanent products can take his time, rescore the worksheet, or listen to and watch the videotape again. Videotape enables the observer to slow down, pause, and repeat portions of the session—to literally "hold still" the behavior so it can be examined and measured again and again if necessary. The observer might see or hear additional nuances and aspects of the behavior, or other behaviors that she overlooked or missed altogether during live performances.

Measurement by permanent product enables data collection on more participants. An observer can look at a videotape once and measure one participant's behavior, then replay the tape and measure a second participant's behavior.

Video- or audiotaping behavior provides data on the occurrence of all instances of a target behavior (Miltenberger, Rapp, & Long, 1999; Tapp & Walden, 2000). Having this permanent product of all instances of the behavior lends itself to later scoring by using the built-in calibrated digital timer (e.g., on a VCR), set to zero seconds (or the first frame) at the beginning of the session, to note the exact time of behavior onset and offset. Further, software programs facilitate data collection and analysis based on exact timings. PROCORDER is a software system for facilitating the collection and analysis of videotaped behavior. According to Miltenberger and colleagues, "With a recording of the exact time of the onset and offset of the target behavior in the observation


[Figure 10 form, not reproduced here: three sections (Generic Positive, Specific Positive, and Negative statements), each with boxes to transcribe the first instance of each statement, its time index, and the time indexes of repeated instances (e.g., "Excellent" at 0:17, repeated at 1:37, 4:00, 7:45, and 10:22). Header fields record the participant (T1), observer (Susan), dates of session and observation (4/23), experimental condition (self-scoring generalization), and duration of observation (15:00). Summary fields report the count, number per minute, and percentage of repeats for each class: 28 generic positives (1.9 per minute, 83% repeats), 13 specific positives (0.9 per minute, 46% repeats), 0 negatives, and 41 total positives (2.7 per minute, 71% repeats).]

Figure 10 Data collection form for recording count and temporal locus of three classes of teacher statements from videotaped sessions.

From The effects of self-scoring on teachers' positive statements during classroom instruction by S. M. Silvestri (2004), p. 124. Unpublished doctoral dissertation, The Ohio State University. Used by permission.

session, we are able to report the frequency (or rate) or the duration of the behavior" (p. 119).10

10Edwards and Christophersen (1993) described a time-lapse videotape recorder (TLVCR) that automatically records on one 2-hour tape time samples of behavior over observation periods ranging from 2 to 400 hours. A TLVCR programmed to record over a 12-hour period would record for 0.10 of each second. Such a system can be useful for recording very low frequency behaviors and for behaviors that occur over long periods of time (e.g., a child's sleep behavior).
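The onset/offset timings described above translate directly into count, rate, and duration measures. A minimal sketch in Python (with hypothetical timings; this is not the output or API of PROCORDER or any published system):

```python
# Sketch (hypothetical data): deriving count, rate, and total duration
# from a list of (onset, offset) times, in seconds, scored from videotape.

def summarize_events(events, session_length_s):
    """Return count, responses per minute, and total duration in seconds."""
    count = len(events)
    total_duration = sum(offset - onset for onset, offset in events)
    rate_per_min = count / (session_length_s / 60)
    return count, rate_per_min, total_duration

# Three responses scored from a 10-minute (600 s) session.
events = [(12.0, 15.5), (40.2, 41.0), (88.7, 92.3)]
count, rate, duration = summarize_events(events, session_length_s=600)
print(count, rate, round(duration, 1))  # 3 0.3 7.9
```

Because the tape preserves every instance, the same record can be rescored later for other dimensions (e.g., interresponse times) without re-observing the session.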

Facilitates Data Collection for Interobserver Agreement and Treatment Integrity

Video- or audiotapes assist with data collection tasks such as obtaining interobserver agreement data and assessing treatment integrity. Permanent products of behaviors make possible the repeated measurement of behaviors, eliminating the need to bring multiple observers into the research or treatment setting.

Enables Measurement of Complex Behaviors and Multiple Response Classes

Permanent products, especially videotaped records of behavior, afford the opportunity to measure complex behaviors and multiple response classes in busy social environments. Schwarz and Hawkins (1970) obtained measures of the posture, voice volume, and face touching of an elementary student from videotapes taken during two class periods. The three behaviors were targeted as the result of operationalizing the girl's "poor self-esteem." The researchers were able to view the tapes repeatedly and score them for different behaviors. In this study, the girl also watched and evaluated her behavior on the videotapes as part of the intervention.

Figure 10 shows an example of a recording form used by Silvestri (2004) to measure three types of statements by teachers—generic positive, specific positive, and negative—from audiotapes of classroom lessons. (See Figure 3.7 for definitions of these behaviors.) The movement, multiple voices, and general commotion that characterize many classrooms would combine to make it very difficult, if not impossible, for a live observer to consistently detect and accurately record these behaviors. Each teacher participant in the study wore a small wireless microphone that transmitted a signal to a receiver that was connected to a cassette tape recorder.

Determining Whether Measurement by Permanent Product Is Appropriate

The advantages of measurement by permanent product are considerable, and it may seem as though permanent product measurement is always preferable to real-time measurement. Answering the following four questions will help practitioners and researchers determine whether measurement by permanent product is appropriate: Is real-time measurement needed? Can the behavior be measured by permanent product? Will obtaining a contrived permanent product unduly affect the behavior? And how much will it cost?

Is Real-Time Measurement Needed?

Data-based decision making about treatment procedures and experimental conditions is a defining feature of applied behavior analysis and one of its major strengths. Data-based decision making requires more than direct and frequent measurement of behavior; it also requires ongoing and timely access to the data those measures provide. Measuring behavior as it occurs provides the most immediate access to the data. Although real-time measurement by permanent product can be conducted in some situations (e.g., counting each batted ball that lands to the right of second base as a hitter takes batting practice), measurement by permanent product will most often be conducted after the instructional or experimental session has ended.

Measures taken from video- or audiotapes cannot be obtained until the tapes are viewed after a session has ended. If treatment decisions are being made on a session-by-session basis, this behavior-to-measurement delay poses no problem as long as the data can be obtained from the tapes prior to the next session. However, when moment-to-moment treatment decisions must be made according to the participant's behavior during the session, real-time measurement is necessary. Consider a behavior analyst trying to reduce the rate of a person's self-injurious behavior (SIB) by providing access to a preferred stimulus contingent on increasing durations of time without SIB. Accurate implementation of the treatment protocol would require the real-time measurement of interresponse times (IRTs).
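Such a procedure depends on knowing, at every moment, how long it has been since the last response. A sketch of a real-time IRT tracker (the class name, parameter values, and criterion-raising scheme here are hypothetical, not a protocol from any cited study):

```python
# Sketch (hypothetical): real-time tracking of interresponse times (IRTs)
# for a treatment that delivers a preferred stimulus after progressively
# longer periods without self-injurious behavior (SIB).
import time

class IRTTracker:
    def __init__(self, criterion_s, step_s, clock=time.monotonic):
        self.criterion_s = criterion_s   # current time-without-SIB criterion
        self.step_s = step_s             # how much to raise it after each success
        self.clock = clock               # injectable clock (real time by default)
        self.last_event = self.clock()

    def record_sib(self):
        """Log an SIB occurrence; return the IRT and restart the clock."""
        now = self.clock()
        irt = now - self.last_event
        self.last_event = now
        return irt

    def criterion_met(self):
        """Has the person gone criterion_s seconds without SIB?"""
        return self.clock() - self.last_event >= self.criterion_s

    def deliver_reinforcer(self):
        """Raise the criterion and restart timing after reinforcement."""
        self.criterion_s += self.step_s
        self.last_event = self.clock()
```

In a session loop, each SIB occurrence would trigger `record_sib()`, and the preferred stimulus would be delivered whenever `criterion_met()` returned true; none of this is possible with after-the-fact scoring of a tape.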

Can the Behavior Be Measured by Permanent Product?

Not all behaviors are suitable for measurement by permanent product. Some behaviors effect fairly permanent changes in the environment that are not reliable for the purposes of measurement. For example, self-injurious behavior (SIB) often produces long-lasting effects (bruises, welts, and even torn and bleeding skin) that could be measured after the occurrence of the behavior. But accurate measures of SIB could not be obtained by examining the client's body on a regular basis. The presence of discolored skin, abrasions, and other such marks would indicate that the person had been injured, but many important questions would be left unanswered. How many times did SIB occur? Were there acts of SIB that did not leave observable marks on the skin? Was each instance of tissue damage the result of SIB? These are important questions for evaluating the effects of any treatment. Yet they could not be answered with certainty because these permanent products are not precise enough for the measurement of SIB. Behaviors suitable for measurement via permanent product must meet two rules.

Rule 1: Each occurrence of the target behavior must produce the same permanent product. The permanent product must be a result of every instance of the response class measured. All topographical variants of the target behavior and all responses of varying magnitude that meet the definition of the target behavior must produce the same permanent product. Measuring an employee's work productivity by counting the number of correctly assembled widgets in his "completed work" bin conforms to this rule. An occurrence of the target behavior in this case is defined functionally as a correctly assembled widget. Measuring SIB by marks on the skin does not meet Rule 1 because some SIB responses will not leave discernible marks.

Rule 2: The permanent product can only be produced by the target behavior. This rule requires that the permanent product cannot result from (a) any behaviors by the participant other than the target behavior or (b) the behavior of any person other than the participant. Using the number of correctly assembled widgets in the employee's "completed work" bin to measure his productivity conforms to Rule 2 if the observer can be assured that (a) the employee put no widgets in his bin that he did not assemble and (b) none of the assembled widgets in the employee's bin were put there by anyone other than the employee. Using marks on the skin as a permanent product for measuring self-injurious behavior also fails to meet Rule 2. Marks on the skin could be produced by other behaviors of the person (e.g., running too fast, which caused him to trip and hit his head, or stepping in poison ivy) or by the behavior of other people (e.g., being struck by another person).

Will Obtaining a Contrived Permanent Product Unduly Affect the Behavior?

Practitioners and researchers should always consider reactivity—the effects of the measurement procedure on the behavior being measured. Reactivity is most likely when observation and measurement procedures are obtrusive. Obtrusive measurement alters the environment, which may in turn affect the behavior being measured. Permanent products obtained using recording equipment, the presence of which may cause a person to behave differently, are called contrived permanent products. For example, using an audiotape to record conversations might encourage the participant to talk less or more. But it should be recognized that reactivity to the presence of human observers is a common phenomenon, and reactive effects are usually temporary (e.g., Haynes & Horn, 1982; Kazdin, 1982, 2001). Even so, it is appropriate to anticipate the influence that equipment may have on the target behavior.

How Much Will It Cost to Obtain and Measure the Permanent Product?

The final issues to address in determining the appropriateness of measuring a target behavior by permanent product are availability, cost, and effort. If recording equipment is required to obtain a contrived product of the behavior, is it currently available? If not, how much will it cost to buy or rent the equipment? How much time will it take to learn to use the equipment initially? How difficult and time-consuming will it be to set up, store, and use the equipment for the duration of the study or behavior change program?

11Descriptions of the characteristics and capabilities of a variety of computer-assisted behavioral measurement systems can be found in the following sources: Emerson, Reever, & Felce (2000); Farrell (1991); Kahng & Iwata (1998, 2000); Repp, Harman, Felce, Vanacker, & Karsh (1989); Saunders, Saunders, & Saunders (1994); Tapp & Walden (2000); and Tapp and Wehby (2000).

Computer-Assisted Measurement of Behavior

Computer hardware and software systems for behavioral measurement and data analysis have become increasingly sophisticated and useful, especially for applied researchers. Developers have produced data collection and analysis software for observational measurement using laptops (Kahng & Iwata, 2000; Repp & Karsh, 1994), handheld computers (Saudargas & Bunn, 1989), or personal digital assistants (Emerson, Reever, & Felce, 2000). Some systems use bar-code scanners for data collection (e.g., McEntee & Saunders, 1997; Saunders, Saunders, & Saunders, 1994; Tapp & Wehby, 2000). Most computer software systems require a DOS or Windows operating system. A few software systems have been developed for the Mac OS. Regardless of the computer operating system, Kahng and Iwata (2000) stated:

These systems have the potential to facilitate the task of observation by improving the reliability and accuracy of recording relative to traditional but cumbersome methods based on paper and pencil and to improve the efficiency of data calculation and graphing. (p. 35)

Developments in microchip technology have advanced the measurement and data analysis capabilities of these systems, and have made the software increasingly easier to learn and apply. Many of these semiautomated systems can record a number of events, including discrete trials, the number of responses per unit of time, duration, latency, interresponse time (IRT), and fixed and variable intervals for time-sampling measurement. These systems can present data calculated as rate, duration, latency, IRT, percentage of intervals, percentage of trials, and conditional probabilities.
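The derived measures such systems report can be sketched from a simple timestamped record (hypothetical data and variable names; no particular software package's format is implied):

```python
# Sketch (hypothetical data): basic measures derivable from a timestamped
# event record of (onset, offset) pairs, in seconds from session start.
responses = [(5.0, 7.0), (20.0, 21.5), (50.0, 54.0)]
session_s = 120.0
stimulus_onset_s = 2.0                    # e.g., when the task direction was given

count = len(responses)                    # repeatability
rate_per_min = count / (session_s / 60)   # count per standard unit of time
durations = [off - on for on, off in responses]    # temporal extent
latency_s = responses[0][0] - stimulus_onset_s     # stimulus onset to first response
irts_s = [responses[i + 1][0] - responses[i][1]    # offset of one response to
          for i in range(len(responses) - 1)]      # onset of the next

print(count, rate_per_min, sum(durations), latency_s, irts_s)
# 3 1.5 7.5 3.0 [13.0, 28.5]
```

Because all of these quantities come from one event record, a system that stores onsets and offsets can recompute any of them on demand, which is the practical point of the semiautomated packages described above.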

A distinct advantage of computer-based observation and measurement systems is that the data can be clustered and analyzed in many ways: aggregated rates, time-series analyses, conditional probabilities, sequential dependencies, and interrelationships and combinations of events. Because these systems allow for the simultaneous recording of multiple behaviors across multiple dimensions, outputs can be examined and analyzed from different perspectives that would be difficult and time-consuming to achieve with paper-and-pencil methods.

In addition to recording behavior and calculating data, these systems provide analyses of interobserver agreement (e.g., smaller/larger, overall, occurrence, nonoccurrence) and measurement from audio and video documents. The semiautomated computer-driven systems, as compared to the commonly used paper-and-pencil data recording and analysis methods, have the potential to improve interobserver agreement, the reliability of observational measurements, and the efficiency of data calculation (Kahng & Iwata, 1998).11
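For illustration, two of the simpler agreement indices can be computed as in this sketch (hypothetical observer records; the smaller/larger index corresponds to total count IOA, and the interval comparison to interval-by-interval IOA):

```python
# Sketch: two common interobserver agreement (IOA) indices.

def total_count_ioa(count_1, count_2):
    """Smaller total count divided by the larger, times 100."""
    if count_1 == count_2:
        return 100.0
    return min(count_1, count_2) / max(count_1, count_2) * 100

def interval_by_interval_ioa(record_1, record_2):
    """Percentage of intervals scored the same way by both observers."""
    agreements = sum(a == b for a, b in zip(record_1, record_2))
    return agreements / len(record_1) * 100

print(total_count_ioa(9, 10))                                # 90.0
print(interval_by_interval_ioa([1, 0, 1, 1], [1, 0, 0, 1]))  # 75.0
```

A software system applies exactly these kinds of comparisons automatically across a full session record, rather than requiring the hand calculations that paper-and-pencil methods demand.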


The practical value derived from the greater efficiency and ease of use of computer-assisted measurement systems will likely increase their use by applied researchers and practitioners who currently use mechanical counters, timers, and paper and pencil for observational recording.

Summary

Definition and Functions of Measurement in Applied Behavior Analysis

1. Measurement is the process of applying quantitative labels to observed properties of events using a standard set of rules.

2. Measurement is how scientists operationalize empiricism.

3. Without measurement, all three levels of scientific knowledge—description, prediction, and control—would be relegated to guesswork and subjective opinions.

4. Applied behavior analysts measure behavior to obtain answers to questions about the existence and nature of functional relations between socially significant behavior and environmental variables.

5. Practitioners measure behavior before and after treatment to evaluate the overall effects of interventions (summative evaluation) and take frequent measures of behavior during treatment (formative assessment) to guide decisions concerning the continuation, modification, or termination of treatment.

6. Without frequent measures of the behavior targeted for intervention, practitioners may (a) continue an ineffective treatment when no real behavior change has occurred or (b) discontinue an effective treatment because subjective judgment detects no improvement.

7. Measurement also helps practitioners optimize their effectiveness; verify the legitimacy of practices touted as "evidence based"; identify treatments based on pseudoscience, fad, fashion, or ideology; be accountable to clients, consumers, employers, and society; and achieve ethical standards.

Measurable Dimensions of Behavior

8. Because behavior occurs within and across time, it has three dimensional quantities: repeatability (i.e., count), temporal extent (i.e., duration), and temporal locus (i.e., when behavior occurs). These properties, alone and in combination, provide the basic and derivative measures used by applied behavior analysts. (See Figure 5 for a detailed summary.)

9. Count is the number of responses emitted during an observation period.

10. Rate, or frequency, is a ratio of count per observation period; it is often expressed as count per standard unit of time.

11. Celeration is a measure of the change (acceleration or deceleration) in rate of responding per unit of time.

12. Duration is the amount of time in which behavior occurs.

13. Response latency is a measure of the elapsed time between the onset of a stimulus and the initiation of a subsequent response.

14. Interresponse time (IRT) is the amount of time that elapses between two consecutive instances of a response class.

15. Percentage, a ratio formed by combining the same dimensional quantities, expresses the proportional quantity of an event in terms of the number of times the event occurred per 100 opportunities that the event could have occurred.

16. Trials-to-criterion is a measure of the number of response opportunities needed to achieve a predetermined level of performance.

17. Although form (i.e., topography) and intensity of responding (i.e., magnitude) are not fundamental dimensional quantities of behavior, they are important quantitative parameters for defining and verifying the occurrence of many response classes.

18. Topography refers to the physical form or shape of a behavior.

19. Magnitude refers to the force or intensity with which a response is emitted.

Procedures for Measuring Behavior

20. Event recording encompasses a wide variety of procedures for detecting and recording the number of times a behavior of interest is observed.

21. A variety of timing devices and procedures are used to measure duration, response latency, and interresponse time.

22. Time sampling refers to a variety of methods for observing and recording behavior during intervals or at specific moments in time.

23. Observers using whole-interval recording divide the observation period into a series of equal time intervals. At the end of each interval, they record whether the target behavior occurred throughout the entire interval.

24. Observers using partial-interval recording divide the observation period into a series of equal time intervals. At the end of each interval, they record whether behavior occurred at any point during the interval.

25. Observers using momentary time sampling divide the observation period into a series of time intervals. At the end of each interval, they record whether the target behavior is occurring at that specific moment.


26. Planned activity check (PLACHECK) is a variation of momentary time sampling in which the observer records whether each individual in a group is engaged in the target behavior.

27. Measurement artifacts are common with time sampling.
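The three interval methods summarized in items 23 through 25, and the artifacts noted in item 27, can be illustrated by scoring one hypothetical event stream all three ways (the interval size and event times are invented for this sketch):

```python
# Sketch (hypothetical): scoring one event stream with whole-interval,
# partial-interval, and momentary time sampling. Events are (onset, offset)
# times in seconds; the 30 s period is divided into 10 s intervals.

def score_intervals(events, session_s, interval_s):
    n = int(session_s / interval_s)
    whole, partial, momentary = [], [], []
    for i in range(n):
        start, end = i * interval_s, (i + 1) * interval_s
        touching = [(on, off) for on, off in events if on < end and off > start]
        whole.append(any(on <= start and off >= end for on, off in touching))
        partial.append(bool(touching))
        momentary.append(any(on <= end <= off for on, off in events))
    return whole, partial, momentary

events = [(0.0, 12.0), (14.0, 15.0), (27.0, 28.0)]
whole, partial, momentary = score_intervals(events, session_s=30, interval_s=10)
print(sum(whole), sum(partial), sum(momentary))  # 1 3 1
```

The same behavior yields one scored interval under whole-interval recording but three under partial-interval recording, a simple instance of how interval methods can over- or underestimate the actual occurrence of behavior.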

Measuring Behavior by Permanent Products

28. Measuring behavior after it has occurred by measuring its effects on the environment is known as measurement by permanent product.

29. Measurement of many behaviors can be accomplished via contrived permanent products.

30. Measurement by permanent product offers numerous advantages: The practitioner is free to do other tasks; it enables the measurement of behaviors that occur at inconvenient or inaccessible times and places; measurement may be more accurate, complete, and continuous; it facilitates the collection of interobserver agreement and treatment integrity data; and it enables the measurement of complex behaviors and multiple response classes.

31. If moment-to-moment treatment decisions must be made during the session, measurement by permanent product may not be warranted.

32. Behaviors suitable for measurement via permanent products must meet two rules. Rule 1: Each occurrence of the target behavior must produce the same permanent product. Rule 2: The permanent product can only be produced by the target behavior.

Computer-Assisted Measurement of Behavior

33. Computer hardware and software systems for behavioral measurement and data analysis have become increasingly sophisticated and easier to use.

34. Developers have produced data collection and analysis software for observational measurement using laptops, handheld computers, personal digital assistants (PDAs), and desktop computers.

35. Some systems allow for the simultaneous recording of multiple behaviors across multiple dimensions. Outputs can be examined and analyzed from different perspectives that would be difficult and time-consuming with paper-and-pencil methods.


Improving and Assessing the Quality of Behavioral Measurement

Key Terms

accuracy, believability, calibration, continuous measurement, direct measurement, discontinuous measurement, exact count-per-interval IOA, indirect measurement, interobserver agreement (IOA), interval-by-interval IOA, mean count-per-interval IOA, mean duration-per-occurrence IOA, measurement bias, naive observer, observed value, observer drift, observer reactivity, reliability, scored-interval IOA, total count IOA, total duration IOA, trial-by-trial IOA, true value, unscored-interval IOA, validity

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List, © Third Edition

Content Area 6: Measurement of Behavior

6-4 Select the appropriate measurement procedure given the dimensions of the behavior and the logistics of observing and recording.

6-5 Select a schedule of observation and recording periods.

6-14 Use various methods of evaluating the outcomes of measurement procedures, such as interobserver agreement, accuracy, and reliability.

© 2006 The Behavior Analyst Certification Board, Inc.® (BACB®). All rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

From Chapter 5 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.

The data obtained by measuring behavior are the primary material with which behavioral researchers and practitioners guide and evaluate their work. Applied behavior analysts measure socially significant behaviors to help determine which behaviors need to be changed, to detect and compare the effects of various interventions on behaviors targeted for change, and to evaluate the acquisition, maintenance, and generalization of behavior changes.

Because so much of what the behavior analyst does either as a researcher or practitioner depends on measurement, concerns about the legitimacy of the data it produces must be paramount. Do the data meaningfully reflect the original reason(s) for measuring the behavior? Do the data represent the true extent of the behavior as it actually occurred? Do the data provide a consistent picture of the behavior? In other words, can the data be trusted?

This chapter focuses on improving and assessing the quality of behavioral measurement. We begin by defining the essential indicators of trustworthy measurement: validity, accuracy, and reliability. Next, common threats to measurement are identified and suggestions for combating these threats are presented. The chapter's final sections detail procedures for assessing the accuracy, reliability, and believability of behavioral measurement.

Indicators of Trustworthy Measurement

Three friends—John, Tim, and Bill—took a bicycle ride together. At the end of the ride John looked at his handlebar-mounted bike computer and said, "We rode 68 miles. Excellent!" "My computer shows 67.5 miles. Good ride, fellas!" Tim replied. As he dismounted and rubbed his backside, the third biker, Bill, said, "Gee whiz, I'm sore! We must've ridden 100 miles!" A few days later, the three friends completed the same route. After the second ride, John's computer showed 68 miles, Tim's computer read 70 miles, and Bill, because he wasn't quite as sore as he was after the first ride, said they had ridden 90 miles. Following a third ride on the same country roads, John, Tim, and Bill reported distances of 68, 65, and 80 miles, respectively.

How trustworthy were the measures reported by the three bicyclists? Which of the three friends' data would be most usable for a scientific account of the miles they had ridden? To be most useful for science, measurement must be valid, accurate, and reliable. Were the three friends' measurements characterized by validity, accuracy, and reliability?

Validity

Measurement has validity when it yields data that are directly relevant to the phenomenon measured and to the reason(s) for measuring it. Determining the validity of measurement revolves around this basic question: Was a relevant dimension of the behavior that is the focus of the investigation measured directly and legitimately?

Did the measurements of miles ridden by the three bicyclists have validity? Because the bikers wanted to know how far they had ridden each time, the number of miles ridden was a relevant, or valid, dimension of their riding behavior. Had the bikers' primary interest been how long or how fast they had ridden, the number of miles ridden would not have been a valid measure. John and Tim's use of their bike computers to measure directly the miles they rode was a valid measure. Because Bill used an indirect measure (the relative tenderness of his backside) to determine the number of miles he had ridden, the validity of Bill's mileage data is suspect. A direct measure of the actual behavior of interest will always possess more validity than an indirect measure, because a direct measure does not require an inference about its relation to the behavior of interest, whereas an indirect measure always requires such an inference. Although soreness may be related to the distance ridden, because it is also influenced by such factors as the time on the bike saddle, the roughness of the road, riding speed, and how much (or little) the person has ridden recently, soreness as a measure of mileage has little validity.

Valid measurement in applied behavior analysis requires three equally important elements: (a) measuring directly a socially significant target behavior, (b) measuring a dimension (e.g., rate, duration) of the target behavior relevant to the question or concern about the behavior, and (c) ensuring that the data are representative of the behavior's occurrence under conditions and during times that are most relevant to the question or concern about the behavior. When any of these elements is suspect or lacking—no matter how technically proficient (i.e., accurate and reliable) the measurement that produced the data—the validity of the resultant data is compromised, perhaps to the point of meaninglessness.

Accuracy

When used in the context of measurement, accuracy refers to the extent to which the observed value, the quantitative label produced by measuring an event, matches the true state, or true value, of the event as it exists in nature (Johnston & Pennypacker, 1993a). In other words, measurement is accurate to the degree that it corresponds to the true value of the thing measured. A true value is a measure obtained by procedures that are independent of and different from the procedures that produced the data being evaluated and for which the researcher has taken "special or extraordinary precautions to insure that all possible sources of error have been avoided or removed" (p. 136).

How accurate were the three bikers' measures of miles ridden? Because each biker obtained a different measure of the same event, all of their data could not be accurate. Skeptical of the training miles the three cyclists were claiming, a friend of theirs, Lee, drove the same country roads with a Department of Transportation odometer attached to the back bumper of his car. At the end of the route the odometer read 58 miles. Using the measure obtained by the DOT odometer as the true value of the route's distance, Lee determined that none of the three cyclists' measures were accurate. Each rider had overestimated the true mileage.

By comparing the mileage reported by John, Tim, and Bill with the true value of the route's distance, Lee discovered not only that the riders' data were inaccurate, but also that the data reported by all three riders were contaminated by a particular type of measurement error called measurement bias. Measurement bias refers to nonrandom measurement error; that is, error in measurement that is likely to be in one direction. When measurement error is random, it is just as likely to overestimate the true value of an event as it is to underestimate it. Because John, Tim, and Bill consistently overestimated the actual miles they had ridden, their data contained measurement bias.

Reliability

Reliability describes the extent to which a "measurement procedure yields the same value when brought into repeated contact with the same state of nature" (Johnston & Pennypacker, 1993a, p. 138). In other words, reliable measurement is consistent measurement. Like validity and accuracy, reliability is a relative concept; it is a matter of degree. The closer the values obtained by repeated measurement of the same event are to one another, the greater the reliability. Conversely, the more observed values from repeated measurement of the same event differ from one another, the less the reliability.

How reliable were the bicyclists' measurements? Because John obtained the same value, 68 miles, each time he measured the same route, his measurement had complete reliability. Tim's three measures of the same ride—67.5, 70, and 65 miles—differed from one another by as much as 5 miles. Therefore, Tim's measurement was less reliable than John's. Bill's measurement system was the least reliable of all, yielding values for the same route ranging from 80 to 100 miles.
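The cyclists' example combines the three concepts numerically. The following sketch (not from the text) computes bias and reliability from the mileage values given above; the true value of 58 miles comes from the DOT odometer, and Bill's three individual readings are assumed for illustration, since only his 80–100 mile range is stated.

```python
# Sketch: accuracy error, bias, and reliability for the three cyclists.
# True value per the DOT odometer; Bill's individual values are assumed
# from the 80-100 mile range given in the text.
from statistics import mean

TRUE_VALUE = 58  # miles

riders = {
    "John": [68, 68, 68],    # perfectly consistent (reliable) but inaccurate
    "Tim":  [67.5, 70, 65],  # values differ by as much as 5 miles
    "Bill": [80, 90, 100],   # least reliable of the three (assumed values)
}

for name, values in riders.items():
    errors = [v - TRUE_VALUE for v in values]  # all positive: nonrandom error
    bias = mean(errors)                        # mean error in one direction
    spread = max(values) - min(values)         # 0 = perfectly reliable
    print(f"{name}: bias = +{bias:.1f} mi, range = {spread} mi")
```

Note that John's spread of 0 miles (complete reliability) coexists with a +10-mile bias: consistency says nothing about correctness.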

Relative Importance of Validity, Accuracy, and Reliability

Behavioral measurement should provide legitimate data for evaluating behavior change and guiding research and treatment decisions. Data of the highest quality (i.e., data that are most useful and trustworthy for advancing scientific knowledge or for guiding data-based practice) are produced by measurement that is valid, accurate, and reliable (see Figure 1). Validity, accuracy, and reliability are relative concepts; each can range from high to low.

Measurement must be both valid and accurate for the data to be trustworthy. If measurement is not valid, accuracy is moot. Accurately measuring a behavior that is not the focus of the investigation, accurately measuring an irrelevant dimension of the target behavior, or accurately measuring the behavior under circumstances or at times not representative of the conditions and times relevant to the analysis will yield invalid data. Conversely, the data obtained from measuring a meaningful dimension of the right behavior under the relevant circumstances and times are of little use if the observed values provide an inaccurate picture of the behavior. Inaccurate measurement renders invalid the data obtained by otherwise valid measurement.

Reliability should never be confused with accuracy. Although John's bicycle computer provided totally reliable measures, it was also totally inaccurate.

Concern about the reliability of data in the absence of a prior interest in their accuracy suggests that reliability is being mistaken for accuracy. The question for a researcher or someone who is reading a published study is not, "Are the data reliable?" but "Are the data accurate?" (Johnston & Pennypacker, 1993a, p. 146)

If accuracy trumps reliability—and it does—why should researchers and practitioners be concerned with the reliability of measurement? Although high reliability does not mean high accuracy, poor reliability reveals problems with accuracy. Because Tim and Bill's measurements were not reliable, we know that at least some of the data they reported could not be accurate, knowledge that could and should lead to checking the accuracy of their measurement tools and procedures.

Highly reliable measurement means that whatever degree of accuracy (or inaccuracy) exists in the measurement system will be revealed consistently in the data. If it can be determined that John's computer reliably obtains observed values higher than the true values by a constant amount or proportion, the data could be adjusted to accommodate for that constant degree of inaccuracy.

The next two sections of the chapter describe methods for combating common threats to the validity, accuracy, and reliability of behavioral measurement.


Improving and Assessing the Quality of Behavioral Measurement

Figure 1 Measurement that is valid, accurate, and reliable yields the most trustworthy and useful data for science and science-based practice.

Measurement that is . . .

Valid   Accurate   Reliable    . . . yields data that are . . .
Yes     Yes        Yes         . . . most useful for advancing scientific knowledge and guiding data-based practice.
No      Yes        Yes         . . . meaningless for the purposes for which measurement was conducted.
Yes     No         Yes         . . . always wrong.¹
Yes     Yes        No²         . . . sometimes wrong.³

1. If adjusted for consistent measurement error of standard size and direction, inaccurate data may still be usable.

2. If the accuracy of every datum in a data set can be confirmed, reliability is a moot point. In practice, however, that is seldom possible; therefore, knowing the consistency with which a valid and accurate measurement system has been applied contributes to the level of confidence in the overall trustworthiness of the data set.

3. User is unable to separate the good data from the bad.

Threats to Measurement Validity

The validity of behavioral data is threatened when measurement is indirect, when the wrong dimension of the target behavior is measured, or when measurement is conducted in such a way that the data it produces are an artifact of the actual events.

Indirect Measurement

Direct measurement occurs when "the phenomenon that is the focus of the experiment is exactly the same as the phenomenon being measured" (Johnston & Pennypacker, 1993a, p. 113). Conversely, indirect measurement occurs when "what is actually measured is in some way different from" the target behavior of interest (Johnston & Pennypacker, 1993a, p. 113). Direct measurement of behavior yields more valid data than will indirect measurement. This is because indirect measurement provides secondhand or "filtered" information (Komaki, 1998) that requires the researcher or practitioner to make inferences about the relationship between the event that was measured and the actual behavior of interest.

Indirect measurement occurs when the researcher or practitioner measures a proxy, or stand-in, for the actual behavior of interest. An example of indirect measurement would be using children's responses to a questionnaire as a measure of how often and well they get along with their classmates. It would be better to use a direct measure of the number of positive and negative interactions among the children. Using a student's score on a standardized math achievement test as an indicator of her mastery of the math skills included in the school's curriculum is another example of indirect measurement. Accepting the student's score on the achievement test as a valid reflection of her ability with the school's curriculum would require an inference. By contrast, a student's score on a properly constructed test consisting of math problems from recently covered curriculum content is a direct measure requiring no inferences about what it means with respect to her performance in the curriculum.

¹Strategies for increasing the accuracy of self-reports can be found in Critchfield, Tucker, and Vuchinich (1998) and Finney, Putnam, and Boyd (1998).

Indirect measurement is usually not an issue in applied behavior analysis because meeting the applied dimension of ABA includes the targeting and meaningful (i.e., valid) measurement of socially significant behaviors. Sometimes, however, the researcher or practitioner has no direct and reliable access to the behavior of interest and so must use some form of indirect measurement. For example, because researchers studying adherence to medical regimens cannot directly observe and measure patients' behavior in their homes, they rely on self-reports for their data (e.g., La Greca & Schuman, 1995).¹

Indirect measurement is sometimes used to make inferences about private events or affective states. For example, Green and Reid (1996) used direct measures of smiling to represent "happiness" by persons with profound multiple disabilities. However, research on private events does not necessarily involve indirect measurement. A research participant who has been trained to observe his own private events is measuring the behavior of interest directly (e.g., Kostewicz, Kubina, & Cooper, 2000; Kubina, Haertel, & Cooper, 1994).


Whenever indirect measurement is used, it is the responsibility of the researcher to provide evidence that the event measured directly reflects, in some reliable and meaningful way, something about the behavior for which the researcher wishes to draw conclusions (Johnston & Pennypacker, 1993a). In other words, it is incumbent upon the researcher to provide a convincing case for the validity of her data. Although it is sometimes attempted, the case for validity cannot be achieved by simply attaching the name of the thing one claims to be measuring to the thing actually measured. With respect to that point, Marr (2003) recounted this anecdote about Abraham Lincoln:

"Sir, how many legs does this donkey have?"
"Four, Mr. Lincoln."
"And how many tails does it have?"
"One, Mr. Lincoln."
"Now, sir, what if we were to call a tail a leg; how many legs would the donkey have?"
"Five, Mr. Lincoln."
"No, sir, for you cannot make a tail into a leg by calling it one." (pp. 66–67)

Measuring the Wrong Dimension of the Target Behavior

The validity of behavioral measurement is threatened much more often by measuring the wrong dimension of the behavior of interest than it is by indirect measurement. Valid measurement yields data that are relevant to the questions about the behavior one seeks to answer through measurement. Validity is compromised when measurement produces values for a dimension of the behavior ill suited for, or irrelevant to, the reason for measuring the behavior.

Johnston and Pennypacker (1980) provided an excellent example of the importance of measuring a dimension that fits the reasons for measurement. "Sticking a ruler in a pot of water as the temperature is raised will yield highly reliable measures of the depth of the water but will tell us very little about the changing temperature" (p. 192). While the units of measurement on a ruler are well suited for measuring length, or in this case, depth, they are not at all valid for measuring temperature. If the purpose of measuring the water is to determine whether it has reached the ideal temperature for making a pot of tea, a thermometer is the correct measurement tool.

If you are interested in measuring a student's academic endurance with oral reading, counting the number of correct and incorrect words read per minute without measuring and reporting the total time that the student read will not provide valid data on endurance. Number of words read per minute alone does not fit the reason for measuring reading (i.e., academic endurance). To measure endurance, the practitioner would need to report the duration of the reading period (e.g., 30 minutes). Similarly, measuring the percentage of trials on which a student makes a correct response will not provide valid data for answering questions about the student's developing fluency with a skill, whereas measuring the number of correct responses per minute and the changing rates of responding (celeration) would.

Measurement Artifacts

Directly measuring a relevant dimension of a socially significant target behavior does not guarantee valid measurement. Validity is reduced when the data—no matter how accurate or reliable they are—do not give a meaningful (i.e., valid) representation of the behavior. When data give an unwarranted or misleading picture of the behavior because of the way measurement was conducted, the data are called an artifact. A measurement artifact is something that appears to exist because of the way it is measured. Discontinuous measurement, poorly scheduled measurement periods, and using insensitive or limiting measurement scales are common causes of measurement artifacts.

Discontinuous Measurement

Because behavior is a dynamic and continuous phenomenon that occurs and changes over time, continuous measurement is the gold standard in behavioral research. Continuous measurement is measurement conducted in a manner such that all instances of the response class(es) of interest are detected during the observation period (Johnston & Pennypacker, 1993a). Discontinuous measurement describes any form of measurement in which some instances of the response class(es) of interest may not be detected. Discontinuous measurement—no matter how accurate and reliable—may yield data that are an artifact.

A study by Thomson, Holmberg, and Baer (1974) provides a good demonstration of the extent of artifactual variability in a data set that may be caused by discontinuous measurement. A single, highly experienced observer used three different procedures for scheduling time sampling observations to measure the behavior of four subjects (two teachers and two children) in a preschool setting during 64-minute sessions. Thomson and colleagues called the three time sampling procedures contiguous, alternating, and sequential. With each time sampling procedure, one-fourth of the observer's time (i.e., 16 minutes) was assigned to each of the four subjects.


When the contiguous observation schedule was used, the observer recorded the behavior of Subject 1 throughout the first 16 minutes of the session, recorded the behavior of Subject 2 during the second 16 minutes, and so on until all four subjects had been observed. In the alternating mode, Subjects 1 and 2 were observed in alternating intervals during the first half of the session, and Subjects 3 and 4 were observed in the same fashion during the last half of the session. Specifically, Subject 1 was observed during the first 4 minutes, Subject 2 during the next 4 minutes, Subject 1 during the next 4 minutes, and so on until 32 minutes had expired. The same procedure was then used for Subjects 3 and 4 during the last 32 minutes of the session. The sequential approach systematically rotated the four subjects through 4-minute observations. Subject 1 was observed during the first 4 minutes, Subject 2 during the second 4 minutes, Subject 3 during the third 4 minutes, and Subject 4 during the fourth 4 minutes. This sequence was repeated four times to give the total of 64 minutes of observation.

To arrive at the percentage of artifactual variance in the data associated with each time sampling schedule, Thomson and colleagues (1974) compared the observer's data with "actual rates" for each subject produced by continuous measurement of each subject for the same 64-minute sessions. Results of the study showed clearly that the contiguous and alternating schedules produced the most unrepresentative (and therefore, less valid) measures of the target behaviors (often more than 50% variance from continuous measurement), whereas the sequential sampling procedure produced results that more closely resembled the data obtained through continuous recording (from 4 to 11% variance from continuous measurement).

In spite of its inherent limitations, discontinuous measurement is used in many studies in applied behavior analysis in which individual observers measure the behavior of multiple subjects within the same session. Minimizing the threat to validity posed by discontinuous measurement requires careful consideration of when observation and measurement periods should be scheduled. Infrequent measurement, no matter how accurate and reliable it is, often yields results that are an artifact. Although a single measure reveals the presence or absence of the target behavior at a given point in time, it may not be representative of the typical value for the behavior.² As a general rule, observations should be scheduled on a daily or frequent basis, even if for only brief periods.

Ideally, all occurrences of the behavior of interest should be recorded. However, when available resources preclude continuous measurement throughout an observation period, the use of sampling procedures is necessary. A sampling procedure may be sufficient for decision making and analysis if the samples represent a valid approximation of the true parameters of the behavior of interest. When measurement cannot be continuous throughout an observation period, it is generally preferable to sample the occurrence of the target behavior for numerous brief observation intervals that are evenly distributed throughout the session than it is to use longer, less frequent intervals (Thomson et al., 1974; Thompson, Symons, & Felce, 2000). For example, measuring a subject's behavior in thirty 10-second intervals equally distributed within a 30-minute session will likely yield more representative data than will observing the person for a single 5-minute period during the half hour.

²Single measures, such as pretests and posttests, can provide valuable information on a person's knowledge and skills before and after instruction or treatment. The use of probes, occasional but systematic measures, to assess maintenance and generalization of behavior change is discussed in the chapter entitled "Generalization and Maintenance of Behavior Change."

Measuring behavior with observation intervals that are too short or too long may result in data that grossly over- or underestimate the true occurrence of behavior. For example, measuring off-task behavior by partial-interval recording with 10-minute intervals may produce data that make even the most diligent of students appear to be highly off task.
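The off-task example can be illustrated with a small simulation. This sketch is hypothetical (the session length, lapse pattern, and interval sizes are invented for illustration); it shows how partial-interval recording, which scores an interval if the behavior occurs at any point within it, overestimates occurrence when intervals are long.

```python
# Sketch (hypothetical numbers): partial-interval recording scores an
# interval as an occurrence if the behavior occurs at ANY point in it,
# so overly long intervals grossly inflate the estimate.
def partial_interval_estimate(behavior_seconds, session_seconds, interval_seconds):
    """Percentage of intervals scored, given the set of seconds in which
    the behavior occurred."""
    n_intervals = session_seconds // interval_seconds
    scored = sum(
        any(s in behavior_seconds
            for s in range(i * interval_seconds, (i + 1) * interval_seconds))
        for i in range(n_intervals)
    )
    return 100 * scored / n_intervals

session = 3600  # a 60-minute class
off_task = set()
for start in range(0, session, 600):           # one 30-second lapse per 10 minutes
    off_task.update(range(start, start + 30))  # 3 minutes off task in all (5%)

print(partial_interval_estimate(off_task, session, 600))  # 10-min intervals -> 100.0
print(partial_interval_estimate(off_task, session, 10))   # 10-s intervals  -> 5.0
```

With 10-minute intervals every interval contains at least one lapse, so a student off task 5% of the time is scored off task 100% of the time; 10-second intervals recover the true figure.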

Poorly Scheduled Measurement Periods

The observation schedule should be standardized to provide an equal opportunity for the occurrence or nonoccurrence of the behavior across sessions and consistent environmental conditions from one observation session to the next. When neither of these requirements is met, the resultant data may not be representative and may be invalid. If observation periods are scheduled at times when and/or places where the frequency of behavior is atypical, the data may not represent periods of high or low responding. For example, measuring students' on-task behavior during only the first 5 minutes of each day's 20-minute cooperative learning group activity may yield data that make on-task behavior appear higher than it actually is over the entire activity.

When data will be used to assess the effects of an intervention or treatment, the most conservative observation times should be selected. That is, the target behavior should be measured during those times when its frequency of occurrence is most likely to be different from the desired or predicted outcomes of the treatment. Measurement of behaviors targeted for reduction should occur during times when those behaviors are most likely to occur at their highest response rates. Conversely, behaviors targeted for increase should be measured when high-frequency responding is least likely. If an intervention is not planned—as might be the case in a descriptive study—it is important to select the observation times most likely to yield data that are generally representative of the behavior.


Insensitive and/or Limited Measurement Scales

Data that are artifacts may result from using measurement scales that cannot detect the full range of relevant values or that are insensitive to meaningful changes in behavior. Data obtained with a measurement scale that does not detect the full range of relevant performances may incorrectly imply that behavior cannot occur at levels below or above obtained measures because the scale has imposed an artificial floor or ceiling on performance. For example, measuring a student's oral reading fluency by giving him a 100-word passage to read in 1 minute may yield data that suggest that his maximum performance is 100 wpm.

A measurement scale that is over- or undersensitive to relevant changes in behavior may produce data that show misleadingly that meaningful behavior change has (or has not) occurred. For example, using a percentage measure scaled in 10% increments to evaluate the effects of an intervention to improve quality control in a manufacturing plant may not reveal important changes in performance if improvement in the percentage of correctly fabricated widgets from a baseline level of 92% to a range of 97 to 98% is the difference between unacceptable and acceptable (i.e., profitable) performance.
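The widget example reduces to simple arithmetic. In this sketch the coarse scale is assumed to record only the last 10% mark a value has reached (the rounding rule is an assumption, not from the text); under that rule the economically meaningful improvement from 92% to 97.5% vanishes from the data.

```python
# Sketch (assumed rounding rule): a scale graduated in coarse increments
# records only the last mark reached, hiding a meaningful change.
def to_scale(percent, increment):
    """Truncate a percentage to the last completed mark on the scale."""
    return int(percent // increment) * increment

baseline, intervention = 92.0, 97.5  # values from the widget example

coarse = (to_scale(baseline, 10), to_scale(intervention, 10))
fine = (to_scale(baseline, 1), to_scale(intervention, 1))
print(coarse)  # (90, 90): no change visible on the 10% scale
print(fine)    # (92, 97): the improvement is detectable
```

The behavior changed, but the 10%-increment scale reports the same value before and after the intervention; a 1%-increment scale detects the difference.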

Threats to Measurement Accuracy and Reliability

The biggest threat to the accuracy and reliability of data in applied behavior analysis is human error. Unlike the experimental analysis of behavior, in which measurement is typically automated and conducted by machines, most investigations in applied behavior analysis use human observers to measure behavior.³ Factors that contribute to human measurement error include poorly designed measurement systems, inadequate observer training, and expectations about what the data should look like.

Poorly Designed Measurement System

Unnecessarily cumbersome and difficult-to-use measurement systems create needless loss of accuracy and reliability. Collecting behavioral data in applied settings requires attention, keen judgment, and perseverance. The more taxing and difficult a measurement system is to use, the less likely an observer will be to consistently detect and record all instances of the target behavior. Simplifying the measurement system as much as possible minimizes measurement errors.

³We recommend using automatic data recording devices whenever possible. For example, to measure the amount of exercise by boys on stationary bicycles, DeLuca and Holborn (1992) used magnetic counters that automatically recorded the number of wheel revolutions.

The complexity of measurement includes such variables as the number of individuals observed, the number of behaviors recorded, the duration of observation periods, and/or the duration of the observation intervals, all of which may affect the quality of measurement. For instance, observing several individuals is more complex than observing one person; recording several behaviors is more complex than recording a single behavior; using contiguous 5-second observation intervals with no time between intervals to record the results of the observation is more difficult than a system in which time is reserved for recording data.

Specific recommendations concerning reducing complexity depend on the specific nature of the study. However, when using time sampling measurements, applied behavior analysts can consider modifications such as decreasing the number of simultaneously observed individuals or behaviors, decreasing the duration of the observation sessions (e.g., from 30 minutes to 15 minutes), and increasing the duration of time intervals (e.g., from 5 to 10 seconds). Requiring more practice during observer training, establishing a higher criterion for mastery of the observational code, and providing more frequent feedback to observers may also reduce the possible negative effects of complex measurement.

Inadequate Observer Training

Careful attention must be paid to the selection and training of observers. Explicit and systematic training of observers is essential for the collection of trustworthy data. Observation and coding systems require observers to discriminate the occurrence and nonoccurrence of specific classes of behaviors or events against an often complex and dynamic background of other behaviors or events and to record their observations onto a data sheet. Observers must learn the definitions for each response class or event to be measured; a code or symbol notation system for each variable; a common set of recording procedures such as keystrokes or scan movements; and a method for correcting inadvertent handwritten, keystroke, or scan mistakes (e.g., writing a plus sign instead of a minus sign, hitting the F6 key instead of the F5 key, scanning an incorrect bar code).

Selecting Observers Carefully

Admittedly, applied researchers often scramble to find data collectors, but not all volunteers should be accepted into training. Potential observers should be interviewed to determine past experiences with observation and measurement activities, current schedule and upcoming commitments, work ethic and motivation, and overall social skills. The interview might include a pretest to determine current observation and skill levels. This can be accomplished by having potential observers watch short video clips of behaviors similar to what they may be asked to observe and noting their performance against a criterion.

Training Observers to an Objective Standard of Competency

Observer trainees should meet a specified criterion for recording before conducting observations in applied settings. During training, observers should practice recording numerous examples and nonexamples of the target behavior(s) and receive a critique and performance feedback. Observers should have numerous practice sessions before actual data collection. Training should continue until a predetermined criterion is achieved (e.g., 95% accuracy for two or three consecutive sessions). For example, in training observers to measure the completion of preventive maintenance tasks of heavy equipment by military personnel, Komaki (1998) required three consecutive sessions of at least 90% agreement with a true value.

Various methods can be used to train observers. These include sample vignettes, narrative descriptions, video sequences, role playing, and practice sessions in the environment in which actual data will be collected. Practice sessions in natural settings are especially beneficial because they allow both observers and participants to adapt to each other's presence and may reduce the reactive effects of the presence of observers on participants' behavior. The following steps are an example of a systematic approach for training observers.

Step 1 Trainees read the target behavior definitions and become familiar with data collection forms, procedures for recording their observations, and the proper use of any measurement or recording devices (e.g., tape recorders, stopwatches, laptops, PDAs, bar code scanners).

Step 2 Trainees practice recording simplified narrative descriptions of behavioral vignettes until they obtain 100% accuracy over a predetermined number of instances.

Step 3 Trainees practice recording longer, more complex narrative descriptions of behavioral vignettes until they obtain 100% accuracy for a predetermined number of episodes.

Step 4 Trainees practice observing and recording data from videotaped or role-played vignettes depicting the target behavior(s) at the same speed and complexity as they will occur in the natural environment. Training vignettes should be scripted and sequenced to provide trainees practice making increasingly difficult discriminations between the occurrence and nonoccurrence of the target behavior(s). Having trainees rescore the same series of vignettes a second time and comparing the reliability of their measures provides an assessment of the consistency with which the trainees are applying the measurement system. Trainees remain at this step until their data reach preestablished accuracy and reliability criteria. (If the study involves collecting data from natural permanent products such as compositions or academic worksheets, Steps 2 through 4 should provide trainees with practice scoring increasingly extensive and more difficult to score examples.)

Step 5 Practicing data collection in the natural environment is the final step of observer training. An experienced observer accompanies the trainee and simultaneously and independently measures the target behaviors. Each practice session ends with the trainee and experienced observer comparing their data sheets and discussing any questionable or heretofore unforeseen instances. Training continues until a preestablished criterion of agreement between the experienced observer and the trainee is achieved (e.g., at least 90% for three consecutive sessions).

Providing Ongoing Training to Minimize Observer Drift

Over the course of a study, observers sometimes alter, often unknowingly, the way they apply a measurement system. Called observer drift, these unintended changes in the way data are collected may produce measurement error. Observer drift usually entails a shift in the observer's interpretation of the definition of the target behavior from that used in training. Observer drift occurs when observers expand or compress the original definition of the target behavior. For example, observer drift might be responsible for the same behaviors by a child that were recorded by an observer as instances of noncompliance during the first week of a study being scored as instances of compliance during the study's final week. Observers are usually unaware of the drift in their measurement.

Observer drift can be minimized by occasional observer retraining or booster sessions throughout the investigation. Continued training provides the opportunity for observers to receive frequent feedback on the accuracy and reliability of measurement. Ongoing training can occur at regular, prescheduled intervals (e.g., every Friday morning) or randomly.

Unintended Influences on Observers

Ideally, data reported by observers have been influenced only by the actual occurrences and nonoccurrences of the target behavior(s) they have been trained to measure. In reality, however, a variety of unintended and undesired influences on observers can threaten the accuracy and reliability of the data they report. Common causes of this type of measurement error include presuppositions an observer may hold about the expected outcomes of the data and an observer's awareness that others are measuring the same behavior.

Observer Expectations

Observer expectations that the target behavior should occur at a certain level under particular conditions, or change when a change in the environment has been made, pose a major threat to accurate measurement. For example, if an observer believes or predicts that a teacher's implementation of a token economy should decrease the frequency of inappropriate student behavior, she may record fewer inappropriate behaviors during the token reinforcement condition than she would have recorded without holding that expectation. Data influenced by an observer's expectations or efforts to obtain results that will please the researcher are characterized by measurement bias.

The surest way to minimize measurement bias caused by observer expectations is to use naive observers. A totally naive observer is a trained observer who is unaware of the study's purpose and/or the experimental conditions in effect during a given phase or observation period. Researchers should inform observer trainees that they will receive limited information about the study's purpose and why that is. However, maintaining observers' naiveté is often difficult and sometimes impossible.

When observers are aware of the purpose or hypothesized results of an investigation, measurement bias can be minimized by using target behavior definitions and recording procedures that will give a conservative picture of the behavior (e.g., whole-interval recording of on-task behavior with 10-second rather than 5-second intervals), frank and repeated discussion with observers about the importance of collecting accurate data, and frequent feedback to observers on the extent to which their data agree with true values or data obtained by observers who are naive. Observers should not receive feedback about the extent to which their data confirm or run counter to hypothesized results or treatment goals.

Observer Reactivity

Measurement error resulting from an observer's awareness that others are evaluating the data he reports is called observer reactivity. Like reactivity that may occur when participants are aware that their behavior is being observed, the behavior of observers (i.e., the data they record and report) can be influenced by the knowledge that others are evaluating the data. For example, knowing that the researcher or another observer is watching the same behavior at the same time, or will monitor the measurement through video- or audiotape later, may produce observer reactivity. If the observer anticipates that another observer will record the behavior in a certain way, his data may be influenced by what he anticipates the other observer may record.

Monitoring observers as unobtrusively as possible on an unpredictable schedule helps reduce observer reactivity. Separating multiple observers by distance or partition reduces the likelihood that their measures will be influenced by one another during an observation. One-way mirrors in some research and clinical settings eliminate visual contact between the primary and secondary observers. If sessions are audiotaped or videotaped, the secondary observer can measure the behavior at a later time and the primary observer never has to come into contact with the secondary observer. In settings where one-way mirrors are not possible, and where audio- or videotaping may be intrusive, the secondary observer might begin measuring the behavior at a time unknown to the primary observer. For example, if the primary observer begins measuring behavior with the first interval, the secondary observer could start measuring behavior after 10 minutes have elapsed. The intervals used for comparisons would begin at the 10-minute mark, ignoring those intervals that the primary observer recorded beforehand.

Assessing the Accuracy and Reliability of Behavioral Measurement

After designing a measurement system that will produce a valid representation of the target behavior and training observers to use it in a manner that is likely to yield accurate and reliable data, the researcher's next measurement-related tasks are evaluating the extent to which the data are, in fact, accurate and reliable. Essentially, all procedures for assessing the accuracy and reliability of behavioral data entail some form of "measuring the measurement system."

Assessing the Accuracy of Measurement

Measurement is accurate when the observed values (i.e., the numbers obtained by measuring an event) match the true values of the event. The fundamental reason for determining the accuracy of data is obvious: No one wants to base research conclusions or make treatment decisions on faulty data. More specifically, conducting accuracy assessments serves four interrelated purposes. First, it is important to determine early in an analysis whether the data are good enough to serve as the basis for making experimental or treatment decisions. The first person that the researcher or practitioner must try to convince that the data are accurate is herself. Second, accuracy assessments enable the discovery and correction of specific instances of measurement error. The two other approaches to assessing the quality of data to be discussed later in this chapter—reliability assessments and interobserver agreement—can alert the researcher to the likelihood of measurement errors, but neither approach identifies errors. Only the direct assessment of measurement accuracy allows practitioners or applied researchers to detect and correct faulty data.

A third reason for conducting accuracy assessments is to reveal consistent patterns of measurement error, which can lead to the overall improvement or calibration of the measurement system. When measurement error is consistent in direction and value, the data can be adjusted to compensate for the error. For example, knowing that John's bicycle computer reliably obtained a measure of 68 miles for a route with a true value of 58 miles led not only to the cyclists correcting the data in hand (in this case, confessing to one another and to their friend Lee that they had not ridden as many miles as previously claimed) but also to their calibrating the measurement instrument so that future measures would be more accurate (in this case, adjusting the wheel circumference setting on John's bike computer).

Calibrating any measurement tool, whether it is a mechanical device or a human observer, entails comparing the data obtained by the tool against a true value. The measure obtained by the Department of Transportation's wheel odometer served as the true value for calibrating John's bike computer. Calibration of a timing device such as a stopwatch or countdown timer could be made against a known standard: the "atomic clock."4 If no differences are detected when comparing the timing device against the atomic clock, or if the differences are tolerable for the intended purposes of measurement, then calibration is satisfied. If significant differences are found, the timing device would need to be reset to the standard. We recommend frequent accuracy assessments in the beginning stages of an analysis. Then, if the assessments have produced high accuracy, less frequent assessments can be conducted to check the calibration of the recorders.

4The official time in the United States can be accessed through the National Bureau of Standards and the United States Naval Observatory atomic clock (actually 63 atomic clocks are averaged to determine official time): http://tycho.usno.navy.mil/what1.html. The atomic clock is accurate to 1 billionth of a second per day, or 1 second per 6 million years!

5The preferred spelling of a word may change (e.g., judgement becomes judgment), but in such cases a new true value is established.

A fourth reason for conducting accuracy assessments is to assure consumers that the data are accurate. Including the results of accuracy assessments in research reports helps readers judge the trustworthiness of the data being offered for interpretation.

Establishing True Values

"There is only one way to assess the accuracy of a set of measures—by comparing observed values to true values. The comparison is relatively easy; the challenge is often obtaining measures of behavior that can legitimately be considered true values" (Johnston & Pennypacker, 1993a, p. 138). As defined previously, a true value is a measure obtained by procedures that are independent of and different from the procedures that produced the data being evaluated and for which the researcher has taken "special or extraordinary precautions to ensure that all possible sources of error have been avoided or removed" (p. 136).

True values for some behaviors are evident and universally accepted. For example, obtaining the true values of correct responses in academic areas such as math and spelling is straightforward. The correct response to the arithmetic problem 2 + 2 = ? has a true value of 4, and the Oxford English Dictionary is a source of true values for assessing the accuracy of measuring the spelling of English words.5 Although not universal, true values for many socially significant behaviors of interest to applied researchers and practitioners can be established conditionally on local context. For example, the correct response to the question "Name the three starches recommended as thickeners for pan gravy" on a quiz given to students in a culinary school has no universal true value. Nevertheless, a true value relevant to the students taking the quiz can be found in the instructor's course materials.

True values for each of the preceding examples were obtained through sources independent of the measures to be evaluated. Establishing true values for many behaviors studied by applied behavior analysts is difficult because the process for determining a true value must be different from the measurement procedures used to obtain the data one wishes to compare to the true value. For example, determining true values for occurrences of a behavior such as cooperative play between children is difficult because the only way to attach any values to the behavior is to measure it with the same observation procedures used to produce the data in the first place.

It is easy to mistake values that only appear to be true values for actual true values. For example, suppose that four well-trained and experienced observers view a videotape of teacher and student interactions. Their task is to identify the true value of all instances of teacher praise contingent on academic accomplishments. Each observer views the tape independently and counts all occurrences of contingent teacher praise. After recording their respective observations, the four observers share their measurements, discuss disagreements, and suggest reasons for the disagreements. The observers independently record contingent praise a second time. Once again they share and discuss their results. After repeating the recording and sharing process several times, all observers agree that they have recorded every instance of teacher praise. However, the observers did not produce a true value of teacher praise, for two reasons: (1) the observers could not calibrate their measurement of teacher praise to an independent standard of teacher praise, and (2) the process used to identify all instances of teacher praise may have been biased (e.g., one of the observers may have convinced the others that her measures represented the true value). When true values cannot be established, researchers must rely on reliability assessments and measures of interobserver agreement to evaluate the quality of their data.

Accuracy Assessment Procedures

Determining the accuracy of measurement is a straightforward process of calculating the correspondence of each measure, or datum, assessed to its true value. For example, a researcher or practitioner assessing the accuracy of the score for a student's performance on a 30-word spelling test reported by a grader would compare the grader's scoring of each word on the test with the true value for that word found in a dictionary. Each word on the test that matched the correct letter sequence (i.e., orthography) provided by the dictionary and was marked correct by the grader would be an accurate measure by the grader, as would each word marked incorrect by the grader that did not match the dictionary's spelling. If the original grader's scoring of 29 of the test's 30 words corresponded to the true values for those words, the grader's measure would be 96.7% accurate.
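The arithmetic of such an accuracy assessment can be sketched in a few lines of Python. This is an illustration only, not a procedure from the text; the function name and data are hypothetical, with each grader mark modeled as a boolean that either matches or does not match its true value:

```python
# Hypothetical sketch of an accuracy assessment: the percentage of
# observed measures that match their true values.
def accuracy_percent(observed, true_values):
    """Return the percentage of observed values matching true values."""
    matches = sum(o == t for o, t in zip(observed, true_values))
    return 100 * matches / len(true_values)

# Suppose a grader's scoring matched the dictionary (true values)
# on 29 of a spelling test's 30 words.
grader_marks = [True] * 29 + [False]
true_values = [True] * 30
print(round(accuracy_percent(grader_marks, true_values), 1))  # 96.7
```

The same function works for any measure that can be compared item by item against an independently obtained true value.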

Although an individual researcher or practitioner can assess the accuracy of the data she has collected, multiple independent observers are often used. Brown, Dunne, and Cooper (1996) described the procedures they used to assess the accuracy of measurement in a study of oral reading comprehension as follows:

An independent observer reviewed one student's audiotape of the delayed one-minute oral retell each day to assess our accuracy of measurement, providing an assessment of the extent that our counts of delayed retells approximated the true value of the audiotaped correct and incorrect retells. The independent observer randomly selected each day's audiotape by drawing a student's name from a hat, then listened to the tape and scored correct and incorrect retells using the same definitions as the teacher. Observer scores were compared to teacher scores. If there was a discrepancy between these scores, the observer and the teacher reviewed the tape (i.e., the true value) together to identify the source of the discrepancy and corrected the counting error on the data sheet and the Standard Celeration Chart. The observer also used a stopwatch to time the duration of the audiotape to ensure accuracy of the timings. We planned to have the teacher re-time the presentation or retell and recalculate the frequency per minute for each timing discrepancy of more than 5 seconds. All timings, however, met the 5-second accuracy definition. (p. 392)

Reporting Accuracy Assessments

In addition to describing the procedures used to assess the accuracy of the data, researchers should report the number and percentage of measures that were checked for accuracy, the degree of accuracy found, the extent of measurement error detected, and whether those measurement errors were corrected in the data. Brown and colleagues (1996) used the following narrative to report the results of their accuracy assessment:

The independent observer and the teacher achieved 100% agreement on 23 of the 37 sessions checked. The teacher and the observer reviewed the tape together to identify the source of measurement errors for the 14 sessions containing measurement discrepancies and corrected the measurement errors. Accurate data from the 37 sessions rechecked were then displayed on the Standard Celeration Charts. The magnitude of the measurement errors was very small, often a difference of 1 to 3 discrepancies. (p. 392)

A full description and reporting of the results of accuracy assessment helps readers of a study evaluate the accuracy of all of the data included in the report. For example, suppose a researcher reported that she conducted accuracy checks on a randomly selected 20% of the data, found those measures to be 97% accurate with the 3% error being nonbiased, and corrected the assessed data as needed. A reader of the study would know that 20% of the data are 100% accurate and could be fairly confident that the remaining 80% of the data (i.e., all of the measures that were not checked for accuracy) are 97% accurate.

Assessing the Reliability of Measurement

Measurement is reliable when it yields the same values across repeated measures of the same event. Reliability is established when the same observer measures the same data set repeatedly from archived response products such as audiovisual recordings and other forms of permanent products. The more frequently a consistent pattern of observation is produced, the more reliable the measurement (Thompson et al., 2000). Conversely, if similar observed values are not achieved with repeated observations, the data are considered unreliable. This leads to a concern about accuracy, which is the primary indicator of quality measurement.

But, as we have pointed out repeatedly, reliable data are not necessarily accurate data. As the three bicyclists discovered, totally reliable (i.e., consistent) measurement may be totally wrong. Relying on the reliability of measurement as the basis for determining the accuracy of measurement would be, as the philosopher Wittgenstein (1953) noted, "As if someone were to buy several copies of the morning paper to assure himself that what it said was true" (p. 94).

In many research studies and most practical applications, however, checking the accuracy of every measure is not possible or feasible. In other cases, true values for measures of the target behavior may be difficult to establish. When confirming the accuracy of each datum is not possible or practical, or when true values are not available, knowing that a measurement system has been applied with a high degree of consistency contributes to confidence in the overall trustworthiness of the data. Although high reliability cannot confirm high accuracy, discovering a low level of reliability signals that the data are suspect enough to be disregarded until problems in the measurement system can be determined and repaired.

Assessing the reliability of behavioral measurement requires either a natural or contrived permanent product so the observer can remeasure the same events. For example, the reliability of measurement of variables such as the number of adjectives or action verbs in students' essays could be assessed by having an observer rescore the essays. The reliability of measurement of the number and type of response prompts and feedback statements by parents to their children at the family dinner table could be assessed by having an observer replay and rescore videotapes of the family's mealtime and compare the data obtained from the two measurements.

Observers should not remeasure the same permanent product soon after measuring it the first time. Doing so might result in the measures from the second scoring being influenced by what the observer remembered from the initial scoring. To avoid such unwanted influence, a researcher can insert several previously scored essays or videotapes randomly into the sequence of "new data" being recorded by observers.

Using Interobserver Agreement to Assess Behavioral Measurement

Interobserver agreement is the most commonly used indicator of measurement quality in applied behavior analysis. Interobserver agreement (IOA) refers to the degree to which two or more independent observers report the same observed values after measuring the same events. There are numerous techniques for calculating IOA, each of which provides a somewhat different view of the extent and nature of agreement and disagreement between observers (e.g., Hartmann, 1977; Hawkins & Dotson, 1975; Page & Iwata, 1986; Poling, Methot, & LeSage, 1995; Repp, Dietz, Boles, Dietz, & Repp, 1976).

Benefits and Uses of IOA

Obtaining and reporting interobserver agreement serves four distinct purposes. First, a certain level of IOA can be used as a basis for determining the competence of new observers. As noted earlier, a high degree of agreement between a newly trained observer and an experienced observer provides an objective index of the extent to which the new observer is measuring the behavior in the same way as experienced observers.

Second, systematic assessment of IOA over the course of a study can detect observer drift. When observers who obtained the same, or nearly the same, observed values when measuring the same behavioral events at the beginning of a study (i.e., IOA was high) obtain different measures of the same events later in the study (i.e., IOA is now low), one of the observers may be using a definition of the target behavior that has drifted. Deteriorating IOA assessments cannot indicate with assurance which observer's data are being influenced by drift (or any other reason for disagreement), but the information reveals the need for further evaluation of the data and/or for retraining and calibration of the observers.

Third, knowing that two or more observers consistently obtained similar data increases confidence that the definition of the target behavior was clear and unambiguous and the measurement code and system not too difficult. Fourth, for studies that employ multiple observers as data collectors, consistently high levels of IOA increase confidence that variability in the data is not a function of which observer(s) happened to be on duty for any given session, and therefore that changes in the data more likely reflect actual changes in the behavior.

The first two reasons for assessing IOA are proactive: They help researchers determine and describe the degree to which observers have met training criteria and detect possible drift in observers' use of the measurement system. The second two purposes or benefits of IOA serve as summative descriptors of the consistency of measurement across observers. By reporting the results of IOA assessments, researchers enable consumers to judge the relative believability of the data as trustworthy and deserving of interpretation.

Requisites for Obtaining Valid IOA Measures

A valid assessment of IOA depends on three equally important criteria. Although these criteria are perhaps obvious, it is nonetheless important to make them explicit. Observers (usually two, though there may be more) must (a) use the same observation code and measurement system, (b) observe and measure the same participant(s) and events, and (c) observe and record the behavior independent of any influence from one another.

Observers Must Use the Same Measurement System

Interobserver agreement assessments conducted for any of the four previously stated reasons require observers to use the same definitions of the target behavior, observation procedures and codes, and measurement devices. Beyond using the same measurement system, all observers participating in IOA measures used to assess the believability of data (as opposed to evaluating observer trainees' performance) should have received identical training with the measurement system and achieved the same level of competence in using it.

Observers Must Measure the Same Events

The observers must be able to observe the same subject(s) at precisely the same observation intervals and periods. IOA for data obtained by real-time measurement requires that both observers be in the setting simultaneously. Real-time observers must be positioned such that each has a similar view of the subject(s) and environment. Two observers sitting on opposite sides of a classroom, for example, might obtain different measures because the different vantage points enable only one observer to see or hear some occurrences of the target behavior.

Observers must begin and end the observation period at precisely the same time. Even a difference of a few seconds between observers may produce significant measurement disagreements. To remedy this situation, the timing devices could be started simultaneously outside the observation setting, before data collection begins, with the understanding that data collection would actually start at a prearranged time (e.g., exactly at the beginning of the fifth minute). Alternatively, but less desirably, one observer could signal the other at the exact moment the observation is to begin.

A common and effective procedure is for both observers to listen by earphones to an audiotape of prerecorded cues signaling the beginning and end of each observation interval. An inexpensive splitter device that enables two earphones to be plugged into the same tape recorder allows observers to receive simultaneous cues unobtrusively and without depending on one another.

When assessing IOA for data obtained from permanent products, the two observers do not need to measure the behavior simultaneously. For example, the observers could each watch and record data from the same video- or audiotape at different times. Procedures must be in place, however, to ensure that each observer watched or listened to the same tapes and that they started and stopped their independent observations at precisely the same point(s) on the tapes. Ensuring that two observers measure the same events when the target behavior produces natural permanent products, such as completed academic assignments or widgets manufactured, would include procedures such as clearly marking the session number, date, condition, and subject's name on the product and guarding the response products to ensure that they are not disturbed until the second observer has obtained his measure.

Observers Must Be Independent

The third essential ingredient for valid IOA assessment is ensuring that neither observer is influenced by the other's measurements. Procedures must be in place to guarantee each observer's independence. For example, observers conducting real-time measurement of behavior "must be situated so that they can neither see nor hear when the other observes and records a response" (Johnston & Pennypacker, 1993a, p. 147). Observers must not be seated or positioned so closely to one another that either observer can detect or be influenced by the other observer's recordings.

Giving the second observer academic worksheets or written assignments that have already been marked by another observer would violate the observers' independence. To maintain independence, the second observer must score photocopies of unadulterated and unmarked worksheets or assignments as completed by the subjects.

Methods for Calculating IOA

There are numerous methods for calculating IOA, each of which provides a somewhat different view of the extent and nature of agreement and disagreement between observers (e.g., Hartmann, 1977; Hawkins & Dotson, 1975; Page & Iwata, 1986; Poling, Methot, & LeSage, 1995; Repp, Dietz, Boles, Dietz, & Repp, 1976). The following explanation of different IOA formats is organized by the three major methods for measuring behavioral data: event recording, timing, and interval recording or time sampling. Although other statistics are sometimes used, the percentage of agreement between observers is by far the most common convention for reporting IOA in applied behavior analysis.6 Therefore, we have provided the formula for calculating a percentage of agreement for each type of IOA.

IOA for Data Obtained by Event Recording

The various methods for calculating interobserver agreement for data obtained by event recording are based on comparing (a) the total count recorded by each observer per measurement period, (b) the counts tallied by each observer during each of a series of smaller intervals of time within the measurement period, or (c) each observer's count of 1 or 0 on a trial-by-trial basis.

Total Count IOA.7 The simplest and crudest indicator of IOA for event recording data compares the total count recorded by each observer per measurement period. Total count IOA is expressed as a percentage of agreement between the total number of responses recorded by two observers and is calculated by dividing the smaller of the counts by the larger count and multiplying by 100, as shown by this formula:

(smaller count ÷ larger count) × 100 = total count IOA %

For example, suppose that a child care worker in a residential setting recorded that 9-year-old Mitchell used profane language 10 times during a 30-minute observation period and that a second observer recorded that Mitchell swore 9 times during that same period. The total count IOA for the observation period would be 90% (i.e., 9 ÷ 10 × 100 = 90%).

6IOA can be calculated by product-moment correlations, which range from +1.0 to −1.0. However, expressing IOA by correlation coefficients has two major weaknesses: (a) High coefficients can be achieved if one observer consistently records more occurrences of the behavior than the other, and (b) correlation coefficients provide no assurance that the observers agreed on the occurrence of any given instance of behavior (Poling et al., 1995). Hartmann (1977) described the use of kappa (k) as a measure of IOA. The k statistic was developed by Cohen (1960) as a procedure for determining the proportion of agreements between observers that would be expected as a result of chance. However, the k statistic is seldom reported in the behavior analysis literature.

7Multiple terms are used in the applied behavior analysis literature for the same methods of calculating IOA, and the same terms are sometimes used with different meanings. We believe the IOA terms used here represent the discipline's most used conventions. In an effort to point out and preserve some meaningful distinctions among variations of IOA measures, we have introduced several terms.

Great caution must be used in interpreting total count IOA because a high degree of agreement provides no assurance that the two observers recorded the same instances of behavior. For example, the following is one of the countless ways that the data reported by the two observers who measured Mitchell's use of profane language may not represent anywhere close to 90% agreement that they measured the same behaviors. The child care worker could have recorded all 10 occurrences of profane language on her data sheet during the first 15 minutes of the 30-minute observation period, a time when the second observer recorded just 4 of the 9 total responses he reported.
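The total count IOA formula amounts to a one-line computation. The sketch below is a hypothetical illustration, not from the text; treating two counts of zero as 100% agreement is an assumption added to avoid division by zero:

```python
# Hypothetical sketch of total count IOA: smaller count divided by
# larger count, multiplied by 100.
def total_count_ioa(count1, count2):
    """Return total count IOA as a percentage for two observers' counts."""
    if count1 == count2 == 0:
        return 100.0  # assumption: two zero counts are in full agreement
    return 100 * min(count1, count2) / max(count1, count2)

# Mitchell example from the text: observers recorded 10 and 9 responses.
print(total_count_ioa(10, 9))  # 90.0
```

As the text cautions, the 90% figure says nothing about whether the two observers recorded the same instances of behavior.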

Mean Count-per-Interval IOA. The likelihood that significant agreement between observers' count data means they measured the same events can be increased by (a) dividing the total observation period into a series of smaller counting times, (b) having the observers record the number of occurrences of the behavior within each interval, (c) calculating the agreement between the two observers' counts within each interval, and (d) using the agreements per interval as the basis for calculating the IOA for the total observation period. The hypothetical data shown in Figure 2 will be used to illustrate two methods for calculating count-per-interval IOA: mean count-per-interval and exact count-per-interval. During a 30-minute observation period, two observers independently tallied the number of times each witnessed an instance of a target behavior during each of six 5-minute intervals.

Even though each observer recorded a total of 15 responses within the 30-minute period, their data sheets reveal a high degree of disagreement within the observation period. Although the total count IOA for the entire observation period was 100%, agreement between the two observers within each 5-minute interval ranged from 0% to 100%, yielding a mean count-per-interval IOA of 65.3%.

Mean count-per-interval IOA is calculated by this formula:

(Int 1 IOA + Int 2 IOA + … + Int n IOA) ÷ n intervals = mean count-per-interval IOA %

Exact Count-per-Interval IOA. The most stringent description of IOA for most data sets obtained by event recording is obtained by computing the exact count-per-interval IOA—the percentage of total intervals in which two observers recorded the same count. The two observers whose data are shown in Figure 2 recorded



Figure 2 Two methods for computing interobserver agreement (IOA) for event recording data tallied within smaller time intervals.

Interval (Time) Observer 1 Observer 2 IOA per interval

1 (1:00–1:05) /// // 2/3 = 67%

2 (1:05–1:10) /// /// 3/3 = 100%

3 (1:10–1:15) / // 1/2 = 50%

4 (1:15–1:20) //// /// 3/4 = 75%

5 (1:20–1:25) 0 / 0/1 = 0%

6 (1:25–1:30) //// //// 4/4 = 100%

Total count = 15 | Total count = 15 | Mean count-per-interval IOA = 65.3%; Exact count-per-interval IOA = 33%

the same number of responses in just two of the six intervals, an exact count-per-interval IOA of 33%.
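Both count-per-interval statistics can be computed from the Figure 2 tallies. The sketch below is hypothetical code, not from the text; the function name is invented, and treating two equal counts (including 0 and 0) as 100% agreement for an interval is an assumption:

```python
# Hypothetical sketch of mean and exact count-per-interval IOA,
# using the tallies from Figure 2.
def interval_ioa(c1, c2):
    """Agreement for one interval: smaller count / larger count * 100."""
    return 100.0 if c1 == c2 else 100 * min(c1, c2) / max(c1, c2)

obs1 = [3, 3, 1, 4, 0, 4]  # Observer 1's tallies for the six intervals
obs2 = [2, 3, 2, 3, 1, 4]  # Observer 2's tallies

per_interval = [interval_ioa(a, b) for a, b in zip(obs1, obs2)]
mean_ioa = sum(per_interval) / len(per_interval)
exact_ioa = 100 * sum(a == b for a, b in zip(obs1, obs2)) / len(obs1)

print(round(mean_ioa, 1), round(exact_ioa, 1))  # 65.3 33.3
```

The two results reproduce the figures reported in the text: a mean count-per-interval IOA of 65.3% and an exact count-per-interval IOA of 33%.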

The following formula is used to calculate exact count-per-interval IOA:

(number of intervals of 100% IOA ÷ n intervals) × 100 = exact count-per-interval IOA %

Trial-by-Trial IOA. The agreement between two observers who measured the occurrence or nonoccurrence of discrete trial behaviors, for which the count for each trial or response opportunity can only be 0 or 1, can be calculated by comparing the observers' total counts or by comparing their counts on a trial-by-trial basis. Calculating total count IOA for discrete trial data uses the same formula as total count IOA for free operant data: The smaller of the two counts reported by the observers is divided by the larger count and multiplied by 100, but in this case the number of trials for which each observer recorded the occurrence of the behavior is the count. Suppose, for example, that a researcher and a second observer independently measured the occurrence or nonoccurrence of a child's smiling behavior during each of 20 trials in which the researcher showed the child a funny picture. The two observers compare data sheets at the end of the session and discover that they recorded smiles on 14 and 15 trials, respectively. The total count IOA for the session is 93% (i.e., 14 ÷ 15 × 100 = 93.3%), which might lead an inexperienced researcher to conclude that the target behavior has been well defined and is being measured with consistency by both observers. Those conclusions, however, would not be warranted.

Total count IOA of discrete trial data is subject to the same limitations as total count IOA of free operant data: It tends to overestimate the extent of actual agreement and does not indicate how many responses, or which responses, trials, or items, posed agreement problems. Comparing the two observers' counts of 14 and 15 trials suggests that they disagreed on the occurrence of smiling on only 1 of 20 trials. However, it is possible that any of the 6 trials scored as "no smile" by the experimenter was scored as a "smile" trial by the second observer and that any of the 5 trials recorded by the second observer as "no smile" was recorded as a "smile" by the experimenter. Thus, the total count IOA of 93% may vastly overestimate the actual consistency with which the two observers measured the child's behavior during the session.

A more conservative and meaningful index of interobserver agreement for discrete trial data is trial-by-trial IOA, which is calculated by the following formula:

(number of trials [items] in agreement ÷ total number of trials [items]) × 100 = trial-by-trial IOA %

The trial-by-trial IOA for the two observers' smiling data, if calculated with the worst possible degree of agreement from the previous example—that is, if all 6 trials that the primary observer scored as "no smile" were recorded as "smile" trials by the second observer and all 5 trials marked by the second observer as "no smile" were recorded as "smile" trials by the experimenter—would be 45% (i.e., 9 trials scored in agreement ÷ 20 trials × 100).
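The worst-case arrangement just described can be verified with a short sketch. This is hypothetical code, not from the text, with each trial scored 1 for "smile" and 0 for "no smile":

```python
# Hypothetical sketch of trial-by-trial IOA for discrete trial data.
def trial_by_trial_ioa(obs1, obs2):
    """Trials scored in agreement, divided by total trials, times 100."""
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return 100 * agreements / len(obs1)

# Worst-case arrangement consistent with totals of 14 and 15 "smile"
# trials: the observers' "smile" trials overlap on only 9 of 20 trials.
observer1 = [1] * 14 + [0] * 6   # 14 smiles recorded
observer2 = [0] * 5 + [1] * 15   # 15 smiles recorded

print(trial_by_trial_ioa(observer1, observer2))  # 45.0
```

The 45% trial-by-trial figure exposes the disagreement that the 93% total count IOA for the same data conceals.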

IOA for Data Obtained by Timing

Interobserver agreement for data obtained by timing duration, response latency, or interresponse time (IRT) is obtained and calculated in essentially the same way as it is for event recording data. Two observers independently time the duration, latency, or IRT of the target behavior, and IOA is based on comparing either the total time obtained by each observer for the session or the times recorded by each observer per occurrence of the behavior (for duration measures) or per response (for latency and IRT measures).

Total Duration IOA. Total duration IOA is computed by dividing the shorter of the two durations reported by the observers by the longer duration and multiplying by 100.

As with total count IOA for event recording data, high total duration IOA provides no assurance that the observers recorded the same durations for the same occurrences of behavior. This is because a significant degree of disagreement between the observers' timings of individual responses may be canceled out in the sum. For example, suppose two observers recorded the following durations in seconds for five occurrences of a behavior:

R1 R2 R3 R4 R5

Observer 1: 35 15 9 14 17 (total duration = 90 seconds)

Observer 2: 29 21 7 14 14 (total duration = 85 seconds)

Total duration IOA for these data is a perhaps comforting 94% (i.e., 85 ÷ 90 × 100 = 94.4%). However, the two observers obtained the same duration for only one of the five responses, and their timings of specific responses varied by as much as 6 seconds. While recognizing this limitation of total duration IOA, when total duration is being recorded and analyzed as a dependent variable, reporting total duration IOA is appropriate. When possible, total duration IOA should be supplemented with mean duration-per-occurrence IOA, which is described next.
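Given per-response timings like those above, the total duration computation can be sketched as follows (a hypothetical helper, not from the text):

```python
def total_duration_ioa(durations1, durations2):
    """Shorter of the two observers' total durations divided by the longer, times 100."""
    total1, total2 = sum(durations1), sum(durations2)
    return min(total1, total2) / max(total1, total2) * 100

observer_1 = [35, 15, 9, 14, 17]  # total = 90 seconds
observer_2 = [29, 21, 7, 14, 14]  # total = 85 seconds

# High overall agreement despite sizable per-response discrepancies.
print(round(total_duration_ioa(observer_1, observer_2), 1))  # 94.4
```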

Mean Duration-per-Occurrence IOA. Mean duration-per-occurrence IOA should be calculated for duration-per-occurrence data, and it is a more conservative and usually more meaningful assessment of IOA for total duration data. The formula for calculating mean duration-per-occurrence IOA is similar to the one used to determine mean count-per-interval IOA.

Using this formula to calculate the mean duration-per-occurrence IOA for the two observers' timing data of the five responses just presented would entail the following steps:

1. Calculate duration-per-occurrence IOA for each response: R1, 29 ÷ 35 = .83; R2, 15 ÷ 21 = .71; R3, 7 ÷ 9 = .78; R4, 14 ÷ 14 = 1.0; and R5, 14 ÷ 17 = .82

2. Add the individual IOA percentages for each occurrence: .83 + .71 + .78 + 1.00 + .82 = 4.14

3. Divide the sum of the individual IOAs per occurrence by the total number of responses for which the two observers measured duration: 4.14 ÷ 5 = .828

4. Multiply by 100 and round to the nearest whole number: .828 × 100 = 83%
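The four steps can be collapsed into one hypothetical function; because it rounds only at the end rather than at each step, its result can differ from the hand calculation by a fraction of a percent:

```python
def mean_duration_per_occurrence_ioa(durations1, durations2):
    """Mean of the per-response shorter/longer duration ratios, times 100."""
    ratios = [min(d1, d2) / max(d1, d2)
              for d1, d2 in zip(durations1, durations2)]
    return sum(ratios) / len(ratios) * 100

# The five timings from the total duration IOA example above.
print(round(mean_duration_per_occurrence_ioa([35, 15, 9, 14, 17],
                                             [29, 21, 7, 14, 14])))  # 83
```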

This basic formula is also used to compute the mean latency-per-response IOA or mean IRT-per-response IOA for latency and IRT data. An observer's timings of latencies or IRTs in a session should never be added and the total time compared to a similar total time obtained by another observer as the basis for calculating IOA for latency and IRT measures.

In addition to reporting mean agreement per occurrence, IOA assessment for timing data can be enhanced with information about the range of differences between observers' timings and the percentage of responses for which the two observers each obtained measures within a certain range of error. For example: Mean duration-per-occurrence IOA for Temple's compliance was 87% (range across responses, 63 to 100%), and 96% of all timings obtained by the second observer were within +/–2 seconds of the primary observer's measures.

IOA for Data Obtained by Interval Recording/Time Sampling

Three techniques commonly used by applied behavior analysts to calculate IOA for interval data are interval-by-interval IOA, scored-interval IOA, and unscored-interval IOA.

Interval-by-Interval IOA. When using interval-by-interval IOA (sometimes referred to as the point-by-point and total interval method), the primary observer's record for each interval is matched to the secondary observer's record for the same interval. The formula for calculating interval-by-interval IOA is as follows:

interval-by-interval IOA % = (number of intervals agreed ÷ [number of intervals agreed + number of intervals disagreed]) × 100

total duration IOA % = (shorter duration ÷ longer duration) × 100

mean duration-per-occurrence IOA % = ([Dur IOA R1 + Dur IOA R2 + … + Dur IOA Rn] ÷ n responses with Dur IOA) × 100


Figure 3 When calculating interval-by-interval IOA, the number of intervals in which both observers agreed on the occurrence or the nonoccurrence of the behavior (shaded intervals) is divided by the total number of observation intervals. Interval-by-interval IOA for the data shown here is 70% (7/10).

Interval-by-Interval IOA

Interval no. 1 2 3 4 5 6 7 8 9 10

Observer 1 X X X 0 X X 0 X X 0

Observer 2 0 X X 0 X 0 0 0 X 0

X = behavior was recorded as occurring during interval

0 = behavior was recorded as not occurring during interval

Figure 4 Scored-interval IOA is calculated using only those intervals in which either observer recorded the occurrence of the behavior (shaded intervals). Scored-interval IOA for the data shown here is 33% (1/3).

Scored-Interval IOA

Interval no. 1 2 3 4 5 6 7 8 9 10

Observer 1 X 0 X 0 0 0 0 0 0 0

Observer 2 0 0 X 0 0 0 0 0 X 0

X = behavior was recorded as occurring during interval

0 = behavior was recorded as not occurring during interval

The hypothetical data in Figure 3 show the interval-by-interval method for calculating IOA based on the record of two observers who recorded the occurrence (X) and nonoccurrence (0) of behavior in each of 10 observation intervals. The observers' data sheets show that they agreed on the occurrence or the nonoccurrence of the behavior for seven intervals (Intervals 2, 3, 4, 5, 7, 9, and 10). Interval-by-interval IOA for this data set is 70% (i.e., 7 ÷ [7 + 3] × 100 = 70%).
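With occurrences coded 1 (X) and nonoccurrences 0, the Figure 3 calculation can be sketched as follows (an illustrative helper, not from the text):

```python
def interval_by_interval_ioa(record1, record2):
    """Intervals scored identically (occurrence or nonoccurrence),
    divided by all observation intervals, times 100."""
    agreed = sum(a == b for a, b in zip(record1, record2))
    return agreed / len(record1) * 100

# The Figure 3 records, with X coded as 1 and 0 as 0.
observer_1 = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]
observer_2 = [0, 1, 1, 0, 1, 0, 0, 0, 1, 0]

print(interval_by_interval_ioa(observer_1, observer_2))  # 70.0
```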

Interval-by-interval IOA is likely to overestimate the actual agreement between observers measuring behaviors that occur at very low or very high rates. This is because interval-by-interval IOA is subject to random or accidental agreement between observers. For example, with a behavior whose actual frequency of occurrence is only about 1 or 2 intervals per 10 observation intervals, even a poorly trained and unreliable observer who misses some of the few occurrences of the behavior and mistakenly records the behavior as occurring in some intervals when the behavior did not occur is likely to mark most intervals as nonoccurrences. As a result of this chance agreement, interval-by-interval IOA is likely to be quite high. Two IOA methods that minimize the effects of chance agreements for interval data on behaviors that occur at very low or very high rates are scored-interval IOA and unscored-interval IOA (Hawkins & Dotson, 1975).

Scored-Interval IOA. Only those intervals in which either or both observers recorded the occurrence of the target behavior are used in calculating scored-interval IOA. An agreement is counted when both observers recorded that the behavior occurred in the same interval, and each interval in which one observer recorded the occurrence of the behavior and the other recorded its nonoccurrence is counted as a disagreement. For example, for the data shown in Figure 4, only Intervals 1, 3, and 9 would be used in calculating scored-interval IOA. Intervals 2, 4, 5, 6, 7, 8, and 10 would be ignored because both observers recorded that the behavior did not occur in those intervals. Because the two observers agreed that the behavior occurred in only one (Interval 3) of the three scored intervals, the scored-interval IOA measure is 33% (1 interval of agreement divided by the sum of 1 interval of agreement plus 2 intervals of disagreement × 100 = 33%).
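Using the same 1/0 coding, scored-interval IOA with the Figure 4 records can be sketched as follows (the helper name is an assumption):

```python
def scored_interval_ioa(record1, record2):
    """Agreement computed over only the intervals in which either observer
    scored an occurrence (1); intervals both observers scored 0 are ignored."""
    scored = [(a, b) for a, b in zip(record1, record2) if a == 1 or b == 1]
    agreements = sum(a == b for a, b in scored)
    return agreements / len(scored) * 100

# The Figure 4 records (X = 1, 0 = 0): only Intervals 1, 3, and 9 are used.
observer_1 = [1, 0, 1, 0, 0, 0, 0, 0, 0, 0]
observer_2 = [0, 0, 1, 0, 0, 0, 0, 0, 1, 0]

print(round(scored_interval_ioa(observer_1, observer_2)))  # 33
```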



Figure 5 Unscored-interval IOA is calculated using only those intervals in which either observer recorded the nonoccurrence of the behavior (shaded intervals). Unscored-interval IOA for the data shown here is 50% (2/4).

Unscored-Interval IOA

Interval no. 1 2 3 4 5 6 7 8 9 10

Observer 1 X X X 0 X X 0 X X 0

Observer 2 0 X X 0 X X 0 X X X

X = behavior was recorded as occurring during interval

0 = behavior was recorded as not occurring during interval

For behaviors that occur at low rates, scored-interval IOA is a more conservative measure of agreement than interval-by-interval IOA. This is because scored-interval IOA ignores the intervals in which agreement by chance is highly likely. For example, using the interval-by-interval method for calculating IOA for the data in Figure 4 would yield an agreement of 80%. To avoid overinflated and possibly misleading IOA measures, we recommend using scored-interval interobserver agreement for behaviors that occur in approximately 30% or fewer of intervals.

Unscored-Interval IOA. Only intervals in which either or both observers recorded the nonoccurrence of the target behavior are considered when calculating unscored-interval IOA. An agreement is counted when both observers recorded the nonoccurrence of the behavior in the same interval, and each interval in which one observer recorded the nonoccurrence of the behavior and the other recorded its occurrence is counted as a disagreement. For example, only Intervals 1, 4, 7, and 10 would be used in calculating the unscored-interval IOA for the data in Figure 5 because at least one observer recorded the nonoccurrence of the behavior in each of those intervals. The two observers agreed that the behavior did not occur in Intervals 4 and 7. Therefore, the unscored-interval IOA in this example is 50% (2 intervals of agreement divided by the sum of 2 intervals of agreement plus 2 intervals of disagreement × 100 = 50%).
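A parallel sketch for unscored-interval IOA, using the Figure 5 records (the helper name is an assumption):

```python
def unscored_interval_ioa(record1, record2):
    """Agreement computed over only the intervals in which either observer
    scored a nonoccurrence (0); intervals both observers scored 1 are ignored."""
    unscored = [(a, b) for a, b in zip(record1, record2) if a == 0 or b == 0]
    agreements = sum(a == b for a, b in unscored)
    return agreements / len(unscored) * 100

# The Figure 5 records (X = 1, 0 = 0): Intervals 1, 4, 7, and 10 are used.
observer_1 = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]
observer_2 = [0, 1, 1, 0, 1, 1, 0, 1, 1, 1]

print(unscored_interval_ioa(observer_1, observer_2))  # 50.0
```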

For behaviors that occur at relatively high rates, unscored-interval IOA provides a more stringent assessment of interobserver agreement than does interval-by-interval IOA. To avoid overinflated and possibly misleading IOA measures, we recommend using unscored-interval interobserver agreement for behaviors that occur in approximately 70% or more of intervals.

Considerations in Selecting, Obtaining, and Reporting Interobserver Agreement

The guidelines and recommendations that follow are organized under a series of questions concerning the use of interobserver agreement to evaluate the quality of behavioral measurement.

How Often and When Should IOA Be Obtained?

Interobserver agreement should be assessed during each condition and phase of a study and be distributed across days of the week, times of day, settings, and observers. Scheduling IOA assessments in this manner ensures that the results will provide a representative (i.e., valid) picture of all data obtained in a study. Current practice and recommendations by authors of behavioral research methods texts suggest that IOA be obtained for a minimum of 20% of a study's sessions, and preferably between 25% and 33% of sessions (Kennedy, 2005; Poling et al., 1995). In general, studies using data obtained via real-time measurement will have IOA assessed for a higher percentage of sessions than studies with data obtained from permanent products.

The frequency with which data should be assessed via interobserver agreement will vary depending on the complexity of the measurement code, the number and experience of observers, the number of conditions and phases, and the results of the IOA assessments themselves. More frequent IOA assessments are expected in studies that involve complex or new measurement systems, inexperienced observers, and numerous conditions and phases. If appropriately conservative methods for obtaining and calculating IOA reveal high levels of agreement early in a study, the number and proportion of sessions in which IOA is assessed may decrease as the study progresses. For instance, IOA assessment might be conducted in each session at the beginning of an analysis, and then reduced to a schedule of once per four or five sessions.

For What Variables Should IOA Be Obtained and Reported?

In general, researchers should obtain and report IOA at the same levels at which they report and discuss the results of their study. For example, a researcher analyzing the relative effects of two treatment conditions on two behaviors of four participants in two settings should report IOA outcomes on both behaviors for each participant separated by treatment condition and setting. This would enable consumers of the research to judge the relative believability of the data within each component of the experiment.

Which Method of Calculating IOA Should Be Used?

More stringent and conservative methods of calculating IOA should be used over methods that are likely to overestimate actual agreement as a result of chance. With event recording data used to evaluate the accuracy of performance, we recommend reporting overall IOA on a trial-by-trial or item-by-item basis, perhaps supplemented with separate IOA calculations for correct responses and incorrect responses. For data obtained by interval or time sampling measurement, we recommend supplementing interval-by-interval IOA with scored-interval IOA or unscored-interval IOA depending on the relative frequency of the behavior. In situations in which the primary observer scores the target behavior as occurring in approximately 30% or fewer intervals, scored-interval IOA provides a conservative supplement to interval-by-interval IOA. Conversely, when the primary observer scores the target behavior as occurring in approximately 70% or more of the intervals, unscored-interval IOA should supplement interval-by-interval IOA. If the rate at which the target behavior occurs changes from very low to very high, or from very high to very low, across conditions or phases of a study, reporting both unscored-interval and scored-interval IOA may be warranted.
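These rules of thumb could be encoded as a small helper that suggests which supplemental statistic to report; the 30% and 70% thresholds are the text's approximate guidelines, and the function name is hypothetical:

```python
def supplemental_ioa_statistic(primary_record, low=30, high=70):
    """Suggest a supplement to interval-by-interval IOA based on how often
    the primary observer scored the behavior as occurring (coded 1)."""
    pct_scored = sum(primary_record) / len(primary_record) * 100
    if pct_scored <= low:
        return "scored-interval IOA"
    if pct_scored >= high:
        return "unscored-interval IOA"
    return "interval-by-interval IOA alone"

# Low-rate record (20% scored) vs. high-rate record (70% scored).
print(supplemental_ioa_statistic([1, 0, 1, 0, 0, 0, 0, 0, 0, 0]))  # scored-interval IOA
print(supplemental_ioa_statistic([1, 1, 1, 0, 1, 1, 0, 1, 1, 0]))  # unscored-interval IOA
```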

If in doubt about which form of IOA to report, calculating and presenting several variations will help readers make their own judgments regarding the believability of the data. However, if the acceptance of the data for interpretation or decision making rests on which formula for calculating IOA is chosen, serious concerns about the data's trustworthiness exist that must be addressed.

What Are Acceptable Levels of IOA?

Carefully collected and conservatively computed IOA assessments increasingly enhance the believability of a data set as agreement approaches 100%. The usual convention in applied behavior analysis is to expect independent observers to achieve a mean of no less than 80% agreement when using observational recording. However, as Kennedy (2005) pointed out, "There is no scientific justification for why 80% is necessary, only a long history of researchers using this percentage as a benchmark of acceptability and being successful in their research activities" (p. 120).

Miller (1997) recommended that IOA should be 90% or greater for an established measure and at least 80% for a new variable. Various factors at work in a given situation may make an 80% or 90% criterion too low or too high. Interobserver agreement of 90% on the number of words contained in student compositions should raise serious questions about the trustworthiness of the data. IOA near 100% is needed to enhance the believability of count data obtained from permanent products. However, some analysts might accept data with a mean IOA as low as 75% for the simultaneous measurement of multiple behaviors by several subjects in a complex environment, especially if it is based on a sufficient number of individual IOA assessments with a small range (e.g., 73 to 80%).

The degree of behavior change revealed by the data should also be considered when determining an acceptable level of interobserver agreement. When behavior change from one condition to another is small, the variability in the data might represent inconsistent observation more than actual change in the behavior. Therefore, the smaller the change in behavior across conditions, the higher the criterion should be for an acceptable IOA percentage (Kennedy, 2005).

How Should IOA Be Reported?

IOA scores can be reported in narrative, table, and graphic form. Whichever format is chosen, it is important to note how, when, and how often interobserver agreement was assessed.

Narrative Description. The most common approach for reporting IOA is a simple narrative description of the mean and range of agreement percentages. For example, Craft, Alber, and Heward (1998) described the methods and results of IOA assessments in a study in which four dependent variables were measured as follows:

Student recruiting and teacher praise. A second observer was present for 12 (30%) of the study's 40


Table 1 Interobserver Agreement Results for Each Dependent Variable by Participant and Experimental Condition

Range and Mean Percentage Interobserver Agreement on Scripted Interaction, Elaborations, and Unscripted Interaction by Child and Condition

Condition (Range and M for each): Baseline, Teaching, New recipient, Script fading, New activities

Type of interaction

Scripted

David 88–100 94 100 100

Jeremiah 89–100 98 100 —a

Ben 80–100 98 90 —a

Elaborations

David 75–100 95 87–88 88 90–100 95

Jeremiah 83–100 95 92–100 96 —a

Ben 75–100 95 95 —a

Unscripted

David 100 100 87–88 88 97–100 98 98–100 99

Jeremiah 100 100 88–100 94 93–100 96 98

Ben 100 100 100 92–93 92 98–100 99

aNo data are available for scripted responses and elaborations in the script-fading condition, because interobserver agreement was obtained after scripts were removed (i.e., because scripts were absent, there could be only unscripted responses).
From "Social Interaction Skills for Children with Autism: A Script-Fading Procedure for Beginning Readers," by P. J. Krantz and L. E. McClannahan, 1998, Journal of Applied Behavior Analysis, 31, p. 196. Copyright 1998 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

sessions. The two observers independently and simultaneously observed the 4 students, recording the number of recruiting responses they emitted and teacher praise they received. Descriptive narrative notes recorded by the observers enabled each recruiting episode to be identified for agreement purposes. Interobserver agreement was calculated on an episode-by-episode basis by dividing the total number of agreements by the total number of agreements plus disagreements and multiplying by 100%. Agreement for frequency of student recruiting ranged across students from 88.2% to 100%; agreement for frequency of recruited teacher praise was 100% for all 4 students; agreement for frequency of nonrecruited teacher praise ranged from 93.3% to 100%.

Academic work completion and accuracy. A second observer independently recorded each student's work completion and accuracy for 10 (25%) sessions. Interobserver agreement for both completion and accuracy on the spelling worksheets was 100% for all 4 students.

Table. An example of reporting interobserver agreement outcomes in table format is shown in Table 1. Krantz and McClannahan (1998) reported the range and mean IOA computed for three types of social interactions by three children across each experimental condition.

Graphic Display. Interobserver agreement can be represented visually by plotting the measures obtained by the secondary observer on a graph of the primary observer's data as shown in Figure 6. Looking at both observers' data on the same graph reveals the extent of agreement between the observers and the existence of observer drift or bias. The absence of observer drift is suggested in the hypothetical study shown in Figure 6 because the secondary observer's measures changed in concert with the primary observer's measures. Although the two observers obtained the same measure on only 2 of the 10 sessions in which IOA was assessed (Sessions 3 and 8), the fact that neither observer consistently reported measures that were higher or lower than the other suggests the absence of observer bias. An absence of bias is usually indicated by a random pattern of overestimation and underestimation. In addition to revealing observer drift and bias, a third way that graphically displaying IOA assessments can enhance the believability of measurement is illustrated by the

[Figure 6: a line graph of Number of Responses per Minute across Sessions (1–20) under alternating Baseline and Response Cost conditions, with the second observer's measures plotted on the primary observer's data path.]

Figure 6 Plotting measures obtained by a second observer on a graph of the primary observer's data provides a visual representation of the extent and nature of interobserver agreement.

data in Figure 6. When the data reported by the primary observer show clear change in the behavior between conditions or phases and all of the measures reported by the secondary observer within each phase fall within the range of observed values obtained by the primary observer, confidence increases that the data represent actual changes in the behavior measured rather than changes in the primary observer's behavior due to drift or extra-experimental contingencies.

Although published research reports in applied behavior analysis seldom include graphic displays of IOA measures, creating and using such displays during a study is a simple and direct way for researchers to detect patterns in the consistency (or inconsistency) with which observers are measuring behavior that might not be as evident in comparing a series of percentages.

Which Approach Should Be Used for Assessing the Quality of Measurement: Accuracy, Reliability, or Interobserver Agreement?

Assessments of the accuracy of measurement, the reliability of measurement, and the extent to which different observers obtain the same measures each provide different indications of data quality. Ultimately, the reason for conducting any type of assessment of measurement quality is to obtain quantitative evidence that can be used for the dual purposes of improving measurement during the course of an investigation and judging and convincing others of the trustworthiness of the data.

After ensuring the validity of what they are measuring and how they are measuring it, applied behavior analysts should choose to assess the accuracy of measurement whenever possible rather than reliability or interobserver agreement. If it can be determined that all measurements in a data set meet an acceptable accuracy criterion, questions regarding the reliability of measurement and interobserver agreement are moot. For data confirmed to be accurate, conducting additional assessments of reliability or IOA is unnecessary.

When assessing the accuracy of measurement is not possible because true values are unavailable, an assessment of reliability provides the next best quality indicator. If natural or contrived permanent products can be archived, applied behavior analysts can assess the reliability of measurement, allowing consumers to know that observers have measured behavior consistently from session to session, condition to condition, and phase to phase.

When true values and permanent product archives are unavailable, interobserver agreement provides a level of believability for the data. Although IOA is not a direct indicator of the validity, accuracy, or reliability of measurement, it has proven to be a valuable and useful research tool in applied behavior analysis. Reporting interobserver agreement has been an expected and required component of published research in applied behavior analysis for several decades. In spite of its limitations, "the homely measures of observer agreement so widely used in the field are exactly relevant" (Baer, 1977, p. 119) to efforts to develop a robust technology of behavior change.

Percentage of agreement, in the interval-recording paradigm, does have a direct and useful meaning: how often do two observers watching one subject, and equipped with the same definitions of behavior, see it occurring or not occurring at the same standard times? The two answers, "They agree about its occurrence X% of the relevant intervals, and about its nonoccurrence Y% of the relevant intervals," are superbly useful. (Baer, 1977, p. 118)

There are no reasons to prevent researchers from using multiple assessment procedures to evaluate the same data set. When time and resources permit, it may even be desirable to include combinations of assessments. Applied behavior analysts can use any possible combination of the assessments (e.g., accuracy plus IOA, reliability plus IOA). In addition, some aspects of the data set could be assessed for accuracy or reliability while other aspects are assessed with IOA. The previous example of accuracy assessment reported by Brown and colleagues (1996) included assessments for accuracy and IOA. Independent observers recorded correct and incorrect student-delayed retells. When IOA was less than 100%, data for that student and session were assessed for accuracy. IOA was used as an assessment to enhance believability, and also as a procedure for selecting data to be assessed for accuracy.


Summary


Indicators of Trustworthy Measurement

1. To be most useful for science, measurement must be valid, accurate, and reliable.

2. Valid measurement in ABA encompasses three equally important elements: (a) measuring directly a socially significant target behavior, (b) measuring a dimension of the target behavior relevant to the question or concern about the behavior, and (c) ensuring that the data are representative of the behavior under conditions and during times most relevant to the reason(s) for measuring it.

3. Measurement is accurate when observed values, the data produced by measuring an event, match the true state, or true values, of the event.

4. Measurement is reliable when it yields the same values across repeated measurement of the same event.

Threats to Measurement Validity

5. Indirect measurement—measuring a behavior different from the behavior of interest—threatens validity because it requires that the researcher or practitioner make inferences about the relationship between the measures obtained and the actual behavior of interest.

6. A researcher who employs indirect measurement must provide evidence that the behavior measured directly reflects, in some reliable and meaningful way, something about the behavior for which the researcher wishes to draw conclusions.

7. Measuring a dimension of the behavior that is ill suited for, or irrelevant to, the reason for measuring the behavior compromises validity.

8. Measurement artifacts are data that give an unwarranted or misleading picture of the behavior because of the way measurement was conducted. Discontinuous measurement, poorly scheduled observations, and insensitive or limiting measurement scales are common causes of measurement artifacts.

Threats to Measurement Accuracy and Reliability

9. Most investigations in applied behavior analysis use human observers to measure behavior, and human error is the biggest threat to the accuracy and reliability of data.

10. Factors that contribute to measurement error include poorly designed measurement systems, inadequate observer training, and expectations about what the data should look like.

11. Observers should receive systematic training and practice with the measurement system and meet predetermined accuracy and reliability criteria before collecting data.

12. Observer drift—unintended changes in the way an observer uses a measurement system over the course of an investigation—can be minimized by booster training sessions and feedback on the accuracy and reliability of measurement.

13. An observer’s expectations or knowledge about predicted ordesired results can impair the accuracy and reliability of data.

14. Observers should not receive feedback about the extent to which their data confirm or run counter to hypothesized results or treatment goals.

15. Measurement bias caused by observer expectations can be avoided by using naive observers.

16. Observer reactivity is measurement error caused by an observer's awareness that others are evaluating the data he reports.

Assessing the Accuracy and Reliability of Behavioral Measurement

17. Researchers and practitioners who assess the accuracy of their data can (a) determine early in an analysis whether the data are usable for making experimental or treatment decisions, (b) discover and correct measurement errors, (c) detect consistent patterns of measurement error that can lead to the overall improvement or calibration of the measurement system, and (d) communicate to others the relative trustworthiness of the data.

18. Assessing the accuracy of measurement is a straightforward process of calculating the correspondence of each measure, or datum, assessed to its true value.

19. True values for many behaviors of interest to applied behavior analysts are evident and universally accepted or can be established conditionally by local context. True values for some behaviors (e.g., cooperative play) are difficult to establish because the process for determining a true value must be different from the measurement procedures used to obtain the data one wishes to compare to the true value.

20. Assessing the extent to which observers are reliably applying a valid and accurate measurement system provides a useful indicator of the overall trustworthiness of the data.

21. Assessing the reliability of measurement requires a natural or contrived permanent product so the observer can remeasure the same behavioral events.

22. Although high reliability does not confirm high accuracy, discovering a low level of reliability signals that the data are suspect enough to be disregarded until problems in the measurement system can be determined and repaired.

Using Interobserver Agreement to Assess Behavioral Measurement

23. The most commonly used indicator of measurement quality in ABA is interobserver agreement (IOA), the degree to which two or more independent observers report the same observed values after measuring the same events.

24. Researchers and practitioners use measures of IOA to (a) determine the competence of new observers, (b) detect observer drift, (c) judge whether the definition of the target behavior is clear and the system not too difficult to use, and (d) convince others of the relative believability of the data.

25. Measuring IOA requires that two or more observers (a) use the same observation code and measurement system, (b) observe and measure the same participant(s) and events, and (c) observe and record the behavior independent of influence by other observers.

26. There are numerous techniques for calculating IOA, each of which provides a somewhat different view of the extent and nature of agreement and disagreement between observers.

27. Percentage of agreement between observers is the most common convention for reporting IOA in ABA.

28. IOA for data obtained by event recording can be calculated by comparing (a) the total count recorded by each observer per measurement period, (b) the counts tallied by each observer during each of a series of smaller intervals of time within the measurement period, or (c) each observer's count of 1 or 0 on a trial-by-trial basis.

29. Total count IOA is the simplest and crudest indicator of IOA for event recording data, and exact count-per-interval IOA is the most stringent for most data sets obtained by event recording.

30. IOA for data obtained by timing duration, response latency, or interresponse time (IRT) is calculated in essentially the same ways as for event recording data.

31. Total duration IOA is computed by dividing the shorter of the two durations reported by the observers by the longer duration. Mean duration-per-occurrence IOA is a more conservative and usually more meaningful assessment of IOA for total duration data and should always be calculated for duration-per-occurrence data.

32. Three techniques commonly used to calculate IOA for interval data are interval-by-interval IOA, scored-interval IOA, and unscored-interval IOA.

33. Because it is subject to random or accidental agreement between observers, interval-by-interval IOA is likely to overestimate the degree of agreement between observers measuring behaviors that occur at very low or very high rates.

34. Scored-interval IOA is recommended for behaviors thatoccur at relatively low frequencies; unscored-interval IOAis recommended for behaviors that occur at relatively highfrequencies.
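The three interval-based indices in items 32 to 34 can be sketched as follows, with 1 marking an interval in which the behavior was scored and 0 an interval in which it was not. The records are hypothetical:

```python
# Illustrative sketch of the three interval-based IOA indices.
# 1 = behavior scored in the interval; 0 = not scored. Data are invented.

def interval_by_interval_ioa(rec_a, rec_b):
    """Intervals of agreement (both 1 or both 0) / total intervals."""
    agree = sum(1 for a, b in zip(rec_a, rec_b) if a == b)
    return agree / len(rec_a) * 100

def scored_interval_ioa(rec_a, rec_b):
    """Agreements among intervals scored by at least one observer.
    Recommended for low-rate behaviors."""
    scored = [(a, b) for a, b in zip(rec_a, rec_b) if a == 1 or b == 1]
    agree = sum(1 for a, b in scored if a == b)
    return agree / len(scored) * 100

def unscored_interval_ioa(rec_a, rec_b):
    """Agreements among intervals left unscored by at least one observer.
    Recommended for high-rate behaviors."""
    unscored = [(a, b) for a, b in zip(rec_a, rec_b) if a == 0 or b == 0]
    agree = sum(1 for a, b in unscored if a == b)
    return agree / len(unscored) * 100

obs_1 = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]
obs_2 = [0, 1, 0, 1, 0, 0, 0, 0, 1, 0]

print(interval_by_interval_ioa(obs_1, obs_2))  # 8/10 -> 80.0
print(scored_interval_ioa(obs_1, obs_2))       # 2/4  -> 50.0
print(unscored_interval_ioa(obs_1, obs_2))     # 6/8  -> 75.0
```

For this low-rate record, interval-by-interval IOA (80%) overstates agreement relative to scored-interval IOA (50%), illustrating the caution in item 33.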

35. IOA assessments should occur during each condition and phase of a study and be distributed across days of the week, times of day, settings, and observers.

36. Researchers should obtain and report IOA at the same levels at which they report and discuss the results of their study.

37. More stringent and conservative IOA methods should be used over methods that may overestimate agreement as a result of chance.

38. The convention for acceptable IOA has been a minimum of 80%, but there can be no set criterion. The nature of the behavior being measured and the degree of behavior change revealed by the data must be considered when determining an acceptable level of IOA.

39. IOA scores can be reported in narrative, table, and graphic form.

40. Researchers can use multiple indices to assess the quality of their data (e.g., accuracy plus IOA, reliability plus IOA).

Improving and Assessing the Quality of Behavioral Measurement


Constructing and Interpreting Graphic Displays of Behavioral Data

Key Terms

bar graph, cumulative record, cumulative recorder, data, data path, dependent variable, graph, independent variable, level, line graph, local response rate, overall response rate, scatterplot, semilogarithmic chart, split-middle line of progress, Standard Celeration Chart, trend, variability, visual analysis

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List©, Third Edition

Content Area 7: Displaying and Interpreting Behavioral Data

7-1 Select a data display that effectively communicates quantitative relations.

7-2 Use equal-interval graphs.

7-3 Use Standard Celeration Charts (for BCBA only—excluded for BCABA).

7-4 Use a cumulative record to display data.

7-5 Use data displays that highlight patterns of behavior (e.g., scatterplot).

7-6 Interpret and base decision making on data displayed in various formats.

© 2006 The Behavior Analyst Certification Board, Inc.® (BACB®) all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

From Chapter 6 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


Applied behavior analysts document and quantify behavior change by direct and repeated measurement of behavior. The product of these measurements, called data, is the medium with which behavior analysts work. In everyday usage the word data refers to a wide variety of often imprecise and subjective information offered as facts. In scientific usage the word data means "the results of measurement, usually in quantified form" (Johnston & Pennypacker, 1993a, p. 365).1

Because behavior change is a dynamic and ongoing process, the behavior analyst—the practitioner and the researcher—must maintain direct and continuous contact with the behavior under investigation. The data obtained throughout a behavior change program or a research study are the means for that contact; they form the empirical basis for every important decision: to continue with the present procedure, to try a different intervention, or to reinstitute a previous condition. But making valid and reliable decisions from the raw data themselves (a series of numbers) is difficult, if not impossible, and inefficient. Inspecting a long row of numbers will reveal only very large changes in performance, or no change at all, and important features of behavior change can easily be overlooked.

Consider the three sets of data that follow; each consists of a series of numbers representing consecutive measures of some target behavior. The first data set shows the results of successive measures of the number of responses emitted under two different conditions (A and B):

Condition A: 120, 125, 115, 130, 126, 130, 123, 120, 120, 127
Condition B: 114, 110, 115, 121, 110, 116, 107, 120, 115, 112

Here are some data showing consecutive measures of the percentage of correct responses:

80, 82, 78, 85, 80, 90, 85, 85, 90, 92

The third data set consists of measures of responses per minute of a target behavior obtained on successive school days:

65, 72, 63, 60, 55, 68, 71, 65, 65, 62, 70, 75, 79, 63, 60

What do these numbers tell you? What conclusions can you draw from each data set? How long did it take you to reach your conclusions? How sure of them are you? What if the data sets contained many more measures to interpret? How likely is it that others interested in the behavior change program or research study would reach the same conclusions? How could these data be directly and effectively communicated to others?

1Although often used as a singular construction (e.g., "The data shows that . . ."), data is a plural noun of Latin origin and is correctly used with plural verbs (e.g., "These data are . . .").

Graphs—relatively simple formats for visually displaying relationships among and between a series of measurements and relevant variables—help people "make sense" of quantitative information. Graphs are the major device with which applied behavior analysts organize, store, interpret, and communicate the results of their work. Figure 1 includes a graph for each of the three data sets presented previously. The top graph reveals a lower level of responding during Condition B than during Condition A. The middle graph clearly shows an upward trend over time in the response measure. A variable pattern of responding, characterized by an increasing trend during the first part of each week and a decreasing trend toward the end of each week, is evident in the bottom graph. The graphs in Figure 1 illustrate three fundamental properties of behavior change over time—level, trend, and variability—each of which will be discussed in detail later in the chapter. The graphic display of behavioral data has proven an effective means of detecting, analyzing, and communicating these aspects of behavior change.

Figure 1 Graphic displays of three sets of hypothetical data illustrating changes in the level of responding across conditions (top), trend (middle), and cyclical variability (bottom).
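The same three properties can also be summarized numerically. The sketch below applies simple summary statistics (our choice, not the visual analysis method the text describes) to the three hypothetical data sets presented earlier:

```python
# Quantifying level, trend, and variability for the three hypothetical
# data sets. The statistics chosen here are illustrative conventions.
cond_a = [120, 125, 115, 130, 126, 130, 123, 120, 120, 127]
cond_b = [114, 110, 115, 121, 110, 116, 107, 120, 115, 112]
pct_correct = [80, 82, 78, 85, 80, 90, 85, 85, 90, 92]
rate = [65, 72, 63, 60, 55, 68, 71, 65, 65, 62, 70, 75, 79, 63, 60]

def mean(xs):
    return sum(xs) / len(xs)

# Level: mean responding is lower in Condition B than in Condition A.
print(mean(cond_a), mean(cond_b))   # 123.6 vs. 114.0

# Trend: the mean of the second half of the series exceeds the mean of
# the first half, consistent with the upward trend in the middle graph.
first, second = pct_correct[:5], pct_correct[5:]
print(mean(second) - mean(first))   # +7.4 percentage points

# Variability: the range of the school-day data.
print(max(rate) - min(rate))        # 79 - 55 = 24
```

Numbers like these supplement, but do not replace, inspection of the graphed data; the weekly cyclical pattern in the third data set, for example, is visible on a graph but hidden by a single range statistic.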

Purpose and Benefits of Graphic Displays of Behavioral Data

Numerous authors have discussed the benefits of using graphs as the primary vehicle for interpreting and communicating the results of behavioral treatments and research (e.g., Baer, 1977; Johnston & Pennypacker, 1993a; Michael, 1974; Parsonson, 2003; Parsonson & Baer, 1986, 1992; Sidman, 1960). Parsonson and Baer (1978) said it best:

In essence, the function of the graph is to communicate, in a readily assimilable and attractive manner, descriptions and summaries of data that enable rapid and accurate analysis of the facts. (p. 134)

There are at least six benefits of graphic display and visual analysis of behavioral data. First, plotting each measure of behavior on a graph right after the observational period provides the practitioner or researcher with immediate access to an ongoing visual record of the participant's behavior. Instead of waiting until the investigation or teaching program is completed, behavior change is evaluated continually, allowing treatment and experimental decisions to be responsive to the participant's performance. Graphs provide the "close, continual contact with relevant outcome data" that can lead to "measurably superior instruction" (Bushell & Baer, 1994, p. 9).

Second, direct and continual contact with the data in a readily analyzable format enables the researcher as well as the practitioner to explore interesting variations in behavior as they occur. Some of the most important research findings about behavior have been made because scientists followed the leads suggested by their data instead of following predetermined experimental plans (Sidman, 1960, 1994; Skinner, 1956).

Third, graphs, like statistical analyses of behavior change, are judgmental aids: devices that help the practitioner or experimenter interpret the results of a study or treatment (Michael, 1974). In contrast to the statistical tests of inference used in group comparison research, however, visual analysis of graphed data takes less time, is relatively easy to learn, imposes no predetermined or arbitrary level for determining the significance of behavior change, and does not require the data to conform to certain mathematical properties or statistical assumptions to be analyzed.

2A comparison of the visual analysis of graphed data and inferences based on statistical tests of significance is presented in the chapter entitled "Planning and Evaluating Applied Behavior Analysis Research."

3Graphs, like statistics, can also be manipulated to make certain interpretations of the data more or less likely. Unlike statistics, however, most forms of graphic displays used in behavior analysis provide direct access to the original data, which allows the inquisitive or doubtful reader to regraph (i.e., manipulate) the data.

Fourth, visual analysis is a conservative method for determining the significance of behavior change. A behavior change deemed statistically significant according to a test of mathematical probabilities may not look very impressive when the data are plotted on a graph that reveals the range, variability, trends, and overlaps in the data within and across experimental or treatment conditions. Interventions that produce only weak or unstable effects are not likely to be reported as important findings in applied behavior analysis. Rather, weak or unstable effects are likely to lead to further experimentation in an effort to discover controlling variables that produce meaningful behavior change in a reliable and sustained manner. This screening out of weak variables in favor of robust interventions has enabled applied behavior analysts to develop a useful technology of behavior change (Baer, 1977).2

Fifth, graphs enable and encourage independent judgments and interpretations of the meaning and significance of behavior change. Instead of having to rely on conclusions based on statistical manipulations of the data or on an author's interpretations, readers of published reports of applied behavior analysis can (and should) conduct their own visual analysis of the data to form independent conclusions.3

Sixth, in addition to their primary purpose of displaying relationships between behavior change (or lack thereof) and variables manipulated by the practitioner or researcher, graphs can also be effective sources of feedback to the people whose behavior they represent (e.g., DeVries, Burnette, & Redmon, 1991; Stack & Milan, 1993). Graphing one's own performance has also been demonstrated to be an effective intervention for a variety of academic and behavior change objectives (e.g., Fink & Carnine, 1975; Winett, Neale, & Grier, 1979).

Types of Graphs Used in Applied Behavior Analysis

Visual formats for the graphic display of data most often used in applied behavior analysis are line graphs, bar graphs, cumulative records, semilogarithmic charts, and scatterplots.


Line Graphs

The simple line graph, or frequency polygon, is the most common graphic format for displaying data in applied behavior analysis. The line graph is based on a Cartesian plane, a two-dimensional area formed by the intersection of two perpendicular lines. Any point within the plane represents a specific relationship between the two dimensions described by the intersecting lines. In applied behavior analysis, each point on a line graph shows the level of some quantifiable dimension of the target behavior (i.e., the dependent variable) in relation to a specified point in time and/or environmental condition (i.e., the independent variable) in effect when the measure was taken. Comparing points on the graph reveals the presence and extent of changes in level, trend, and/or variability within and across conditions.

Parts of a Basic Line Graph

Although graphs vary considerably in their final appearance, all properly constructed line graphs share certain elements. The basic parts of a simple line graph are shown in Figure 2 and described in the following sections.

1. Horizontal Axis. The horizontal axis, also called the x axis, or abscissa, is a straight horizontal line that most often represents the passage of time and the presence, absence, and/or value of the independent variable. A defining characteristic of applied behavior analysis is the repeated measurement of behavior across time. Time is also the unavoidable dimension in which all manipulations of the independent variable occur. On most line graphs the passage of time is marked in equal intervals on the horizontal axis. In Figure 2 successive 10-minute sessions during which the number of property destruction responses (including attempts) was measured are marked on the horizontal axis. In this study, 8 to 10 sessions were conducted per day (Fisher, Lindauer, Alterson, & Thompson, 1998).

The horizontal axis on some graphs represents different values of the independent variable instead of time. For example, Lalli, Mace, Livezey, and Kates (1998) scaled the horizontal axis on one graph in their study from less than 0.5 meters to 9.0 meters to show how the occurrence of self-injurious behavior by a girl with severe mental retardation decreased as the distance between the therapist and the girl increased.

2. Vertical Axis. The vertical axis, also called the y axis, or ordinate, is a vertical line drawn upward from the left-hand end of the horizontal axis. The vertical axis most often represents a range of values of the dependent variable, which in applied behavior analysis is always some quantifiable dimension of behavior. The intersection of the horizontal and vertical axes is called the origin and usually, though not necessarily, represents the zero value of the dependent variable. Each successive point upward on the vertical axis represents a greater value of the dependent variable. The most common practice is to mark the vertical axis with an equal-interval scale. On an equal-interval vertical axis equal distances on the axis represent equal amounts of behavior. The vertical axis in Figure 2 represents the number of property destruction responses (and attempts) per minute with a range of 0 to 4 responses per minute.

Figure 2 The major parts of a simple line graph: (1) horizontal axis, (2) vertical axis, (3) condition change lines, (4) condition labels, (5) data points, (6) data path, and (7) figure caption. The graph shows rates of property destruction (plus attempts) during baseline and the blocking condition for Milo. From "Assessment and Treatment of Destructive Behavior Maintained by Stereotypic Object Manipulation" by W. W. Fisher, S. E. Lindauer, C. J. Alterson, and R. H. Thompson, 1998, Journal of Applied Behavior Analysis, 31, p. 522. Copyright 1998 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.


3. Condition Change Lines. Condition change lines are vertical lines drawn upward from the horizontal axis to show points in time at which changes in the independent variable occurred. The condition change lines in Figure 2 coincide with the introduction or withdrawal of an intervention the researchers called blocking. Condition change lines can be drawn as solid or dashed lines. When relatively minor changes occur within an overall condition, dashed vertical lines should be used to distinguish minor changes from major changes in conditions, which are shown by solid lines (see Figure 18).

4. Condition Labels. Condition labels, in the form of single words or brief descriptive phrases, are printed along the top of the graph and parallel to the horizontal axis. These labels identify the experimental conditions (i.e., the presence, absence, or some value of the independent variable) that are in effect during each phase of the study.4

5. Data Points. Each data point on a graph represents two facts: (a) a quantifiable measure of the target behavior recorded during a given observation period and (b) the time and/or experimental conditions under which that particular measurement was conducted. Using two data points from Figure 2 as examples, we can see that during Session 5, the last session of the first baseline phase, property destruction and attempted property destruction responses occurred at a rate of approximately 2 responses per minute; and in Session 9, the fourth session of the first blocking phase, 0 instances of the target behavior were recorded.

6. Data Path. Connecting successive data points within a given condition with a straight line creates a data path. The data path represents the level and trend of behavior between successive data points, and it is a primary focus of attention in the interpretation and analysis of graphed data. Because behavior is rarely observed and recorded continuously in applied behavior analysis, the data path represents an estimate of the actual course taken by the behavior during the time elapsed between the two measures. The more measurements and resultant data points per unit of time (given an accurate observation and recording system), the more confidence one can place in the story told by the data path.

4The terms condition and phase are related but not synonymous. Properly used, condition indicates the environmental arrangements in effect at any given time; phase refers to a period of time within a study or behavior-change program. For example, the study shown in Figure 2 consisted of two conditions (baseline and blocking) and six phases.

7. Figure Caption. The figure caption is a concise statement that, in combination with the axis and condition labels, provides the reader with sufficient information to identify the independent and dependent variables. The figure caption should explain any symbols or observed but unplanned events that may have affected the dependent variable (see Figure 6) and point out and clarify any potentially confusing features of the graph (see Figure 7).
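The seven parts described above can be assembled programmatically. The sketch below uses matplotlib (assumed available) and invented baseline/blocking data in the spirit of Figure 2; it is an illustration of the conventions, not a reproduction of the published figure:

```python
# Sketch of the parts of a simple line graph, with hypothetical data.
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical responses-per-minute data for two successive conditions.
baseline = [2.5, 3.0, 2.0, 2.8, 2.2]   # sessions 1-5
blocking = [1.0, 0.5, 0.2, 0.0, 0.1]   # sessions 6-10

fig, ax = plt.subplots()

# (5) data points and (6) data path: plotted separately per condition so
# the path is not connected across the condition change line
ax.plot(range(1, 6), baseline, marker="o", color="k")
ax.plot(range(6, 11), blocking, marker="o", color="k")

# (3) condition change line between sessions 5 and 6
ax.axvline(x=5.5, color="k", linestyle="--")

# (4) condition labels printed along the top, parallel to the x axis
ax.text(3, 3.6, "Baseline", ha="center")
ax.text(8, 3.6, "Blocking", ha="center")

# (1) horizontal axis and (2) vertical axis labels and scale
ax.set_xlabel("Sessions")
ax.set_ylabel("Responses per Minute")
ax.set_ylim(0, 4)

# (7) the figure caption would be printed beneath the saved graph
fig.savefig("simple_line_graph.png")
```

Breaking the data path at the condition change line, as above, follows the convention that a path represents behavior within, not across, conditions.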

Variations of the Simple Line Graph: Multiple Data Paths

The line graph is a remarkably versatile vehicle for displaying behavior change. Whereas Figure 2 is an example of the line graph in its simplest form (one data path showing a series of successive measures of behavior across time and experimental conditions), by the addition of multiple data paths the line graph can display more complex behavior–environment relations. Graphs with multiple data paths are used frequently in applied behavior analysis to show (a) two or more dimensions of the same behavior, (b) two or more different behaviors, (c) the same behavior under different and alternating experimental conditions, (d) changes in target behavior relative to the changing values of an independent variable, and (e) the behavior of two or more participants.

Two or More Dimensions of the Same Behavior. Showing multiple dimensions of the dependent variable on the same graph enables visual analysis of the absolute and relative effects of the independent variable on those dimensions. Figure 3 shows the results of a study of the effects of training three members of a women's college basketball team proper foul shooting form (Kladopoulos & McComas, 2001). The data path created by connecting the open triangle data points shows changes in the percentage of foul shots executed with the proper form, whereas the data path connecting the solid data points reveals the percentage of foul shots made. Had the experimenters recorded and graphed only the players' foul shooting form, they would not have known whether any improvements in the target behavior on which training was focused (correct foul shooting form) coincided with improvements in the behavior by which the social significance of the study would ultimately be judged—foul shooting accuracy. By measuring and plotting both form and outcome on the same graph, the experimenters were able to analyze the effects of their treatment procedures on two critical dimensions of the dependent variable.


Figure 3 Graph using multiple data paths to show the effects of the independent variable (Form Training) on two dimensions (accuracy and topography) of the target behavior. Percentage of shots made (filled circles) and percentage of shots taken with correct form (open triangles) across sessions for each participant. From "The Effects of Form Training on Foul-Shooting Performance in Members of a Women's College Basketball Team" by C. N. Kladopoulos and J. J. McComas, 2001, Journal of Applied Behavior Analysis, 34, p. 331. Copyright 2001 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.

Two or More Different Behaviors. Multiple data paths are also used to facilitate the simultaneous comparison of the effects of experimental manipulations on two or more different behaviors. Determining the covariation of two behaviors as a function of changes in the independent variable is accomplished more easily if both can be displayed on the same set of axes. Figure 4 shows the percentage of intervals in which a boy with autism exhibited stereotypy (e.g., repetitive body movements, rocking) across three conditions and the number of times that he raised his hand for attention (in the attention condition), signed for a break (in the demand condition), and signed for access to preferred tangible stimuli (in the no-attention condition) in a study investigating a strategy called functional communication training (Kennedy, Meyer, Knowles, & Shukla, 2000).5 By recording and graphing both stereotypic responding and appropriate behavior, the investigators were able to determine whether increases in alternative communication responses (raising his hand and signing) were accompanied by reductions in stereotypy. Note that a second vertical axis is used on Figure 4 to show the proper dimensional units and scaling for signing frequency. Because of the differences in scale, readers of dual-vertical axis graphs must view them with care, particularly when assessing the magnitude of behavior change.

5Functional communication training is described in the chapter entitled "Antecedent Interventions."
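A dual-vertical-axis graph of the kind just described can be sketched with matplotlib's twin-axis facility (matplotlib is assumed available; the data below are invented for illustration, not taken from the study):

```python
# Sketch of a dual-vertical-axis graph: percentage of intervals of one
# behavior on the left y axis, frequency of a second behavior on the right.
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

sessions = list(range(1, 9))
stereotypy_pct = [90, 85, 80, 60, 40, 25, 15, 10]   # left axis (0-100%)
signs_per_session = [0, 0, 1, 5, 10, 14, 18, 20]    # right axis (count)

fig, left_ax = plt.subplots()
right_ax = left_ax.twinx()  # second y axis sharing the same x axis

left_ax.plot(sessions, stereotypy_pct, marker="o", color="k")
right_ax.plot(sessions, signs_per_session, marker="s", color="gray")

left_ax.set_xlabel("Sessions")
left_ax.set_ylabel("Percentage of Intervals of Stereotypy")
left_ax.set_ylim(0, 100)
right_ax.set_ylabel("Signing Frequency")
right_ax.set_ylim(0, 30)

fig.savefig("dual_axis_graph.png")
```

Because the two axes carry different dimensional units and scales, the visual slopes of the two paths are not directly comparable, which is exactly the caution raised in the preceding paragraph.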

Measures of the Same Behavior under Different Conditions. Multiple data paths are also used to represent measures of the same behavior taken under different experimental conditions that alternate throughout an experimental phase. Figure 5 shows the number of self-injurious responses per minute by a 6-year-old girl with developmental disabilities under four different conditions (Moore, Mueller, Dubard, Roberts, & Sterling-Turner, 2002). Graphing an individual's behavior under multiple conditions on the same set of axes allows direct visual comparisons of differences in absolute levels of responding at any given time as well as relative changes in performance over time.

Figure 4 Graph with multiple data paths showing two different behaviors by one participant during baseline and training across three different conditions. Note the different dimensions and scaling of the dual vertical axes. Occurrence of stereotypy for James across attention, demand, and no-attention conditions; data are arrayed as the percentage of intervals of stereotypy on the left y axis and number of signs per session on the right y axis. From "Analyzing the Multiple Functions of Stereotypical Behavior for Students with Autism: Implications for Assessment and Treatment" by C. H. Kennedy, K. A. Meyer, T. Knowles, and S. Shukla, 2000, Journal of Applied Behavior Analysis, 33, p. 565. Copyright 2000 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.

Changing Values of an Independent Variable. Multiple data path graphs are also used to show changes in the target behavior (shown on one data path) relative to changing values of the independent variable (represented by a second data path). In each of the two graphs in Figure 6, one data path shows the duration of problem behavior (plotted against the left-hand y axis scaled in seconds) relative to changes in noise level, which are depicted by the second data path (plotted against the right-hand y axis scaled in decibels) (McCord, Iwata, Galensky, Ellingson, & Thomson, 2001).

The Same Behavior of Two or More Participants. Multiple data paths are sometimes used to show the behavior of two or more participants on the same graph. Depending on the levels and variability of the data encompassed by each data path, a maximum of four different data paths can be displayed effectively on one set of axes. However, there is no rule; Didden, Prinsen, and Sigafoos displayed five data paths in a single display (2000, p. 319). If too many data paths are displayed on the same graph, the benefits of making additional comparisons may be outweighed by the distraction of too much visual "noise." When more than four data paths must be included on the same graph, other methods of display can be incorporated.6 For example, Gutowski and Stromer (2003) effectively used striped and shaded bars in combination with conventional data paths to display the number of names spoken and the percentage of correct matching-to-sample responses by individuals with mental retardation (see Figure 7).

6A superb example of combining visual display techniques is Charles Minard's use of space-time-story graphics to illustrate the interrelations of six variables during Napoleon's ill-fated Russian campaign of 1812–1813 (see Tufte, 1983, p. 41). Tufte called Minard's graph perhaps "the best statistical graphic ever drawn" (p. 40).

Figure 5 Graph with multiple data paths showing the same behavior measured under four different conditions. Rate of self-injurious behavior during the initial functional analysis. From "The Influence of Therapist Attention on Self-Injury during a Tangible Condition" by J. W. Moore, M. M. Mueller, M. Dubard, D. S. Roberts, and H. E. Sterling-Turner, 2002, Journal of Applied Behavior Analysis, 35, p. 285. Copyright 2002 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.

Figure 6 Graph using two data paths to show the duration of problem behavior (dependent variable) by two adults with severe or profound mental retardation as noise level was increased gradually (independent variable). Sessions marked A and B near the end of treatment indicate two generalization probes in the natural environment; F indicates a follow-up probe. From "Functional Analysis and Treatment of Problem Behavior Evoked by Noise" by B. E. McCord, B. A. Iwata, T. L. Galensky, S. A. Ellingson, and R. J. Thomson, 2001, Journal of Applied Behavior Analysis, 34, p. 457. Copyright 2001 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.

Bar Graphs

The bar graph, or histogram, is a simple and versatile format for graphically summarizing behavioral data. Like the line graph, the bar graph is based on the Cartesian plane and shares most of the line graph's features with one primary difference: the bar graph does not have distinct data points representing successive response measures through time. Bar graphs can take a wide variety of forms to allow quick and easy comparisons of performance across participants and/or conditions.

Bar graphs serve two major functions in applied behavior analysis. First, bar graphs are used for displaying and comparing discrete sets of data that are not related to one another by a common underlying dimension by which the horizontal axis can be scaled. For example, Gottschalk, Libby, and Graff (2000), in a study analyzing the effects of establishing operations on preference assessments, used bar graphs to show the percentage of trials in which four children reached toward and picked up different items (see Figure 8).

Figure 7 Graph using a combination of bars and data points to display changes in two response classes to two types of matching stimuli under different prompting conditions. Open circles and solid squares reflect percentages of correct matching; striped bars and shaded bars reflect the number of names spoken on trials with two-name and two-picture samples, respectively. From "Delayed Matching to Two-Picture Samples by Individuals With and Without Disabilities: An Analysis of the Role of Naming" by S. J. Gutowski and Robert Stromer, 2003, Journal of Applied Behavior Analysis, 36, p. 498. Copyright 2003 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Another common use of bar graphs is to give a visualsummary of the performance of a participant or groupof participants during the different conditions of an ex-periment. For example, Figure 9 shows the mean per-centage of spelling worksheet items completed and themean percentage of completed items that were done cor-rectly by four students during baseline and combinedgeneralization programming and maintenance conditions

that followed training each child how to recruit teacher attention while they were working (Craft, Alber, & Heward, 1998).

Bar graphs sacrifice the presentation of the variability and trends in behavior (which are apparent in a line graph) in exchange for the efficiency of summarizing and comparing large amounts of data in a simple, easy-to-interpret format. They should be viewed with the understanding that they may mask important variability in the data. Although bar graphs are typically used to present a measure of central tendency, such as the mean or median score for each condition, the range of measures repre-

Constructing and Interpreting Graphic Displays of Behavioral Data


Figure 1. Percentage of approach responses across conditions for Ethan, Daniel, Mark, and Ashley.

Figure 8 Bar graph used to summarize and display results of measurements taken under discrete conditions lacking an underlying dimension by which the horizontal axis could be scaled (e.g., time, duration of stimulus presentations).

From “The Effects of Establishing Operations on Preference Assessment Outcomes” by J. M. Gottschalk, M. E. Libby, and R. B. Graff, 2000, Journal of Applied Behavior Analysis, 33, p. 87. Copyright 2000 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

sented by the mean can also be incorporated into the display (e.g., see Figure 5 in Lerman, Kelley, Vorndran, Kuhn, & LaRue, 2002).

Cumulative Records

The cumulative record (or graph) was developed by Skinner as the primary means of data collection in the experimental analysis of behavior. A device called the cumulative recorder enables a subject to actually draw its own graph (see Figure 10). In a book cataloging 6 years of experimental research on schedules of reinforcement, Ferster and Skinner (1957) described cumulative records in the following manner:

A graph showing the number of responses on the ordinate against time on the abscissa has proved to be the most convenient representation of the behavior observed in this research. Fortunately, such a “cumulative” record may be made directly at the time of the experiment. The record is raw data, but it also permits a direct inspection of rate and changes in rate not possible when the behavior is observed directly. . . . Each time the bird responds, the pen moves one step across the paper. At the same time, the paper feeds continuously. If the bird does not respond at all, a horizontal line is drawn in the direction of the paper feed. The faster the bird pecks, the steeper the line. (p. 23)

When cumulative records are plotted by hand or created with a computer graphing program, which is most


[Bar graphs for Latasha, Olivia, Octavian, and Kenny showing percent completion and percent accuracy during each experimental condition.]

Figure 4. Mean percentage of spelling worksheet items completed and mean percentage of accuracy by each student during baseline and combined generalization programming and maintenance conditions. Numbers in parentheses show total number of sessions per condition.

Figure 9 Bar graph comparing mean levels for two dimensions of participants’ performance between experimental conditions.

From “Teaching Elementary Students with Developmental Disabilities to Recruit Teacher Attention in a General Education Classroom: Effects on Teacher Praise and Academic Productivity” by M. A. Craft, S. R. Alber, and W. L. Heward, 1998, Journal of Applied Behavior Analysis, 31, p. 410. Copyright 1998 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Figure 10 Diagram of a cumulative recorder.

From Schedules of Reinforcement, pp. 24–25, by C. B. Ferster and B. F. Skinner, 1957, Upper Saddle River, NJ: Prentice Hall. Copyright 1957 by Prentice Hall. Used by permission.

often the case in applied behavior analysis, the number of responses recorded during each observation period is added (thus the term cumulative) to the total number of responses recorded during all previous observation periods. In a cumulative record the y axis value of any data point represents the total number of responses recorded since the beginning of data collection. The exception occurs when the total number of responses has exceeded the upper limit of the y axis scale, in which case the data path on a cumulative curve resets to the 0 value of the

y axis and begins its ascent again. Cumulative records are almost always used with frequency data, although other dimensions of behavior, such as duration and latency, can be displayed cumulatively.
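The plotting arithmetic just described can be sketched in a few lines of Python. The function below is a hypothetical illustration, not part of any study discussed here; the reset behavior follows the convention described above for data paths that exceed the upper limit of the y axis scale.

```python
def cumulative_record(session_counts, y_max=None):
    """Convert per-session response counts into cumulative y values.

    If the running total exceeds y_max (the top of the y axis), the
    data path resets to 0 and begins its ascent again, as on a
    cumulative recorder whose pen resets at the top of the paper.
    """
    points = []
    total = 0
    for count in session_counts:
        total += count
        if y_max is not None and total > y_max:
            total = count  # reset: pen returns to 0, then steps up again
        points.append(total)
    return points

# Five sessions with 3, 0, 4, 2, and 5 responses:
print(cumulative_record([3, 0, 4, 2, 5]))            # [3, 3, 7, 9, 14]
print(cumulative_record([3, 0, 4, 2, 5], y_max=10))  # [3, 3, 7, 9, 5]
```

Note that a session with zero responses (the second value above) produces a flat segment, the cumulative analog of the horizontal line drawn by the recorder when no responding occurs.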

Figure 11 is an example of a cumulative record from the applied behavior analysis literature (Neef, Iwata, & Page, 1980). It shows the cumulative number of spelling words mastered by a person with mental retardation during baseline and two training conditions. The graph shows that the individual mastered a total of 1 word during the 12 sessions of baseline (social praise for correct spelling responses and rewriting incorrectly spelled words three times), a total of 22 words under the interspersal condition (baseline procedures plus the presentation of a previously learned word after each unknown word), and a total of 11 words under the high-density reinforcement condition (baseline procedures plus social praise given after each trial for task-related behaviors such as paying attention and writing neatly).

In addition to the total number of responses recorded at any given point in time, cumulative records show the overall and local response rates. Rate is the number of responses emitted per unit of time, usually reported as responses per minute in applied behavior analysis. An overall response rate is the average rate of response over a given time period, such as during a specific session, phase, or condition of an experiment. Overall rates are


Figure 11 Cumulative graph of number of spelling words learned by a man with mental retardation during baseline, interspersal training, and high-density reinforcement training. Points a–e have been added to illustrate the differences between overall and local response rates.

From “The Effects of Interspersal Training Versus High Density Reinforcement on Spelling Acquisition and Retention” by N. A. Neef, B. A. Iwata, and T. J. Page, 1980, Journal of Applied Behavior Analysis, 13, p. 156. Copyright 1980 by the Society for the Experimental Analysis of Behavior, Inc. Adapted by permission.

calculated by dividing the total number of responses recorded during the period by the number of observation periods indicated on the horizontal axis. In Figure 11 the overall response rates are 0.46 and 0.23 words mastered per session for the interspersal and high-density reinforcement conditions, respectively.7

On a cumulative record, the steeper the slope, the higher the response rate. To produce a visual representation of an overall rate on a cumulative graph, the first and last data points of a given series of observations should be connected with a straight line. A straight line connecting Points a and c in Figure 11 would represent the learner’s overall rate of mastering spelling words during the interspersal condition. A straight line connecting Points a and e represents the overall rate during the high-density reinforcement condition. Relative rates of response can be determined by visually comparing one slope to another; the steeper the slope, the higher the rate of response. A visual comparison of Slopes a–c and a–e shows that the interspersal condition produced the higher overall response rate.

Response rates often fluctuate within a given period. The term local response rate refers to the rate of response during periods of time smaller than that for which an overall rate has been given. Over the last four sessions of the study shown in Figure 11, the learner exhibited a local rate of responding during interspersal

7Technically, Figure 11 does not represent true rates of response because the number of words spelled correctly was measured and not the rate, or speed, at which they were spelled. However, the slope of each data path represents the different “rates” of mastering the spelling words in each session within the context of a total of 10 new words presented per session.

training (Slope b–c) that was considerably higher than his overall rate for that condition. At the same time his performance during the final four sessions of the high-density reinforcement condition (Slope d–e) shows a lower local response rate than his overall rate for that condition.
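As a hypothetical illustration of these calculations (the cumulative counts below are invented, not Neef and colleagues’ data), an overall or local rate is simply the slope between two points on a cumulative record:

```python
def response_rate(cumulative, start, end):
    """Slope of the straight line connecting two points on a
    cumulative record: responses gained per observation period."""
    return (cumulative[end] - cumulative[start]) / (end - start)

# Hypothetical cumulative word counts across 10 sessions (indices 0-9):
words = [0, 1, 3, 4, 6, 7, 9, 12, 16, 20]

overall = response_rate(words, 0, 9)  # whole condition: 20/9, about 2.2/session
local = response_rate(words, 5, 9)    # last 4 sessions only: 13/4 = 3.25/session
print(overall, local)
```

Here the local rate over the final four sessions exceeds the overall rate, the same kind of comparison made with Slopes b–c and a–c in Figure 11.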

A legend giving the slopes of some representative rates can aid considerably in the determination and comparison of relative response rates both within and across cumulative curves plotted on the same set of axes (e.g., see Kennedy & Souza, 1995, Figure 2). However, very high rates of responding are difficult to compare visually with one another on cumulative records.

Although the rate of responding is directly proportional to the slope of the curve, at slopes above 80 degrees small differences in angle represent very large differences in rate; and although these can be measured accurately, they cannot be evaluated easily by [visual] inspection. (Ferster & Skinner, 1957, pp. 24–25)

Even though cumulative records derived from continuous recording are the most directly descriptive displays of behavioral data available, two other features of behavior, in addition to the comparison of very high rates, can be difficult to determine on some cumulative graphs. One, although the total number of responses since data collection began can be easily seen on a cumulative graph, the number of responses recorded for any given session can be hard to ascertain, given the number of data points and the scaling of the vertical axis. Two, gradual changes in slope from one rate to another can be hard to detect on cumulative graphs.



Figure 12 Same set of hypothetical data plotted on noncumulative and cumulative graphs. Cumulative graphs more clearly reveal patterns of and changes in responding for behaviors that can occur only once during each period of measurement.

From Working with Parents of Handicapped Children, p. 100, by W. L. Heward, J. C. Dardig, and A. Rossett, 1979, Columbus, OH: Charles E. Merrill. Copyright 1979 by Charles E. Merrill. Used by permission.

Four situations in which a cumulative graph may be preferable to a noncumulative line graph are as follows. First, cumulative records are desirable when the total number of responses made over time is important or when progress toward a specific goal can be measured in cumulative units of behavior. The number of new words learned, dollars saved, or miles trained for an upcoming marathon are examples. One look at the most recent data point on the graph reveals the total amount of behavior up to that point in time.

Second, a cumulative graph might also be more effective than noncumulative graphs when the graph is used as a source of feedback for the participant. This is because both total progress and relative rate of performance are easily detected by visual inspection (Weber, 2002).

Third, a cumulative record should be used when the target behavior is one that can occur or not occur only once per observation session. In these instances the effects of any intervention are easier to detect on a cumulative graph than on a noncumulative graph. Figure 12 shows the same data plotted on a noncumulative graph and a cumulative graph. The cumulative graph clearly shows a relation between behavior and intervention, whereas the noncumulative graph gives the visual impression of greater variability in the data than really exists.
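The advantage for once-per-session behaviors can be illustrated with invented data: a simple running sum turns an erratic-looking record of 0s and 1s into a curve whose slope visibly changes at the phase boundary.

```python
from itertools import accumulate

# Hypothetical once-per-day data: 1 = behavior occurred, 0 = it did not.
baseline1    = [1, 0, 1, 1, 0]   # behavior occurs on most days
intervention = [0, 0, 1, 0, 0]   # behavior rarely occurs

daily = baseline1 + intervention
cumulative = list(accumulate(daily))
print(cumulative)  # [1, 1, 2, 3, 3, 3, 3, 4, 4, 4]

# On the cumulative plot the slope flattens when the intervention
# begins, even though the day-by-day 0/1 record looks erratic.
```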

Fourth, cumulative records can “reveal the intricate relations between behavior and environmental variables” (Johnston & Pennypacker, 1993a, p. 317). Figure 13 is

an excellent example of how a cumulative graph enables a detailed analysis of behavior change (Hanley, Iwata, & Thompson, 2001). By plotting the data from single sessions cumulatively by 10-second intervals, the researchers revealed patterns of responding not shown by a graph of session-by-session data. Comparing the data paths for the three sessions for which the results were graphed cumulatively (Mult #106, Mixed #107, and Mixed #112) revealed two undesirable patterns of responding (in this case, pushing a switch that operated a voice output device that said “talk to me, please” as an alternative to self-injurious behavior and aggression) that are likely to occur during mixed schedules, and the benefits of including schedule-correlated stimuli (Mult #106).

Semilogarithmic Charts

All of the graphs discussed so far have been equal-interval graphs on which the distance between any two consecutive points on each axis is always the same. On the x axis the distance between Session 1 and Session 2 is equal to the distance between Session 11 and Session 12; on the y axis, the distance between 10 and 20 responses per minute is equal to the distance between 35 and 45 responses per minute. On an equal-interval graph equal absolute changes in behavior, whether an increase or decrease in performance, are expressed by equal distances on the y axis.



Figure 4. Cumulative number of alternative responses across schedule components for three sessions from Julie’s assessment of schedule-correlated stimuli. Open symbols represent two mixed-schedule sessions; filled symbols represent a single multiple-schedule session.

Figure 13 Cumulative record used to make a detailed analysis and comparison of behavior across components of multiple- and mixed-reinforcement schedules within specific sessions of a study.

From “Reinforcement Schedule Thinning Following Treatment with Functional Communication Training” by G. P. Hanley, B. A. Iwata, and R. H. Thompson, 2001, Journal of Applied Behavior Analysis, 34, p. 33. Copyright 2001 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.

[Two panels plotting response rate over time: an equal-distance y axis (left) and a logarithmic y axis, log base 2 (right).]

Figure 14 Same set of data plotted on equal-interval arithmetic scale (left) and on equal-proportion ratio scale (right).

Another way of looking at behavior change is to examine proportional or relative change. Logarithmic scales are well suited to display and communicate proportional change. On a logarithmic scale equal relative changes in the variable being measured are represented by equal distances. Because behavior is measured and charted over time, which progresses in equal intervals, the x axis is marked off in equal intervals and only the y axis is scaled logarithmically. Hence, the term semilogarithmic chart refers to graphs in which only one axis is scaled proportionally.

On semilog charts all behavior changes of equal proportion are shown by equal vertical distances on the vertical axis, regardless of the absolute values of those changes. For example, a doubling of response rate from 4 to 8 per minute would appear on a semilogarithmic chart as the same amount of change as a doubling of 50 to 100 responses per minute. Likewise, a decrease in responding from 75 to 50 responses per minute (a decrease of one third) would occupy the same distance on the vertical axis as a change from 12 to 8 responses per minute (a decrease of one third).

Figure 14 shows the same data graphed on an equal-interval chart (sometimes called arithmetic or add-subtract charts) and on a semilogarithmic chart (sometimes called ratio or multiply-divide charts). The behavior change that appears as an exponential curve on the arithmetic chart is a straight line when plotted on the semilog chart. The vertical axis in the semilog chart in Figure 14 is scaled by log-base-2 or X2 cycles, which means that each cycle going up on the y axis represents a times-2 increase (i.e., a doubling) of the cycle below it.
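The equal-proportion property is easy to verify numerically. A value's vertical position on a log-base-2 axis is proportional to its base-2 logarithm, so each of the changes described above spans the same vertical distance:

```python
import math

# Equal ratios occupy equal distances on a log-base-2 y axis:
d_small = math.log2(8) - math.log2(4)     # doubling from 4 to 8 per minute
d_large = math.log2(100) - math.log2(50)  # doubling from 50 to 100 per minute
print(d_small, d_large)  # both span one x2 cycle (a distance of 1.0)

d_down1 = math.log2(75) - math.log2(50)   # a decrease of one third
d_down2 = math.log2(12) - math.log2(8)    # also a decrease of one third
print(math.isclose(d_down1, d_down2))     # True: same vertical distance
```

On an equal-interval axis, by contrast, the 50-to-100 change would span more than ten times the distance of the 4-to-8 change, even though both are doublings.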

Standard Celeration Charts

In the 1960s, Ogden Lindsley developed the Standard Celeration Chart to provide a standardized means of charting and analyzing how frequency of behavior


[Likeness of the daily per-minute Standard Celeration Chart, with numbered charting conventions (charted day, phase change line, celeration line, aim star, counting-time bar, etc.) and labelled blanks identifying the supervisor, adviser, manager, timer, counter, charter, and performer.]

Figure 15 Standard Celeration Chart showing basic charting conventions. See Table 1 for explanation.

From the Journal of Precision Teaching and Celeration, 19(1), p. 51. Copyright 2003 by The Standard Celeration Society. Used by permission.

changes over time (Lindsley, 1971; Pennypacker, Gutierrez, & Lindsley, 2003). The Standard Celeration Chart is a semilogarithmic chart with six X10 cycles on the vertical axis that can accommodate response rates as low as 1 per 24 hours (0.000695 per minute) or as high as 1,000 per minute.

There are four standard charts, differentiated from one another by the scaling on the horizontal axis: a daily chart with 140 calendar days, a weekly chart, a monthly chart, and a yearly chart. The daily chart shown in Figure 15 is used most often. Table 1 describes major parts of the Standard Celeration Chart and basic charting conventions.

The size of the chart and the consistent scaling of the y axis and x axis do not make the Standard Celeration Chart standard, as is commonly believed. What makes

the Standard Celeration Chart standard is its consistent display of celeration, a linear measure of frequency change across time, a factor by which frequency multiplies or divides per unit of time. The terms acceleration and deceleration are used to describe accelerating performances or decelerating performances.

A line drawn from the bottom left corner to the top right corner has a slope of 34° on all Standard Celeration Charts. This slope has a celeration value of X2 (read as “times-2”; celerations are expressed with multiples or divisors). A X2 celeration is a doubling in frequency per celeration period. The celeration period for the daily chart is per week; it is per month for the weekly chart, per 6 months for the monthly chart, and per 5 years for the yearly chart.
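The idea of a multiplying factor per celeration period can be sketched as a two-point calculation. The values below are hypothetical, and real celeration lines are fit through many charted days rather than two points; this sketch only shows the arithmetic of expressing frequency change as a weekly multiplier.

```python
def weekly_celeration(f1, f2, days_apart):
    """Factor by which frequency multiplies per week, given two
    frequencies (counts per minute) recorded days_apart days apart."""
    return (f2 / f1) ** (7 / days_apart)

# Hypothetical: 10/min rising to 40/min over 14 days is two doublings
# in two weeks, i.e., a x2 celeration:
print(weekly_celeration(10, 40, 14))  # 2.0
```

A value below 1.0 from the same formula would correspond to a deceleration, conventionally written as a divisor (e.g., ÷2 rather than ×0.5).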


Table 1 Basic Charting Conventions for the Daily Standard Celeration Chart (See also Figure 15)

1. Charted day. Definition: A day on which the behavior is recorded and charted. Convention: (1) Chart the behavior frequency on the appropriate day line; (2) connect charted days except across phase change lines, no chance days, and ignored days.

a) Acceleration target frequency. Definition: Responses of the performer intended to accelerate. Convention: Chart a dot (•) on the appropriate day line.

b) Deceleration target frequency. Definition: Responses of the performer intended to decelerate. Convention: Chart an (x) on the appropriate day line.

2. No chance day. Definition: A day on which the behavior had no chance to occur. Convention: Skip day on daily chart.

3. Ignored day. Definition: A day on which the behavior could have occurred but no one recorded it. Convention: Skip day on daily chart. (Connect data across ignored days.)

4. Counting-time bar (aka record floor). Definition: Designates on the chart the performer’s lowest possible performance (other than zero) in a counting time. Always designated as “once per counting time.” Convention: Draw a solid horizontal line from the Tuesday to Thursday day lines on the chart at the “counting-time bar.”

5. Zero performance. Definition: No performance recorded during the recording period. Convention: Chart on the line directly below the “counting-time bar.”

6. Phase change line. Definition: A line drawn in the space between the last charted day of one intervention phase and the first charted day of a new intervention phase. Convention: Draw a vertical line between the intervention phases, from the top of the data to the “counting-time bar.”

7. Change indicator. Definition: Words, symbols, or phrases written on the chart in the appropriate phase to indicate changes during that phase. Convention: Write the word, symbol, and/or phrase. An arrow (➞) may be used to indicate the continuance of a change into a new phase.

8. Aim star. Definition: A symbol used to represent (a) the desired frequency and (b) the desired date to achieve the frequency. Convention: Place the point of the caret (^ for acceleration data, v for deceleration data) on the desired aim date, and place the horizontal bar on the desired frequency. The caret and horizontal line will create a “star.”

9. Calendar synchronize. Definition: A standard time for starting all charts. It requires three charts to cover a full year. The Sunday before Labor Day begins the first week of the first chart. The twenty-first week after Labor Day begins the second chart. The forty-first week after Labor Day begins the third chart.

10. Celeration line. Definition: A straight line drawn through 7–9 or more charted days. This line indicates the amount of improvement that has taken place in a given period of time. A new line is drawn for each phase for both acceleration and deceleration targets. (Note: For nonresearch projects it is acceptable to draw freehand celeration lines.)

From the Journal of Precision Teaching and Celeration, 19(1), pp. 49–50. Copyright 2002 by The Standard Celeration Society. Used by permission.



Figure 16 Standard Celeration Chart showing advanced charting conventions. See Table 2 for explanation.

From the Journal of Precision Teaching and Celeration, 19(1), p. 54. Copyright 2002 by The Standard Celeration Society. Used by permission.

8Detailed descriptions and examples of precision teaching are provided by the Journal of Precision Teaching and Celeration; the Standard Celeration Society’s Web site (http://celeration.org/); Binder (1996); Kubina and Cooper (2001); Lindsley (1990, 1992, 1996); Potts, Eshleman, and Cooper (1993); West, Young, and Spooner (1990); and White and Haring (1980).

An instructional decision-making system, called precision teaching, has been developed for use with the Standard Celeration Chart.8 Precision teaching is predicated on the position that (a) learning is best measured as a change in response rate, (b) learning most often occurs through proportional changes in behavior, and (c) past changes in performance can project future learning.

Precision teaching focuses on celeration, not on the specific frequency of correct and incorrect responses as many believe. That frequency is not an emphasis on the Chart is clear because the Chart uses estimations for most frequency values. A practitioner or researcher might say, “I don’t use the chart because I can’t tell by looking at the chart if the student emitted 24, 25, 26, or 27 responses.”

However, the purpose of the Chart makes such a fine discrimination irrelevant because celeration, not specific frequency, is the issue. A frequency of 24 or 27 will not change the line of progress—the celeration course.

Advanced charting conventions used by precision teachers are illustrated in Figure 16 and described in Table 2. Detailed explanations of the Standard Celeration Chart and its uses can be found in Cooper, Kubina, and Malanga (1998); Graf and Lindsley (2002); and Pennypacker, Gutierrez, and Lindsley (2003).

Scatterplots

A scatterplot is a graphic display that shows the relative distribution of individual measures in a data set with respect to the variables depicted by the x and y axes. Data points on a scatterplot are unconnected. Scatterplots show how much changes in the value of the variable depicted by one axis correlate with changes in the value of the variable represented by the other axis. Patterns of data


Table 2 Advanced Charting Conventions for the Daily Standard Celeration Chart (See also Figure 16)

Frequency:

1. Frequency change (FC) (aka frequency jump up or jump down). Definition: The multiply “×” or divide “÷” value that compares the final frequency of one phase to the beginning frequency in the next phase. Compute this by comparing (1) the frequency where the celeration line crosses the last day of one phase to (2) the frequency where the celeration line crosses the first day of the next phase (e.g., a frequency jump from 6/minute to 18/minute, FC = × 3.0). Convention: Place an “FC =” in the upper left cell of the analysis matrix. Indicate the value with a “×” or “÷” sign (e.g., FC = × 3.0).

Celeration:

2. Celeration calculation (quarter-intersect method). Definition: The process for graphically determining a celeration line (aka “the line of best fit”): (1) divide the frequencies for each phase into four equal quarters (include ignored and no chance days), (2) locate the median frequency for each half, and (3) draw a celeration line connecting the quarter-intersect points. Convention: See advanced charting conventions sample chart.

3. Celeration finder. Definition: A piece of mylar with standard celeration lines that can be used to compute celeration line values. Convention: Buy commercially or copy and cut out part of the vertical axis on the Standard Celeration Chart.

4. Projection line. Definition: A dashed line extending to the future from the celeration line. The projection offers a forecast that enables the calculation of the celeration change value. Convention: See advanced charting conventions sample chart.

5. Celeration change (CC) (aka celeration turn up or turn down). Definition: The multiply “×” or divide “÷” value that compares the celeration of one phase to the celeration in the next phase (e.g., a celeration turn down from × 1.3 to ÷ 1.3, CC = ÷ 1.7). Convention: Place a “CC =” in the upper middle cell of the analysis matrix with the value indicated with a “×” or “÷” sign (e.g., CC = ÷ 1.7).

6. Celeration collection. Definition: A group of three or more celerations for different performers relating to the same behavior over approximately the same time period. Convention: Numerically identify the high, middle, and low celeration in the celeration collection and indicate the total number of celerations in the collection.

7. Bounce change (BC). Definition: The multiply “×” or divide “÷” value that compares the bounce in one phase to the bounce in the next phase. Computed by comparing (1) the total bounce of one phase to (2) the total bounce of the next phase (e.g., a bounce change from × 5.0 to × 1.4, BC = ÷ 3.6). Convention: Place a “BC =” in the upper right cell of the analysis matrix with the value indicated with a multiply “×” or divide “÷” symbol (e.g., BC = ÷ 3.6).

8. Analysis matrix. Definition: The analysis matrix provides the numeric change information regarding the effects of the independent variable(s) on frequency, celeration, and bounce between two phases. Convention: Place the analysis matrix between the two phases being compared. For acceleration targets place the matrix above the data. For deceleration targets place the matrix below the data.

Optional:

9. Frequency change p-value (FCP). Definition: The frequency change p-value is the probability that the noted change in frequency would have occurred by chance. (Use the Fisher exact probability formula to compute the p-value.) Convention: Use “FCP =” and indicate the p-value in the lower left cell of the analysis matrix (e.g., FCP = .0001).

10. Celeration change p-value (CCP). Definition: The celeration change p-value is the probability that the change noted in celeration would have occurred by chance. (Use the Fisher exact probability formula to compute the p-value.) Convention: Use “CCP =” and indicate the p-value in the lower middle cell of the matrix (e.g., CCP = .0001).

11. Bounce change p-value (BCP). Definition: The bounce change p-value is the probability that the change noted in bounce would have occurred by chance. (Use the Fisher exact probability formula to compute the p-value.) Convention: Use “BCP =” and indicate the p-value in the lower right cell of the analysis matrix (e.g., BCP = .0001).

From the Journal of Precision Teaching and Celeration, 19(1), pp. 52–53. Copyright 2002 by The Standard Celeration Society. Used by permission.
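Celeration-chart conventions express every phase-to-phase comparison as a multiply ("×") or divide ("÷") value rather than a signed difference. A minimal sketch of that computation (the function name and one-decimal rounding are illustrative assumptions, not part of the Standard Celeration Society's specification):

```python
def change_value(first_phase: float, next_phase: float) -> str:
    """Express the change from one phase value to the next as a
    multiply ("×") or divide ("÷") value, celeration-chart style.

    An increase is reported with "×", a decrease with "÷"; the
    reported number is always >= 1.
    """
    if next_phase >= first_phase:
        return f"×{round(next_phase / first_phase, 1)}"
    return f"÷{round(first_phase / next_phase, 1)}"

# A bounce change from ×5.0 down to ×1.4 is reported as a divide value:
print(change_value(5.0, 1.4))  # ÷3.6
```

The same function covers celeration changes (e.g., a drop from ×3.4 to ×2.0 would be reported as CC = ÷1.7).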

163
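The optional p-values in the table all rely on the Fisher exact probability formula. For a 2 × 2 table, the one-tailed p-value is the sum of hypergeometric probabilities for all tables at least as extreme as the one observed; a standard-library sketch (the function name and the upper-tail direction are illustrative choices):

```python
from math import comb

def fisher_exact_one_tailed(a: int, b: int, c: int, d: int) -> float:
    """One-tailed Fisher exact p-value for the 2x2 table [[a, b], [c, d]].

    Sums hypergeometric probabilities P(X = k) for k = a, a+1, ...
    with the row and column totals held fixed (upper tail).
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    denom = comb(n, col1)
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        if col1 - k <= row2:  # complementary cell must be feasible
            p += comb(row1, k) * comb(row2, col1 - k) / denom
    return p

# Fisher's classic tea-tasting table: 3 of 4 cups classified correctly.
print(round(fisher_exact_one_tailed(3, 1, 1, 3), 4))  # 0.2429
```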

[Figure 17: scatterplot. Vertical axis: Following Distance in Meters (25–60); horizontal axis: Speed in Miles per Hour; regions labeled Safe and At-Risk; separate symbols for Young, Middle, and Older Males and Females.]

Figure 17 Scatterplot showing how the behavior of individuals from different demographic groups relates to a standard measure of safe driving.

From "A Technology to Measure Multiple Driving Behaviors without Self-Report or Participant Reactivity" by T. E. Boyce and E. S. Geller, 2001, Journal of Applied Behavior Analysis, 34, p. 49. Copyright 2001 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.

points falling along lines on the plane or clusters suggest certain relationships.

Scatterplots can reveal relationships among different subsets of data. For example, Boyce and Geller (2001) created the scatterplot shown in Figure 17 to see how the behavior of individuals from different demographic groups related to a ratio of driving speed and following distance that represents one element of safe driving (e.g., the proportion of data points for young males falling in the at-risk area of the graph compared to the proportion of drivers from other groups). Each data point shows a single driver's behavior in terms of speed and following distance and whether the speed and following distance combination is considered safe or at risk for accidents. Such data could be used to target interventions for certain demographic groups.

Applied behavior analysts sometimes use scatterplots to discover the temporal distribution of a target behavior (e.g., Kahng et al., 1998; Symons, McDonald, & Wehby, 1998; Touchette, MacDonald, & Langer, 1985). Touchette and colleagues described a procedure for observing and recording behavior that produces a scatterplot that graphically shows whether the behavior's occurrence is typically associated with certain time periods.

Constructing Line Graphs

The skills required to construct effective, distortion-free graphic displays are as important as any in the behavior analyst's repertoire. As applied behavior analysis has developed, so have certain stylistic conventions and expectations regarding the construction of graphs. An effective graph presents the data accurately, completely, and clearly, and makes the viewer's task of understanding the data as easy as possible. The graph maker must strive to fulfill each of these requirements while remaining alert to features in the graph's design or construction that might create distortion and bias, whether the graph maker's or that of a future viewer interpreting the extent and nature of the behavior change depicted by the graph.

Despite the graph's prominent role in applied behavior analysis, relatively few detailed treatments of how to construct behavioral graphs have been published. Notable exceptions have been chapters by Parsonson and Baer (1978, 1986) and a discussion of graphic display tactics by Johnston and Pennypacker (1980, 1993a). Recommendations from these excellent sources and others (Journal of Applied Behavior Analysis, 2000; American Psychological Association, 2001; Tufte, 1983, 1990) contributed to the preparation of this section. Additionally, hundreds of graphs published in the applied behavior analysis literature were examined in an effort to discover those features that communicate necessary information most clearly.

Although there are few hard-and-fast rules for constructing graphs, adhering to the following conventions will result in clear, well-designed graphic displays consistent in format and appearance with current practice. Although most of the recommendations are illustrated by graphs presented throughout this text, Figures 18 and 19 have been designed to serve as models for most of the practices suggested here. The recommendations given here generally apply to all behavioral graphs. However, each data set and the conditions under which the data were obtained present their own challenges to the graph maker.

Drawing, Scaling, and Labeling Axes

Ratio of the Vertical and Horizontal Axes

The relative length of the vertical axis to the horizontal axis, in combination with the scaling of the axes, determines the degree to which a graph will accentuate or minimize the variability in a given data set. The legibility of a graph is enhanced by a balanced ratio between the height and width so that data are neither too close together nor too spread apart. Recommendations in the behavioral literature for the ratio, or relative length, of the vertical axis to the horizontal axis range from 5:8 (Johnston & Pennypacker, 1980) to 3:4 (Katzenberg, 1975). Tufte (1983), whose book The Visual Display of Quantitative Information is a wonderful storehouse of guidelines and examples of effective graphing techniques, recommends a 1:1.6 ratio of vertical axis to horizontal axis.

Constructing and Interpreting Graphic Displays of Behavioral Data

[Figure 18: graph of hypothetical data. Vertical axis: Percent of Intervals; horizontal axis: School Days; conditions: Baseline, Self-Record, Self-Record + Tokens, Follow-Up. Inner caption: Figure 1. Percent of 10-second intervals in which an 8-year-old boy emitted appropriate and inappropriate study behaviors. Each interval was scored as appropriate, inappropriate, or neither, so the two behaviors do not always total 100%.]

Figure 18 Graph of hypothetical data illustrating a variety of conventions and guidelines for graphic display.

A vertical axis that is approximately two-thirds the length of the horizontal axis works well for most behavioral graphs. When multiple sets of axes will be presented atop one another in a single figure and/or when the number of data points to be plotted on the horizontal axis is very large, the length of the vertical axis relative to the horizontal axis can be reduced (as shown in Figures 3 and 7).

Scaling the Horizontal Axis

The horizontal axis should be marked in equal intervals, with each unit representing from left to right the chronological succession of equal time periods or response opportunities in which the behavior was (or will be) measured and from which an interpretation of behavior change is to be made (e.g., days, sessions, trials). When many data points are to be plotted, it is not necessary to mark each point along the x axis. Instead, to avoid unnecessary clutter, regularly spaced points on the horizontal axis are indicated with tic marks numbered by 5s, 10s, or 20s.

When two or more sets of axes are stacked vertically and each horizontal axis represents the same time frame, it is not necessary to number the tic marks on the horizontal axes of the upper tiers. However, the hatch marks corresponding to those numbered on the bottom tier should be placed on each horizontal axis to facilitate comparison of performance across tiers at any given point in time (see Figure 4).

Representing Discontinuities of Time on the Horizontal Axis

Behavior change, its measurement, and all manipulations of treatment or experimental variables occur within and across time. Therefore, time is a fundamental variable in all experiments that should not be distorted or arbitrarily represented in a graphic display. Each equally spaced unit on the horizontal axis should represent an equal passage of time. Discontinuities in the progression of time on the horizontal axis should be indicated by a scale break: an open spot in the axis with a squiggly line at each end. Scale breaks on the x axis can also be used to signal periods of time when data were not collected or when regularly spaced data points represent consecutive measurements made at unequal intervals (see the numbering of school days for the follow-up condition in Figure 18).

[Figure 19: two stacked graphs of hypothetical data. Vertical axes: Words Read per Minute, with tier labels Student 1 and Student 2; horizontal axis: Sessions; final condition: Follow-Up. Inner caption: Figure 1. Number of words read correctly and incorrectly during 1-minute probes following each session. Arrows under horizontal axes indicate sessions in which student used reading material brought from home. Break in data path for Student 2 was caused by 2 days' absence.]

Figure 19 Graph of hypothetical data illustrating a variety of conventions and guidelines for graphic display.

When measurement occurs across consecutive observations (e.g., stories read, meals, interactions) rather than standard units of time, the horizontal axis still serves as a visual representation of the progression of time because the data plotted against it have been recorded one after the other. The text accompanying such a figure should indicate the real time in which the consecutive measurements were made (e.g., "Two or three peer tutoring sessions were conducted each school week"), and discontinuities in that time context should be clearly marked with scale breaks (see Figure 19).

Labeling the Horizontal Axis

The dimension by which the horizontal axis is scaled should be identified in a brief label printed and centered below and parallel to the axis.

Scaling the Vertical Axis

On an equal-interval graph the scaling of the vertical axis is the most significant feature of the graph in terms of its portrayal of changes in level and variability in the data. Common practice is to mark the origin at 0 (on cumulative graphs the bottom of the vertical axis must be 0) and then to mark off the vertical axis so that the full range of values represented in the data set is accommodated. Increasing the distance on the vertical axis between each unit of measurement magnifies the variability in the data, whereas contracting the units of measurement on the vertical axis minimizes the portrayal of variability in the data set. The graph maker should plot the data set against several different vertical axis scales, watching for distortion of the graphic display that might lead to inappropriate interpretations.

The social significance of various levels of behavior change for the behavior being graphed should be considered in scaling the vertical axis. If relatively small numerical changes in performance are socially significant, the y-axis scale should reflect a smaller range of values. For example, to display data most effectively from a training program in which an industrial employee's percentage of correctly executed steps in a safety checklist increased from an unsafe preintervention range of 80% to 90% to an accident-free postintervention level of 100%, the vertical axis should focus on the 80% to 100% range. On the other hand, the scaling of the vertical axis should be contracted when small numerical changes in behavior are not socially important and the degree of variability obscured by the compressed scale is of little interest.

Horizontal numbering of regularly spaced tic marks on the vertical axis facilitates use of the scale. The vertical axis should not be extended beyond the hatch mark indicating the highest value on the axis scale.

When the data set includes several measures of 0, starting the vertical axis at a point slightly above the horizontal axis keeps data points from falling directly on the axis. This produces a neater graph and helps the viewer discriminate 0-value data points from those representing measures close to 0 (see Figure 18).

In most instances, scale breaks should not be used on the vertical axis, especially if a data path would cross the break. However, when two sets of data with widely different and nonoverlapping ranges are displayed against the same y axis, a scale break can be used to separate the range of measures encompassed by each data set (see Figure 19).

In multiple-tier graphs, equal distances on each vertical axis should represent equal changes in behavior to aid the comparison of data across tiers. Also, whenever possible, similar positions on each vertical axis of multiple-tier graphs should represent similar absolute values of the dependent variable. When the differences in behavioral measures from one tier to another would result in an overly long vertical axis, a scale break can be used to highlight the difference in absolute values, again aiding a point-to-point comparison of y axis positions.

Labeling the Vertical Axis

A brief label, printed and centered to the left and parallel to the vertical axis, should identify the dimension by which the axis is scaled. On multiple-tiered graphs, one label identifying the dimension portrayed on all of the vertical axes can be centered along the axes as a group. Additional labels identifying the different behaviors (or some other relevant aspect) graphed within each set of axes are sometimes printed to the left and parallel to each vertical axis. These individual tier labels should be printed to the right of and in smaller-sized font than the label identifying the dimension by which all of the vertical axes are scaled.

Identifying Experimental Conditions

Condition Change Lines

Vertical lines extending upward from the horizontal axis indicate changes in treatment or experimental procedures. Condition change lines should be placed after (to the right of) the data point representing the last measure prior to the change in conditions signified by the line and before (to the left of) the data point representing the first measure obtained after the change in procedure. In this way data points fall clearly on either side of change lines and never on the lines themselves. Drawing condition change lines to a height equal to the height of the vertical axis helps the viewer estimate the value of data points near the top of the vertical axis range.

Condition change lines can be drawn with either solid or dashed lines. However, when an experiment or a treatment program includes relatively minor changes within an ongoing condition, a combination of solid and dashed lines should be used to distinguish the major and minor changes in conditions. For example, the solid lines in Figure 18 mark changes from baseline, to self-record, to self-record + tokens, to follow-up conditions, and dashed lines indicate changes in the schedule of reinforcement from CRF, to VR 5, to VR 10 within the self-record + tokens condition.
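These placement rules translate directly into plotting code. A matplotlib sketch (the data, condition boundaries, and file name are hypothetical): a solid line marks a major condition change and a dashed line marks a minor change within a condition, each placed midway between the adjacent data points so that no data point falls on a line:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

sessions = list(range(1, 13))
percents = [20, 25, 22, 60, 65, 70, 80, 85, 82, 90, 88, 92]

fig, ax = plt.subplots()
ax.plot(sessions, percents, "ko-")

# Major condition change (baseline -> treatment): solid line drawn
# between the last baseline point (session 3) and the first
# treatment point (session 4).
ax.axvline(x=3.5, color="black", linestyle="-")
# Minor change within the treatment condition (e.g., a schedule
# change): dashed line.
ax.axvline(x=6.5, color="black", linestyle="--")

ax.set_xlabel("Sessions")
ax.set_ylabel("Percent of Intervals")
fig.savefig("condition_lines.png")
```

Because the lines sit at half-integer x values, every data point falls clearly to one side of a change line, as the text recommends.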

When the same manipulation of an independent variable occurs at different points along the horizontal axes of multiple-tiered graphs, a dog-leg connecting the condition change lines from one tier to the next makes it easy to follow the sequence and timing of events in the experiment (see Figure 19).

Unplanned events that occur during an experiment or treatment program, as well as minor changes in procedure that do not warrant a condition change line, can be indicated by placing small arrows, asterisks, or other symbols next to the relevant data points (see Figure 6) or just under the x axis (see Figure 19). The figure caption should explain any special symbols.

Condition Labels

Labels identifying the conditions in effect during each period of an experiment are centered above the space delineated by the condition change lines. Whenever space permits, condition labels should be parallel to the horizontal axis. Labels should be brief but descriptive (e.g., Contingent Praise is preferable to Treatment), and the labels should use the same terms or phrases used in the accompanying text describing the condition. Abbreviations may be used when space or design limitations prohibit printing the complete label. A single condition label should be placed above and span across labels identifying minor changes within that condition (see Figure 18). Numbers are sometimes added to condition labels to indicate the number of times the condition has been in effect during the study (e.g., Baseline 1, Baseline 2).

Plotting Data Points and Drawing Data Paths

Data Points

When graphing data by hand, behavior analysts must take great care to ensure that they plot each data point exactly on the coordinate of the horizontal and vertical axis values of the measurement it represents. The inaccurate placement of data points is an unnecessary source of error in graphic displays, which can lead to mistakes in clinical judgment and/or experimental method. Accurate placement is aided by careful selection of graph paper with grid lines sized and spaced appropriately for the data to be plotted. When many different values must be plotted within a small distance on the vertical axis, a graph paper with many grid lines per inch should be used.9

Should a data point fall beyond the range of values described by the vertical axis scale, it is plotted just above the scale it transcends with the actual value of the measurement printed in parentheses next to the data point. Breaks in the data path leading to and from the off-the-scale data point also help to highlight its discrepancy (see Figure 19, Session 19).

Data points should be marked with bold symbols that are easily discriminated from the data path. When only one set of data is displayed on a graph, solid dots are most often used. When multiple data sets are plotted on the same set of axes, a different geometric symbol should be used for each set of data. The symbols for each data set should be selected so that the value of each data point can be determined when data points fall near or on the same coordinates on the graph (see Figure 18, Sessions 9–11).

9Although most graphs published in behavior analysis journals since the mid-1990s were constructed with computer software programs that ensure precise placement of data points, knowing how to draw graphs by hand is still an important skill for applied behavior analysts, who often use hand-drawn graphs to make treatment decisions on a session-by-session basis.

Data Paths

Data paths are created by drawing a straight line from the center of each data point in a given data set to the center of the next data point in the same set. All data points in a given data set are connected in this manner with the following exceptions:

• Data points falling on either side of a condition change line are not connected.

• Data points should not be connected across a significant span of time in which behavior was not measured. To do so implies that the resultant data path represents the level and trend of the behavior during the span of time in which no measurement was conducted.

• Data points should not be connected across discontinuities of time in the horizontal axis (see Figure 18, 1-week school vacation).

• Data points on either side of a regularly scheduled measurement period in which data were not collected or were lost, destroyed, or otherwise not available (e.g., participant's absence, recording equipment failure) should not be joined together (see Figure 18, baseline condition of bottom graph).

• Follow-up or postcheck data points should not be connected with one another (see Figure 18) unless they represent successive measures spaced in time in the same manner as measures obtained during the rest of the experiment (see Figure 19).

• If a data point falls beyond the values described by the vertical axis scale, breaks should be made in the data path connecting that data point with those that fall within the described range (Figure 19, top graph, Session 19 data point).
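The connection rules above carry over naturally to plotting code. In matplotlib, inserting a missing value (NaN) breaks the data path at that point, and plotting each condition with a separate call keeps paths from crossing the condition change line (the data and session numbers here are invented for illustration):

```python
import math
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

baseline = [10, 12, math.nan, 11, 13]   # NaN = session with lost data
treatment = [30, 35, 38, 40, 42, 41]

fig, ax = plt.subplots()
# Separate plot calls per condition: no segment is drawn across the
# condition change line, matching the first rule above.
ax.plot(range(1, 6), baseline, "ko-")
ax.plot(range(6, 12), treatment, "ko-")
ax.axvline(x=5.5, color="black")

ax.set_xlabel("Sessions")
ax.set_ylabel("Responses per Minute")
fig.savefig("data_paths.png")
```

The NaN at session 3 leaves a gap in the baseline path, visually marking the session in which data were not available rather than interpolating across it.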

When multiple data paths are displayed on the same graph, different styles of lines, in addition to different symbols for the data points, may be used to help distinguish one data path from another (see Figure 19). The behavior represented by each data path should be clearly identified, either by printed labels with arrows drawn to the data path (see Figures 18 and 19) or by a legend showing models of the symbols and line styles (see Figure 13). When two data sets travel the same path, their lines should be drawn close to and parallel with one another to help clarify the situation (see Figure 18, Sessions 9–11).

Writing the Figure Caption

Printed below the graph, the figure caption should give a concise but complete description of the figure. The caption should also direct the viewer's attention to any features of the graph that might be overlooked (e.g., scale changes) and should explain the meaning of any added symbols representing special events.

Printing Graphs

Graphs should be printed in only one color: black. Although the use of color can enhance the attractiveness of a visual display and can effectively highlight certain features, it is discouraged in the scientific presentation of data. Every effort must be made to let the data stand on their own. The use of color can encourage perceptions of performance or experimental effects that differ from perceptions of the same data displayed in black. The fact that graphs and charts may be reproduced in journals and books is another reason for using black only.

Constructing Graphs with Computer Software

Software programs for producing computer-generated graphs have been available and are becoming both increasingly sophisticated and easier to use. Most of the graphs displayed throughout this book were constructed with computer software. Even though computer graphics programs offer a tremendous time savings over hand-plotted graphs, careful examination should be made of the range of scales available and the printer's capability for both accurate data point placement and precise printing of data paths.

Carr and Burkholder (1998) provided an introduction to creating single-subject design graphs with Microsoft Excel. Silvestri (2005) wrote detailed, step-by-step instructions for creating behavioral graphs using Microsoft Excel. Her tutorial can be found on the companion Web site that accompanies this text, www.prenhall.com/cooper.

Interpreting Graphically Displayed Behavioral Data

The effects of an intervention that produces dramatic, replicable changes in behavior that last over time are readily seen in a well-designed graphic display. People with little or no formal training in behavior analysis can read the graph correctly in such cases. Many times, however, behavior changes are not so large, consistent, or durable. Behavior sometimes changes in sporadic, temporary, delayed, or seemingly uncontrolled ways; and sometimes behavior may hardly change at all. Graphs displaying these kinds of data patterns often reveal equally important and interesting subtleties about behavior and its controlling variables.

Behavior analysts employ a systematic form of examination known as visual analysis to interpret graphically displayed data. Visual analysis of data from an applied behavior analysis study is conducted to answer two questions: (a) Did behavior change in a meaningful way, and (b) if so, to what extent can that change in behavior be attributed to the independent variable? Although there are no formalized rules for visual analysis, the dynamic nature of behavior, the scientific and technological necessity of discovering effective interventions, and the applied requirement of producing socially meaningful levels of performance all combine to focus the behavior analyst's interpretive attention on certain fundamental properties common to all behavioral data: (a) the extent and type of variability in the data, (b) the level of the data, and (c) trends in the data. Visual analysis entails an examination of each of these characteristics both within and across the different conditions and phases of an experiment.

As Johnston and Pennypacker (1993b) so aptly noted, "It is impossible to interpret graphic data without being influenced by various characteristics of the graph itself" (p. 320). Therefore, before attempting to interpret the meaning of the data displayed in a graph, the viewer should carefully examine the graph's overall construction. First, the figure legend, axis labels, and all condition labels should be read to gain a basic understanding of what the graph is about. The viewer should then look at the scaling of each axis, taking note of the location, numerical value, and relative significance of any scale breaks.

Next, a visual tracking of each data path should be made to determine whether data points are properly connected. Does each data point represent a single measurement or observation, or are the data "blocked" such that each data point represents an average or some other summary of multiple measurements? Do the data show the performance of an individual subject or the average performance of a group of subjects? If blocked or group data are displayed, is a visual representation of the range or variation of scores provided (e.g., Armendariz & Umbreit, 1999; Epstein et al., 1981); or do the data themselves allow determination of the amount of variability that was collapsed in the graph? For example, if the horizontal axis is scaled in weeks and each data point represents a student's average score for a week of daily five-word spelling tests, data points falling near 0 or at the top end of the closed scale, such as 4.8, pose little problem because they can be the result of only minimal variability in the daily scores for that week. However, data points near the center of the scale, such as 2 to 3, can result from either stable or highly variable performance.
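The spelling-test example can be made concrete: two weeks of daily five-word scores with the same weekly mean can have very different day-to-day variability, which the blocked (averaged) data point hides. A small standard-library sketch (the scores are invented for illustration):

```python
from statistics import mean, pstdev

# Daily scores (0-5 words correct) for two hypothetical weeks.
stable_week = [2, 3, 2, 3, 2]    # near-constant performance
variable_week = [0, 5, 0, 5, 2]  # wildly variable performance

# Blocking by week yields identical data points near mid-scale...
print(mean(stable_week), mean(variable_week))  # 2.4 2.4

# ...while the variability collapsed into each point differs greatly.
print(round(pstdev(stable_week), 2), round(pstdev(variable_week), 2))
```

Both weeks would plot as a single point at 2.4, which is exactly the ambiguity the paragraph describes for mid-scale blocked data.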


If the viewer suspects distortion produced by a graph's construction, interpretive judgments of the data should be withheld until the data are replotted on a new set of axes. Distortion due to a loss of important data features in summarizing is not so easily remedied. The viewer must consider the report incomplete and forestall any interpretive conclusions until he has access to the raw data.

Only when the viewer is satisfied that the graph is properly constructed and does not visually distort the behavioral and environmental events it represents should the data themselves be examined. The data are then inspected to find what they reveal about the behavior measured during each condition of the study.

Visual Analysis within Conditions

Data within a given condition are examined to determine (a) the number of data points, (b) the nature and extent of variability in the data, (c) the absolute and relative level of the behavioral measure, and (d) the direction and degree of any trend(s) in the data.

Number of Data Points

First, the viewer should determine the quantity of data reported during each condition. This entails a simple counting of data points. As a general rule, the more measurements of the dependent variable per unit of time and the longer the period of time in which measurement occurred, the more confidence one can have in the data path's estimation of the true course of behavior change (given, of course, a valid and accurate observation and measurement system).

The number of data points needed to provide a believable record of behavior during a given condition also depends on how many times the same condition has been repeated during the study. As a rule, fewer data points are needed in subsequent replications of an experimental condition if the data depict the same level and trend in performance that were noted in earlier applications of the condition.

The published literature of applied behavior analysis also plays a part in determining how many data points are sufficient. In general, less lengthy phases are required of experiments investigating relations between previously studied and well-established variables if the results are also similar to those of the previous studies. More data are needed to demonstrate new findings, whether or not new variables are under investigation.

There are other exceptions to the rule of the-more-data-the-better. Ethical concerns do not permit the repeated measurement of certain behaviors (e.g., self-injurious behavior) under an experimental condition in which there is little or no expectation for improvement (e.g., during a no-treatment baseline condition or a condition intended to reveal variables that exacerbate problem behavior). Also, there is little purpose in repeated measurement in situations in which the subject cannot logically perform the behavior (e.g., measuring the number of correct answers to long division problems when concurrent observations indicate that the student has not learned the necessary component skills of multiplication and subtraction). Nor are many data points required to demonstrate that behavior did not occur when in fact it had no opportunity to occur.

Familiarity with the response class measured and the conditions under which it was measured may be the graph viewer's biggest aid in determining how many data points constitute believability. The quantity of data needed in a given condition is also partly determined by the analytic tactics employed in a given study.

Variability

How often and the extent to which multiple measures of behavior yield different outcomes is called variability. A high degree of variability within a given condition usually indicates that the researcher or practitioner has achieved little control over the factors influencing the behavior. (An important exception to this statement is when the purpose of an intervention is to produce a high degree of variability.) In general, the greater the variability within a given condition, the greater the number of data points that are necessary to establish a predictable pattern of performance. By contrast, fewer data points are required to present a predictable pattern of performance when those data reveal relatively little variability.

Level

The value on the vertical axis scale around which a set of behavioral measures converge is called level. In the visual analysis of behavioral data, level is examined within a condition in terms of its absolute value (mean, median, and/or range) on the y-axis scale, the degree of stability or variability, and the extent of change from one level to another. The graphs in Figure 20 illustrate four different combinations of level and variability.

Figure 20 Four data paths illustrating (A) a low, stable level of responding; (B) a high, variable level of responding; (C) an initially high, stable level of responding followed by a lower, more variable level of responding; and (D) an extremely variable pattern of responding not indicative of any overall level of responding. Dashed horizontal lines on graphs B, C, and D represent the mean levels of responding.

The mean level of a series of behavioral measures within a condition can be graphically illustrated by the addition of a mean level line: a horizontal line drawn through a series of data points within a condition at that point on the vertical axis equaling the average value of the series of measures (e.g., Gilbert, Williams, & McLaughlin, 1996). Although mean level lines provide an easy-to-see summary of average performance within a given condition or phase, they should be used and interpreted with caution. With highly stable data paths, mean level lines pose no serious drawbacks. However, the less variability there is within a series of data points, the less need there is for a mean level line. For instance, a mean level line would serve little purpose in Graph A in Figure 20. And although mean level lines have been added to Graphs B, C, and D in Figure 20, Graph B is the only one of the three for which a mean level line provides an appropriate visual summary of level. The mean level line in Graph C is not representative of any measure of behavior taken during the phase. The data points in Graph C show a behavior best characterized as occurring at two distinct levels during the condition and beg for an investigation of the factor(s) responsible for the clear change in levels. The mean level line in Graph D is also inappropriate because the variability in the data is so great that only 4 of the 12 data points fall close to the mean level line.

A median level line is another method for visually summarizing the overall level of behavior in a condition. Because a median level line represents the most typical performance within a condition, it is not so influenced by one or two measures that fall far outside the range of the remaining measures. Therefore, one should use a median level line instead of a mean level line to graphically represent the central tendency of a series of data points that include several outliers, either high or low.
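The contrast between mean and median level lines can be sketched in a few lines of code. The function name and data below are hypothetical, chosen only for illustration; the computation follows the text's definitions (mean line = average of the series, median line = most typical value).

```python
from statistics import mean, median

def level_lines(measures):
    """Return the mean and median level-line values for a series of
    behavioral measures taken within a single condition."""
    return mean(measures), median(measures)

# A series with two high outliers: the median level line is the more
# representative summary of typical performance in the condition.
data = [4, 5, 4, 6, 5, 20, 22, 5]
mean_level, median_level = level_lines(data)
print(mean_level)    # 8.875
print(median_level)  # 5.0
```

As the text notes, the two outlying measures pull the mean level line well above the value around which most of the data points converge, while the median level line is unaffected by them.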

Change in level within a condition is determined by calculating the difference in absolute values on the y axis between the first and last data points within the condition. Another method, somewhat less influenced by variability in the data, is to compare the median value of the first three data points in the condition with the median value of the final three data points in the condition (Koenig & Kunzelmann, 1980).
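Both estimates of within-condition change in level described above can be sketched directly. The helper name and data series here are hypothetical illustrations, not part of the source.

```python
from statistics import median

def change_in_level(series):
    """Two estimates of change in level within a condition:
    (a) difference between the first and last data points, and
    (b) difference between the medians of the first three and the
        final three data points (less influenced by variability)."""
    first_last = series[-1] - series[0]
    median_based = median(series[-3:]) - median(series[:3])
    return first_last, median_based

measures = [2, 9, 3, 5, 6, 12, 7, 8]
print(change_in_level(measures))  # (6, 5)
```

With one aberrant early measure (the 9) and one late one (the 12), the median-based estimate (5) is less distorted by those single points than it would be if either fell first or last in the series.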

Trend

The overall direction taken by a data path is its trend. Trends are described in terms of their direction (increasing, decreasing, or zero trend), degree or magnitude, and extent of variability of data points around the trend. The graphs in Figure 21 illustrate a variety of trends. The direction and degree of trend in a series of graphed data points can be visually represented by a straight line drawn through the data called a trend line or line of progress. Several methods for calculating and fitting trend lines to a series of data have been developed. One can simply inspect the graphed data and draw a straight line that visually provides the best fit through the data. For this freehand method, Lindsley (1985) suggested ignoring one or two data points that fall well beyond the range of the remaining values in a data series and fitting the trend line to the remaining scores. Although the freehand method is the fastest way of drawing trend lines and can be useful for the viewer of a published graph, hand-drawn trend lines may not always result in an accurate representation of trend and are typically not found in graphs of published studies.

Figure 21 Data patterns indicating various combinations of trend direction, degree, and variability: (A) zero trend, high stability; (B) zero trend, high variability; (C) gradually increasing stable trend; (D) rapidly increasing variable trend; (E) rapidly decreasing stable trend; (F) gradually decreasing variable trend; (G) rapidly increasing trend followed by rapidly decreasing trend; (H) no meaningful trend, too much variability and missing data. Split-middle lines of progress have been added to Graphs C–F.

Trend lines can also be calculated using a mathematical formula called the ordinary least-squares linear regression equation (McCain & McCleary, 1979; Parsonson & Baer, 1978). Trend lines determined in this fashion have the advantage of complete reliability: The same trend line will always result from the same data set. The disadvantage of this method is the many mathematical operations that must be performed to calculate the trend line. A computer program that can perform the equation can eliminate the time concern in calculating a least-squares trend line.
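The ordinary least-squares computation mentioned above is straightforward to automate. This is a minimal sketch of the standard formula (slope = covariance of x and y divided by variance of x); the function name and example data are hypothetical.

```python
def least_squares_trend(xs, ys):
    """Ordinary least-squares trend line: returns (slope, intercept)
    for the line y = slope * x + intercept that best fits the data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

sessions = [1, 2, 3, 4, 5]
responses = [3, 5, 4, 6, 7]  # a gradually increasing trend
slope, intercept = least_squares_trend(sessions, responses)
print(round(slope, 2), round(intercept, 2))  # 0.9 2.3
```

Because the formula is deterministic, the same data always yield the same trend line, which is the reliability advantage the text describes.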

A method of calculating and drawing lines of progress that is more reliable than the freehand method and much less time-consuming than linear regression methods is the split-middle line of progress. The split-middle technique was developed by White (1971, 2005) for use with rate data plotted on semilogarithmic charts, and it has proven a useful technique for predicting future behavior from such data. Split-middle lines of progress can also be drawn for data plotted against an equal-interval vertical axis, but it must be remembered that such a line is only an estimate that summarizes the overall trend (Bailey, 1984). Figure 22 provides a step-by-step illustration of how to draw split-middle lines of progress. A trend line cannot be drawn by any method through a series of data points spanning a scale break in the vertical axis and generally should not be drawn across scale breaks in the horizontal axis.
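The graphical steps illustrated in Figure 22 can also be expressed numerically. The following is a sketch, under the assumption of measures taken at equally spaced sessions; the function name and data are hypothetical, and the vertical-shift step is implemented here as the median of the points' vertical distances from a line of the quarter-intersect slope, which yields the split-middle property (half the points on or above the line).

```python
from statistics import median

def split_middle(values):
    """A sketch of White's split-middle line of progress for measures
    taken at equally spaced sessions (x = 0, 1, 2, ...).

    Steps: divide the series in half; find each half's mid-date
    (median x) and mid-rate (median y); the quarter-intersect line
    passes through those two points; then slide the line vertically
    until half of the data points fall on or above it."""
    n = len(values)
    first, second = values[: n // 2], values[(n + 1) // 2:]

    def mid_point(points, offset):
        xs = list(range(offset, offset + len(points)))
        return median(xs), median(points)

    x1, y1 = mid_point(first, 0)
    x2, y2 = mid_point(second, (n + 1) // 2)
    slope = (y2 - y1) / (x2 - x1)

    # Split-middle adjustment: intercept at the median of the vertical
    # distances between each data point and a line of this slope.
    intercept = median(y - slope * x for x, y in enumerate(values))
    return slope, intercept

print(split_middle([2, 3, 5, 4, 6, 8]))  # (1.0, 2.0)
```

For data on a Standard Celeration Chart the same steps would be applied to the logarithms of the rates, since trend on a semilogarithmic chart is multiplicative.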

The specific degree of acceleration or deceleration of trends in data plotted on semilogarithmic charts can be quantified in numerical terms. For example, on the daily Standard Celeration Chart a “times-2” celeration means that the response rate is doubling each week, and a “times-1.25” celeration means that the response rate is increasing by one fourth each week. A “divide-by-2” celeration means that each week the response rate will be one half of what it was the week before, and a “divide-by-1.5” means that the frequency is decelerating by one third each week.
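A celeration value of this kind can be computed from two response rates measured some number of weeks apart; the weekly factor is the total ratio taken to the power of one over the number of weeks. The helper below is a hypothetical illustration of that arithmetic, not a Standard Celeration Chart tool.

```python
def weekly_celeration(rate_start, rate_end, weeks):
    """Factor by which the response rate multiplies (>1) or divides
    (<1) per week, given rates measured `weeks` weeks apart."""
    return (rate_end / rate_start) ** (1 / weeks)

# 10 to 80 responses per minute over 3 weeks: the rate doubles each
# week, a "times-2" celeration.
print(weekly_celeration(10, 80, 3))   # approximately 2.0

# 40 down to 20 responses per minute in 1 week: a "divide-by-2"
# celeration.
print(weekly_celeration(40, 20, 1))   # 0.5
```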

There is no direct way to determine visually from data plotted on equal-interval charts the specific rates at which trends are increasing or decreasing. But visual comparison of trend lines drawn through data on equal-interval charts can provide important information about the relative rates of behavior change.

Figure 22 How to draw a split-middle line of progress through a series of graphically displayed data points. (The figure's panels show the series divided into first and second halves, the mid-date and mid-rate located within each half, the quarter-intersect line of progress drawn through those intersections, and the resulting split-middle line of progress.)

Adapted from Exceptional Teaching, p. 118, by O. R. White and N. G. Haring, 1980, Columbus, OH: Charles E. Merrill. Copyright 1980 by Charles E. Merrill. Used by permission.



Figure 23 Inappropriate use of mean level lines, encouraging interpretation of a higher overall level of responding in Condition B when extreme variability (top graph) and trends (bottom graph) warrant different conclusions.

A trend may be highly stable with all of the data points falling on or near the trend line (see Figure 21, Graphs C and E). Data paths can also follow a trend even though a high degree of variability exists among the data points (see Figure 21, Graphs D and F).

Visual Analysis between Conditions

After inspection of the data within each condition or phase of a study, visual analysis proceeds with a comparison of data between conditions. Drawing proper conclusions entails comparing the previously discussed properties of behavioral data—level, trend, and stability/variability—between different conditions and among similar conditions.

A condition change line indicates that an independent variable was manipulated at a given point in time. To determine whether an immediate change in behavior occurred at that point in time, one needs to examine the difference between the last data point before the condition change line and the first data point in the new condition.

The data are also examined in terms of the overall level of performance between conditions. In general, when all data points in one condition fall outside the range of values for all data points in an adjacent condition (that is, there is no overlap of data points between the highest values obtained in one condition and the lowest values obtained in the other condition), there is little doubt that behavior changed from one condition to the next. When many data points in adjacent conditions overlap one another on the vertical axis, less confidence can be placed in the effect of the independent variable associated with the change in conditions.10

Those analyzing behavioral data should also note any changes in level that occur after a new condition has been in place for some time and any changes in level that occur early in a new condition but are later lost. Such delayed or temporary effects can indicate that the independent variable must be in place for some time before behavior changes, or that the temporary level change was the result of an uncontrolled variable. Either case calls for further investigation in an effort to isolate and control relevant variables.

Mean or median level lines can be helpful in examining the overall level between conditions. However, using mean or median level lines to summarize and compare the overall central tendency of data across conditions poses two serious problems. First, the viewer of such a visual display must guard against letting “apparently large differences among measures of central tendency visually overwhelm the presence of equally large amounts of uncontrolled variability” (Johnston & Pennypacker, 1980, p. 351). Emphasis on mean changes in performance in a graphic display can lead the viewer to believe that a greater degree of experimental control was obtained than is warranted by the data. In the top graph of Figure 23 half of the data points in Condition B fall within the range of values of the measures taken during Condition A, but the mean level lines suggest a clear change in behavior. Second, measures of central tendency can obscure important trends in the data that warrant interpretations other than those suggested by the central tendency indicators. Although a mean or median line accurately represents the average or typical performance, neither provides any indication of increasing or decreasing performance. In the bottom graph of Figure 23, for example, the mean line suggests a higher level of performance in Condition B than in Condition A, but an examination of trend yields a very different picture of behavior change within and between Conditions A and B.

10Whether a documented change in behavior should be interpreted as a function of the independent variable depends on the experimental design used in the study. Strategies and tactics for designing experiments are presented in chapters entitled “Analyzing Behavior Change: Basic Assumptions and Strategies” through “Planning and Evaluating Applied Behavior Analysis Research.”
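The between-conditions overlap check described in this section is simple to express in code. This is a hypothetical sketch: it tests only whether the ranges of values in two adjacent conditions overlap on the vertical axis, the heuristic the text uses for judging whether behavior clearly changed across the condition change line.

```python
def nonoverlapping(condition_a, condition_b):
    """True when no data point in either condition falls within the
    range of values of the adjacent condition, i.e., the two sets of
    measures do not overlap on the vertical axis."""
    return (min(condition_b) > max(condition_a)
            or max(condition_b) < min(condition_a))

baseline = [3, 5, 4, 6, 5]
treatment = [9, 11, 10, 12]
print(nonoverlapping(baseline, treatment))  # True
```

A True result corresponds to the "little doubt that behavior changed" case; overlapping ranges call for the more cautious interpretation the text describes, and, as footnote 10 notes, attributing the change to the independent variable still depends on the experimental design.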


Figure 24 Stylized data paths illustrating the different combinations of change or lack of change in level and trend between two adjacent conditions: Graphs A and B show no change in either level or trend between the two conditions, Graphs C and D show changes in level and no change in trend, Graphs E and F depict no immediate change in level and a change in trend, and Graphs G and H reveal change in both level and trend.

From “Time-Series Analysis in Operant Research” by R. R. Jones, R. S. Vaught, and M. R. Weinrott, 1977, Journal of Applied Behavior Analysis, 10, p. 157. Copyright 1977 by the Society for the Experimental Analysis of Behavior, Inc. Adapted by permission.

Visual analysis of data between adjacent conditions includes an examination of the trends exhibited by the data in each condition to determine whether the trend found in the first condition changed in direction or slope during the subsequent condition. In practice, because each data point in a series contributes to level and trend, the two characteristics are viewed in conjunction with one another. Figure 24 presents stylized data paths illustrating four basic combinations of change or lack of change in level and trend between adjacent conditions. Of course, many other data patterns could display the same characteristics. Idealized, straight-line data paths that eliminate the variability found in most repeated measures of behavior have been used to highlight level and trend.

Visual analysis includes not only an examination and comparison of changes in level and trend between adjacent conditions, but also an examination of performance across similar conditions. Interpreting what the data from an applied behavior analysis mean requires more than visual analysis and the identification and description of level, trend, and variability. When behavior change is demonstrated over the course of a treatment program or research study, the next question to be asked is, Was the change in behavior a function of the treatment or experimental variables?


Summary

1. Applied behavior analysts document and quantify behavior change by direct and repeated measurement of behavior; the product of those measurements is called data.

2. Graphs are relatively simple formats for visually displaying relationships among and between a series of measurements and relevant variables.

Purpose and Benefits of Graphic Display of Behavioral Data

3. Graphing each measure of behavior as it is collected provides the practitioner or researcher with an immediate and ongoing visual record of the participant's behavior, allowing treatment and experimental decisions to be responsive to the participant's performance.

4. Direct and continual contact with the data in a readily analyzable format enables the practitioner or researcher to identify and investigate interesting variations in behavior as they occur.

5. As a judgmental aid for interpreting experimental results, graphic display is a fast, relatively easy-to-learn method that imposes no arbitrary levels of significance for evaluating behavior change.

6. Visual analysis of graphed data is a conservative method for determining the significance of behavior change; only variables able to produce meaningful effects repeatedly are considered significant, and weak and unstable variables are screened out.

7. Graphs enable and encourage independent judgments of the meaning and significance of behavior change by others.

8. Graphs can serve as effective sources of feedback to the people whose behavior they represent.

Types of Graphs Used in Applied Behavior Analysis

9. Line graphs, the most commonly used format for the graphic display of behavioral data, are based on the Cartesian plane, a two-dimensional area formed by the intersection of two perpendicular lines.

10. Major parts of the simple line graph are the horizontal axis (also called the x axis), the vertical axis (also called the y axis), condition change lines, condition labels, data points, the data path, and the figure caption.

11. Graphs with multiple data paths on the same set of axes are used in applied behavior analysis to show (a) two or more dimensions of the same behavior, (b) two or more different behaviors, (c) the same behavior under different and alternating experimental conditions, (d) changes in target behavior relative to the changing values of an independent variable, and (e) the behavior of two or more participants.

12. A second vertical axis, which is drawn on the right-hand side of the horizontal axis, is sometimes used to show different scales for multiple data paths.

13. Bar graphs are used for two primary purposes: (a) to display discrete data not related by an underlying dimension that can be used to scale the horizontal axis and (b) to summarize and enable easy comparison of the performance of a participant or group of participants during the different conditions of an experiment.

14. Each data point on a cumulative record represents the total number of responses emitted by the subject since measurement began. The steeper the slope of the data path on a cumulative graph, the higher the response rate.

15. Overall response rate refers to the average rate of response over a given time period; a local response rate refers to the rate of response during a smaller period of time within a larger period for which an overall response rate has been given.
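The distinction between overall and local response rates can be sketched from a cumulative record. The helper names and the record below are hypothetical illustrations; the arithmetic simply divides responses emitted by time elapsed over the whole period versus a smaller window within it.

```python
def overall_rate(total_responses, total_minutes):
    """Average rate of response across the entire time period."""
    return total_responses / total_minutes

def local_rate(cumulative, start_min, end_min):
    """Rate of response during a smaller window, taken from a
    cumulative record mapping minute -> total responses so far."""
    return (cumulative[end_min] - cumulative[start_min]) / (end_min - start_min)

# Hypothetical cumulative record: total responses at selected minutes.
record = {0: 0, 10: 40, 20: 120, 30: 150}
print(overall_rate(record[30], 30))  # 5.0 responses per minute overall
print(local_rate(record, 10, 20))    # 8.0 responses per minute locally
```

The steeper segment of the record (minutes 10 to 20) yields the higher local rate, which is the slope-to-rate relationship stated in item 14.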

16. Cumulative records are especially effective for displaying data when (a) the total number of responses made over time is important, (b) the graph is used as a source of feedback to the subject, (c) the target behavior can occur only once per measurement period, and (d) a fine analysis of a single instance or portions of data from an experiment is desired.

17. Semilogarithmic charts use a logarithmic-scaled y axis so that changes in behavior that are of equal proportion (e.g., doublings of the response measure) are represented by equal distances on the vertical axis.

18. The Standard Celeration Chart is a six-cycle multiply-divide graph that enables the standardized charting of celeration, a linear measure of frequency change across time, a factor by which frequency multiplies or divides per unit of time.

19. A scatterplot shows the relative distribution of individual measures in a data set with respect to the variables depicted by the x and y axes.

Constructing Line Graphs

20. The vertical axis is drawn to a length approximately two thirds that of the horizontal axis.

21. The horizontal axis is marked off in equal intervals, each representing from left to right the chronological succession of equal time periods within which behavior was measured.

22. Discontinuities of time are indicated on the horizontal axis by scale breaks.

23. The vertical axis is scaled relative to the dimension of behavior measured, the range of values of the measures obtained, and the social significance of various levels of change in the target behavior.

24. Condition change lines indicate changes in the treatment program or manipulations of an independent variable and are drawn to the same height as the vertical axis.


25. A brief, descriptive label identifies each condition of an experiment or behavior change program.

26. Data points should be accurately placed with bold, solid dots. When multiple data paths are used, different geometric symbols are used to distinguish each data set.

27. Data paths are created by connecting successive data points with a straight line.

28. Successive data points should not be connected when (a) they fall on either side of a condition change line; (b) they span a significant period of time in which behavior was not measured; (c) they span discontinuities of time on the horizontal axis; (d) they fall on either side of a regularly scheduled measurement period in which data were not collected or were lost, destroyed, or otherwise not available; (e) they fall in a follow-up or postcheck period that is not regularly spaced in time in the same manner as the rest of the study; or (f) one member of the pair falls outside the range of values described by the vertical axis.

29. The figure caption provides a concise but complete description of the graph, giving all of the information needed to interpret the display.

30. Graphs should be printed in black ink only.

Interpreting Graphically Displayed Behavioral Data

31. Visual analysis of graphed data attempts to answer two questions: (a) did a socially meaningful change in behavior take place, and (b) if so, can the behavior change be attributed to the independent variable?

32. Before beginning to evaluate the data displayed in a graph, a careful examination of the graph's construction should be undertaken. If distortion is suspected from the features of the graph's construction, the data should be replotted on a new set of axes before interpretation is attempted.

33. Blocked data and data representing the average performance of a group of subjects should be viewed with the understanding that significant variability may have been lost in the display.

34. Visual analysis of data within a given condition focuses on the number of data points, the variability of performance, the level of performance, and the direction and degree of any trends in the data.

35. As a general rule, the more data in a condition and the greater the stability of those data, the more confidence one can place in the data path's estimate of behavior during that time. The more variability in the behavioral measures during a condition, the greater the need for additional data.

36. Variability refers to the frequency and degree to which multiple measures of behavior yield different outcomes. A high degree of variability within a given condition usually indicates that little or no control has been achieved over the factors influencing the behavior.

37. Level refers to the value on the vertical axis around which a series of data points converges. When the data in a given condition all fall at or near a specific level, the behavior is considered stable with respect to level; to the extent that the behavioral measures vary considerably from one to another, the data are described as showing variability with respect to level. In cases of extreme variability, no particular level of performance is evidenced.

38. Mean or median level lines are sometimes added to graphic displays to represent the overall average or typical performance during a condition. Mean and median level lines should be used and interpreted with care because they can obscure important variability and trends in the data.

39. Trend refers to the overall direction taken by a data path; trends are described in terms of their direction (increasing, decreasing, or zero trend), degree (gradual or steep), and the extent of variability of data points around the trend.

40. Trend direction and degree can be visually represented by drawing a trend line, or line of progress, through a series of data points. Trend lines can be drawn freehand, using the least-squares regression equation, or using a method called the split-middle line of progress. Split-middle lines of progress can be drawn quickly and reliably and have proven useful in analyzing behavior change.

41. Visual analysis of data across conditions determines whether change in level, variability, and/or trend occurred and to what extent any changes were significant.


Analyzing Behavior Change: Basic Assumptions and Strategies

Key Terms

A-B design, affirmation of the consequent, ascending baseline, baseline, baseline logic, confounding variable, dependent variable, descending baseline, experimental control, experimental design, experimental question, external validity, extraneous variable, independent variable, internal validity, parametric analysis, practice effects, prediction, replication, single-subject designs, stable baseline, steady state responding, steady state strategy, variable baseline, verification

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List©, Third Edition

Content Area 3: Principles, Processes, and Concepts

3–10 Define and provide examples of functional relations.

Content Area 5: Experimental Evaluation of Interventions

5–1 Systematically manipulate independent variables to analyze their effects on treatment.

5–2 Identify and address practical and ethical considerations in using various experimental designs.

5–4 Conduct a parametric analysis (i.e., determining effective parametric values of consequences, such as duration or magnitude).

© 2006 The Behavior Analyst Certification Board, Inc.,® (BACB®) all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

From Chapter 7 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.

Measurement can show whether, when, and how much behavior has changed, but measurement alone cannot reveal why, or more accurately how, behavior change occurred. A useful technology of behavior change requires an understanding of the specific arrangements of environmental variables that will produce desired behavior change. Without this knowledge, efforts to change behavior could only be considered one-shot affairs consisting of procedures selected randomly from a bag of tricks with little or no generality from one situation to the next.

The search for and demonstration of functional and reliable relations between socially important behavior and its controlling variables is a defining characteristic of applied behavior analysis. A major strength of applied behavior analysis is its insistence on experimentation as its method of proof, which enables and demands an ongoing, self-correcting search for effectiveness.

Our technology of behavior change is also a technology of behavior measurement and of experimental design; it developed as that package, and as long as it stays in that package, it is a self-evaluating enterprise. Its successes are successes of known magnitude; its failures are almost immediately detected as failures; and whatever its outcomes, they are attributable to known inputs and procedures rather than to chance events or coincidences. (D. M. Baer, personal communication, October 21, 1982)

An experimental analysis must be accomplished to determine if and how a given behavior functions in relation to specific changes in the environment. This chapter introduces the basic concepts and strategies that underlie the analysis in applied behavior analysis.1 The chapter begins with a brief review of some general conceptions of science, followed by a discussion of two defining features and two assumptions about the nature of behavior that dictate the experimental methods most conducive to the subject matter. The chapter then describes the necessary components of any experiment in applied behavior analysis and concludes by explaining the basic logic that guides the experimental methods used by applied behavior analysts.

1The analysis of behavior has benefited immensely from two particularly noteworthy contributions to the literature on experimental method: Sidman's Tactics of Scientific Research (1960/1988) and Johnston and Pennypacker's Strategies and Tactics of Human Behavioral Research (1980, 1993a). Both books are essential reading and working references for any serious student or practitioner of behavior analysis, and we acknowledge the significant part each has played in the preparation of this chapter.

Concepts and Assumptions Underlying the Analysis of Behavior

Scientists share a set of common perspectives that include assumptions about the nature of the phenomena they study (determinism), the kind of information that should be gathered on the phenomena of interest (empiricism), the way questions about the workings of nature are most effectively examined (experimentation), and how the results of experiments should be judged (with parsimony and philosophic doubt). These attitudes apply to all scientific disciplines, including the scientific study of behavior. “The basic characteristics of science are not restricted to any particular subject matter” (Skinner, 1953, p. 11).

The overall goal of science is to achieve an understanding of the phenomena under study—socially significant behavior, in the case of applied behavior analysis. Science enables various degrees of understanding at three levels: description, prediction, and control. First, systematic observation enhances the understanding of natural phenomena by enabling scientists to describe them accurately. Descriptive knowledge of this type yields a collection of facts about the observed events—facts that can be quantified and classified, a necessary and important element of any scientific discipline.

A second level of scientific understanding occurs when repeated observation discovers that two events consistently covary. That is, the occurrence of one event (e.g., marriage) is associated with the occurrence of another event at some reliable degree of probability (e.g., longer life expectancy). The systematic covariation between two events—termed a correlation—can be used to predict the probability that one event will occur based on the presence of the other event.

The ability to predict successfully is a useful result of science; prediction allows preparation. However, the greatest potential benefits of science are derived from the third, and highest, level of scientific understanding, which comes from establishing experimental control. “The experimental method is a method for isolating the relevant variables within a pattern of events. Methods that depend merely on observed correlations, without experimental intervention, are inherently ambiguous” (Dinsmoor, 2003, p. 152).

Experimental Control: The Path to and Goal of Behavior Analysis

Behavior is the interaction between an organism and its environment and is best analyzed by measuring changes in behavior that result from imposed variations on the environment. This statement embodies the general strategy and the goal of behavioral research: to demonstrate that measured changes in the target behavior occur because of experimentally manipulated changes in the environment.

Experimental control is achieved when a predictable change in behavior (the dependent variable) can be reliably produced by the systematic manipulation of some aspect of the person's environment (the independent variable). Experimentally determining the effects of environmental manipulation on behavior and demonstrating that those effects can be reliably produced constitute the analysis in applied behavior analysis. An analysis of a behavior has been achieved when a reliable functional relation between the behavior and some specified aspect of the environment has been demonstrated convincingly. Knowledge of functional relations enables the behavior analyst to reliably alter behavior in meaningful ways.

An analysis of behavior “requires a believable demonstration of the events that can be responsible for the occurrence or nonoccurrence of that behavior. An experimenter has achieved an analysis of a behavior when he can exercise control over it” (Baer, Wolf, & Risley, 1968, p. 94).2 Baer and colleagues' original definition of analysis highlights an important point. Behavior analysis's seeking and valuing of the experimental isolation of a given environmental variable of which a behavior is shown to be a function has often been misinterpreted as support for a simplistic conception of the causes of behavior. The fact that a behavior varies as a function of a given variable does not preclude its varying as a function of other variables. Thus, Baer and colleagues described an experimental analysis as a convincing demonstration that a variable can be responsible for the observed behavior change. Even though a complete analysis (i.e., understanding) of a behavior has not been achieved until all of its multiple causes have been accounted for, an applied (i.e., technologically useful) analysis has been accomplished when the investigator has isolated an environmental variable (or group of variables that operate together as a treatment package) that reliably produces socially significant behavior change. An applied analysis of behavior also requires that the target behavior be a function of an environmental event that can be practically and ethically manipulated.

Experiments that show convincingly that changes in behavior are a function of the independent variable and are not the result of uncontrolled or unknown variables are said to have a high degree of internal validity. A study without internal validity can yield no meaningful statements regarding functional relations between the variables examined in the experiment, nor can it be used as the basis for any statements regarding the generality of the findings to other persons, settings, and/or behaviors.3

2The researcher's audience ultimately determines whether a claimed functional relation is believable, or convincing. We will explore the believability of research findings further in the chapter entitled “Planning and Evaluating Applied Behavior Analysis Research.”

When initially planning an experiment and later when examining the actual data from an ongoing study, the investigator must always be on the lookout for threats to internal validity. Uncontrolled variables known or suspected to exert an influence on the dependent variable are called confounding variables. For example, suppose a researcher wants to analyze the effects of guided lecture notes on high school biology students' learning as measured by their scores on next-day quizzes. One potential confounding variable that the researcher would need to take into account would be each student's changing level of interest in and background knowledge about the specific curriculum content (e.g., a student's high score on a quiz following a lecture on sea life may be due to his prior knowledge about fishing, not the guided notes provided during that lecture).

A primary factor in evaluating the internal validity of an experiment is the extent to which it eliminates or controls the effects of confounding variables while still investigating the research questions of interest. It is impossible to eliminate all sources of uncontrolled variability in an experiment, although the researcher always strives for that ideal. In reality, the goal of experimental design is to eliminate as many uncontrolled variables as possible and to hold constant the influence of all other variables except the independent variable, which is purposefully manipulated to determine its effects.

Behavior: Defining Features and Assumptions that Guide Its Analysis

Behavior is a difficult subject matter, not because it is inaccessible, but because it is extremely complex. Since it is a process, rather than a thing, it cannot easily be held still for observation. It is changing, fluid, evanescent, and for this reason it makes great technical demands upon the ingenuity and energy of the scientist.

—B. F. Skinner (1953, p. 15)

How a science defines its subject matter exerts profound influence and imposes certain constraints on the experimental strategies that will be most effective in an understanding of it. “In order for the scientific study of behavior to be as effective as possible, it is necessary for the methods of the science to accommodate the characteristics of its subject matter” (Johnston & Pennypacker, 1993a, p. 117). The experimental methods of behavior analysis are guided by two defining features of behavior: (a) the fact that behavior is an individual phenomenon and (b) the fact that behavior is a continuous phenomenon; and by two assumptions about its nature: (a) that behavior is determined and (b) that behavioral variability is extrinsic to the organism.

3External validity commonly refers to the degree to which a study’s results are generalizable to other subjects, settings, and/or behaviors. Strategies for assessing and extending the external validity of experimentally demonstrated functional relations are discussed in the chapter entitled “Planning and Evaluating Applied Behavior Analysis Research.”

Analyzing Change: Basic Assumptions and Strategies

Behavior Is an Individual Phenomenon

If behavior is defined as a person’s interaction with the environment, it follows that a science seeking to discover general principles or laws that govern behavior must study the behavior of individuals. Groups of people do not behave; individual people do. Thus, the experimental strategy of behavior analysis is based on within-subject (or single-subject) methods of analysis.

The average performance of a group of individuals is often interesting and useful information and may, depending on the methods by which individuals were selected to be in the group, enable probability statements about the average performance within the larger population represented by the group. However, “group data” provide no information about the behavior of any individual or how any individual might perform in the future. For example, although administrators and taxpayers may be justifiably interested in the average increase in students’ reading comprehension from grade level to grade level, such information is of little use to the classroom teacher who must decide how to improve a given student’s comprehension skills.
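The point about group data can be made concrete with a small numerical sketch. The scores below are invented for illustration; they come from no study cited in this chapter:

```python
# Hypothetical weekly reading-comprehension scores for two students.
# One student improves steadily; the other declines just as steadily.
student_a = [50, 55, 60, 65, 70]
student_b = [70, 65, 60, 55, 50]

# The group mean is identical every week, suggesting "no change" . . .
group_mean = [(a + b) / 2 for a, b in zip(student_a, student_b)]
print(group_mean)  # [60.0, 60.0, 60.0, 60.0, 60.0]

# . . . yet neither individual's behavior is flat. A teacher planning
# instruction for student B would be badly misled by the group data.
```

The flat average is exactly the kind of “buried” variability Sidman warns about later in this section: the group datum describes no one in the group.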

Nonetheless, learning how behavior–environment relations work with many individuals is vital. A science of behavior contributes to a useful technology of behavior change only to the extent that it discovers functional relations with generality across individuals. The issue is how to achieve that generality. Behavior analysts have found that discovery of behavioral principles with generality across persons is best accomplished by replicating the already demonstrated functional relations with additional subjects.

Behavior Is a Dynamic, Continuous Phenomenon

Just as behavior cannot take place in an environmental void (it must happen somewhere), so must behavior occur at particular points in time. Behavior is not a static event; it takes place in and changes over time. Therefore, single measures, or even multiple measures sporadically dispersed over time, cannot provide an adequate description of behavior. Only continuous measurement over time yields a complete record of behavior as it occurs in context with its environmental influences. Because true continuous measurement is seldom feasible in applied settings, the systematic repeated measurement of behavior has become the hallmark of applied behavior analysis.

Behavior Is Determined

All scientists hold the assumption that the universe is a lawful and orderly place and that natural phenomena occur in relation to other natural events.

The touchstone of all scientific research is order. In the experimental analysis of behavior, the orderliness of relations between environmental variables and the subject’s behavior is at once the operating assumption upon which the experimenter proceeds, the observed fact that permits doing so, and the goal that continuously focuses experimental decisions. That is, the experimenter begins with the assumption that the subject’s behavior is the result of variables in the environment (as opposed to having no causes at all). (Johnston & Pennypacker, 1993a, p. 238)

In other words, the occurrence of any event is determined by the functional relations it holds with other events. Behavior analysts consider behavior to be a natural phenomenon that, like all natural phenomena, is determined. Although determinism must always remain an assumption—it cannot be proven—it is an assumption with strong empirical support.

Data gathered from all scientific fields indicate that determinism holds throughout nature. It has become clear that the law of determinism, that is, that all things are determined, holds for the behavioral area also. . . . When looking at actual behavior we’ve found that in situation 1, behavior is caused; in situation 2, behavior is caused; in situation 3, behavior is caused; . . . and in situation 1001, behavior is caused. Every time an experimenter introduces an independent variable that produces some behavior or some change in behavior, we have further empirical evidence that behavior is caused or deterministic. (Malott, General, & Snapper, 1973, pp. 170, 175)

Behavioral Variability Is Extrinsic to the Organism

When all conditions during a given phase of an experiment are held constant and repeated measures of the behavior result in a great deal of “bounce” in the data (i.e., the subject is not responding in a consistent fashion), the behavior is said to display variability.

The experimental approach most commonly used in psychology and other social and behavioral sciences (e.g., education, sociology, political science) makes two assumptions about such variability: (a) Behavioral variability is an intrinsic characteristic of the organism, and (b) behavioral variability is distributed randomly among individuals in any given population. These two assumptions have critical methodological implications: (a) Attempting to experimentally control or investigate variability is a waste of time—it simply exists, it’s a given; and (b) by averaging the performance of individual subjects within large groups, the random nature of variability can be statistically controlled or canceled out. Both of these assumptions about variability are likely false (empirical evidence points in the opposite direction), and the methods they encourage are detrimental to a science of behavior. “Variables are not canceled statistically. They are simply buried so their effects are not seen” (Sidman, 1960/1988, p. 162).4

Behavior analysts approach variability in their data quite differently. A fundamental assumption underlying the design and guiding the conduct of experiments in behavior analysis is that, rather than being an intrinsic characteristic of the organism, behavioral variability is the result of environmental influence: the independent variable with which the investigator seeks to produce change, some uncontrolled aspect of the experiment itself, and/or an uncontrolled or unknown factor outside of the experiment.

The assumption of extrinsic variability yields the following methodological implication: Instead of averaging the performance of many subjects in an attempt to mask variability (and as a result forfeiting the opportunity to understand and control it), the behavior analyst experimentally manipulates factors suspected of causing the variability. Searching for the causal factors contributes to the understanding of behavior, because experimental demonstration of a source of variability implies experimental control and thus another functional relation. In fact, “tracking down these answers may even turn out to be more rewarding than answering the original experimental question” (Johnston & Pennypacker, 1980, p. 226).

From a purely scientific viewpoint, experimentally tracking down sources of variability is always the preferred approach. However, the applied behavior analyst, with a problem to solve, must often take variability as it presents itself (Sidman, 1960/1988). Sometimes the applied researcher has neither the time nor the resources to experimentally manipulate even suspected and likely sources of variability (e.g., a teacher who interacts with a student for only part of the day has no hope of controlling the many variables outside the classroom). In most settings the applied behavior analyst seeks a treatment variable robust enough to overcome the variability induced by uncontrolled variables and produce the desired effects on the target behavior (Baer, 1977b).

4Some investigators use group comparison designs not just to cancel randomly distributed variability but also to produce results that they believe will have more external validity. Group comparison and within-subject experimental methods are compared in the chapter entitled “Planning and Evaluating Applied Behavior Analysis Research.”

Components of Experiments in Applied Behavior Analysis

Nature to be commanded must be obeyed. . . . But, that coin has another face. Once obeyed, nature can be commanded.

—B. F. Skinner (1956, p. 232)

Experimentation is the scientist’s way of discovering nature’s rules. Discoveries that prove valid and reliable can contribute to a technology of effective behavior change. All experiments in applied behavior analysis include these essential components:

• At least one participant (subject)

• At least one behavior (dependent variable)

• At least one setting

• A system for measuring the behavior and ongoing visual analysis of the data

• At least one treatment or intervention condition (independent variable)

• Manipulations of the independent variable so that its effects on the dependent variable, if any, can be detected (experimental design)

Because the reason for conducting any experiment is to learn something from nature, a well-planned experiment begins with a specific question for nature.

Experimental Question

We conduct experiments to find out something we do not know.

—Murray Sidman (1960/1988, p. 214)

For the applied behavior analyst, Sidman’s “something we do not know” is cast in the form of a question about the existence and/or specific nature of a functional relation between meaningful improvement in socially significant behavior and one or more of its controlling variables. An experimental question is “a brief but specific statement of what the researcher wants to learn from conducting the experiment” (Johnston & Pennypacker, 1993b, p. 366). In published reports of applied behavior analysis studies, the experimental (or research) question is sometimes stated explicitly in the form of a question, as in these examples:

• Which method of self-correction, after attempting each of 10 words or after attempting a list of 10 words, will produce better effects on (a) the acquisition of new spelling words as measured by end-of-the-week tests, and (b) the maintenance of practiced spelling words as measured by 1-week maintenance tests by elementary school students with learning disabilities? (Morton, Heward, & Alber, 1998)

• What are the effects of training middle school students with learning disabilities to recruit teacher attention in the special education classroom on (a) the number of recruiting responses they emit in the general education classroom, (b) the number of teacher praise statements received by the students in the general education classroom, (c) the number of instructional feedback statements received by the students in the general education classroom, and (d) the students’ academic productivity and accuracy in the general education classroom? (Alber, Heward, & Hippler, 1999, p. 255)

More often, however, the research question examined by the experiment is implicit within a statement of the study’s purpose. For example:

• The purpose of the present study was to compare the relative effectiveness of nonremoval of the spoon and physical guidance as treatments for food refusal and to assess the occurrence of corollary behaviors produced by each procedure. (Ahearn, Kerwin, Eicher, Shantz, & Swearingin, 1996, p. 322)

• The present study was conducted to determine if habit reversal is effective in treating verbal tics in children with Tourette syndrome. (Woods, Twohig, Flessner, & Roloff, 2003, p. 109)

• The purpose of this study was to determine if observed SIB during the tangible condition was confounded by the simultaneous delivery of therapist attention. (Moore, Mueller, Dubard, Roberts, & Sterling-Turner, 2002, p. 283)

• The purpose of this study was to determine whether naturally occurring meals would affect performance adversely during postmeal sessions in which highly preferred food was used as reinforcement. (Zhou, Iwata, & Shore, 2002, pp. 411–412)

Whether an experimental question is stated explicitly in the form of a question or implicit within a statement of purpose, all aspects of an experiment’s design and conduct should follow from it.

5It has become commonplace in the behavior analysis literature to refer to the person(s) whose behavior is the dependent variable in an experiment as a participant, instead of the more traditional term, subject. We use both terms in this text and urge readers to consider Sidman’s (2002) perspective on the issue: “[W]e are no longer permitted to call our subjects ‘subjects.’ The term is supposed to be dehumanizing, and so we are supposed to call them ‘participants.’ I think this is completely misguided. Experimenters, too, are participants in their experiments. What does making them nonparticipants do to our perception of science and of scientists? Are experimenters merely robots who follow prescribed and unbreakable scientific rules? Are they supposed just to manipulate variables and coldly record the results of their manipulations? Separating them as nonparticipating manipulators and recorders of the behavior of participants really dehumanizes not only experimenters but, along with them, the whole scientific process.” (p. 9)

6An experiment by Rindfuss, Al-Attrash, Morrison, and Heward (1998) provides a good example of the extent to which the term single-subject research can be a misnomer. A within-subject reversal design was used to evaluate the effects of response cards on the quiz and exam scores of 85 students in five eighth-grade American history classes. Although a large group of subjects participated in this study, it actually consisted of 85 individual experiments; or 1 experiment and 84 replications!

A good design is one that answers the question convincingly, and as such needs to be constructed in reaction to the question and then tested through arguments in that context (sometimes called “thinking through”), rather than imitated from a textbook. (Baer, Wolf, & Risley, 1987, p. 319)

Subject

Experiments in applied behavior analysis are most often referred to as single-subject (or single-case) designs. This is not because behavior analysis studies are necessarily conducted with only one subject (though some are), but because the experimental logic or reasoning for analyzing behavior changes often employs the subject as her own control.5 In other words, repeated measures of each subject’s behavior are obtained as she is exposed to each condition of the study (e.g., the presence and absence of the independent variable). A subject is often exposed to each condition several times over the course of an experiment. Measures of the subject’s behavior during each phase of the study provide the basis for comparing the effects of experimental variables as they are presented or withdrawn in subsequent conditions.

Although most applied behavior analysis studies involve more than one subject (four to eight is common), each subject’s data are graphed and analyzed separately.6

Instead of using single-subject design to refer to the experiments in which each subject serves as his or her own control, some authors use more aptly descriptive terms such as within-subject design or intra-subject design.
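The logic of within-subject comparison can be sketched in a few lines. The phase structure below is a generic reversal (ABAB) arrangement; the numbers and the `condition_means` helper are invented for illustration and come from no study cited here:

```python
# Hypothetical ABAB (reversal) data for one subject: repeated measures of
# responses per minute under alternating baseline (A) and treatment (B) phases.
phases = [
    ("A", [2, 3, 2, 3]),   # baseline
    ("B", [7, 8, 8, 9]),   # treatment introduced
    ("A", [3, 2, 3, 3]),   # treatment withdrawn
    ("B", [8, 9, 8, 8]),   # treatment reintroduced
]

def condition_means(phase_data):
    """Pool the repeated measures by condition and average them."""
    pooled = {}
    for label, values in phase_data:
        pooled.setdefault(label, []).extend(values)
    return {label: sum(v) / len(v) for label, v in pooled.items()}

print(condition_means(phases))  # {'A': 2.625, 'B': 8.125}
```

The comparison is entirely within this one subject: her A-phase responding serves as the control against which her B-phase responding is judged, with no reference to any other subject or group.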

Sometimes the behavior analyst is interested in assessing the total effect of a treatment variable within a group of subjects—for example, the number of homework assignments completed by members of a class of fifth-grade students. In such cases the total number of assignments completed may be measured, graphed, and analyzed as a dependent variable within a “single-subject” design. However, it must be remembered that unless each student’s data are individually graphed and interpreted, no individual student’s behavior has been analyzed, and the data for the group may not be representative of any individual subject.

Use of a single participant, or a small number of participants, each of whom is considered an intact experiment, stands in sharp contrast to the group comparison designs traditionally used in psychology and the other social sciences that employ large numbers of subjects.7

Proponents of group comparison designs believe that large numbers of subjects control for the variability discussed earlier and increase the generality (or external validity) of any findings to the population from which the subjects were selected. For now, we will leave this issue with Johnston and Pennypacker’s (1993b) astute observation:

When well done, the procedures of within-subject designs preserve the pure characteristics of behavior, uncontaminated by intersubject variability. In contrast, the best between groups design practices obfuscate the representation of behavior in various ways, particularly by mixing intersubject variability with treatment-induced variability. (p. 188)

Behavior: Dependent Variable

The target behavior in an applied behavior analysis experiment, or more precisely a measurable dimensional quantity of that behavior (e.g., rate, duration), is called the dependent variable. It is so labeled because the experiment is designed precisely to determine whether the behavior is, in fact, dependent on (i.e., a function of) the independent variable(s) manipulated by the investigator.

In some studies more than one behavior is measured. One reason for measuring multiple behaviors is to provide data patterns that can serve as controls for evaluating and replicating the effects of an independent variable as it is sequentially applied to each of the behaviors.8 A second reason for multiple dependent measures is to assess the presence and extent of the independent variable’s effects on behaviors other than the response class to which it was applied directly. This strategy is used to determine whether the independent variable had any collateral effects—either desired or undesired—on other behaviors of interest. Such behaviors are referred to as secondary dependent variables. The experimenter obtains regular measures of their rate of occurrence, though perhaps not with the same frequency with which measures of the primary dependent variable are recorded.

Still another reason for measuring multiple behaviors is to determine whether changes in the behavior of a person other than the subject occur during the course of an experiment and whether such changes might in turn explain observed changes in the subject’s behavior. This strategy is implemented primarily as a control strategy in assessing the effects of a suspected confounding variable: The extra behavior(s) measured are not true dependent variables in the sense of undergoing analysis. For example, in a classic study analyzing the effects of self-recording by a junior high school girl on her classroom study behavior, Broden, Hall, and Mitts (1971) observed and recorded the number of times the girl’s teacher paid attention to her throughout the experiment. If teacher attention had been found to covary with changes in study behavior, a functional relation between self-recording and study behavior would not have been demonstrated. In that case, teacher attention would likely have been identified as a potential confounding variable, and the focus of the investigation would likely have shifted to include efforts to experimentally control it (i.e., to hold teacher attention constant) or to systematically manipulate and analyze its effects. However, the data revealed no functional relation between teacher attention and study behavior during the first four phases of the experiment, when concern was highest that teacher attention may have been a confounding variable.

7For a history of single-case research, see Kennedy (2005).

8This is the distinguishing feature of the multiple baseline across behaviors design, an experimental tactic used widely in applied behavior analysis. Multiple baseline designs are presented in the chapter entitled “Multiple Baseline and Changing Criterion Designs.”

Setting

Control the environment and you will see order in behavior.

—B. F. Skinner (1967, p. 399)

Functional relations are demonstrated when observed variations in behavior can be attributed to specific operations imposed on the environment. Experimental control is achieved when a predictable change in behavior (the dependent variable) can be reliably and repeatedly produced by the systematic manipulation of some aspect of the subject’s environment (the independent variable). To make such attributions properly, the investigator must, among other things, control two sets of environmental variables. First, the investigator must control the independent variable by presenting it, withdrawing it, and/or varying its value. Second, the investigator must control, by holding constant, all other aspects of the experimental setting—extraneous variables—to prevent unplanned environmental variation. These two operations—precisely manipulating the independent variable and maintaining the constancy of every other relevant aspect of the experimental setting—define the second meaning of experimental control.

In basic laboratory research, experimental space is designed and furnished to maximize experimental control. Lighting, temperature, and sound, for example, are all held constant, and programmed apparatus virtually guarantee the presentation of antecedent stimuli and the delivery of consequences as planned. Applied behavior analysts, however, conduct their studies in the settings where socially important behaviors naturally occur—the classroom, home, and workplace. It is impossible to control every feature of an applied environment; and to add to the difficulty, subjects are typically in the experimental setting for only part of each day, bringing with them the influences of events and contingencies operating in other settings.

In spite of the complexity and ever-changing nature of applied settings, the behavior analyst must make every effort to hold constant all seemingly relevant aspects of the environment. When unplanned variations take place, the investigator must either wait out their effects or try to incorporate them into the design of the experiment. In any event, repeated measures of the subject’s behavior are the barometer for assessing whether unplanned environmental changes are of concern.

Applied studies are often conducted in more than one setting. Researchers sometimes use concurrent measures of the same behavior obtained in multiple settings as controls for analyzing the effects of an independent variable that is sequentially applied to the behavior in each setting.9 In addition, data are often collected in multiple settings to assess the extent to which behavior changes observed in the primary setting have also occurred in the other setting(s).

9This analytic tactic is known as a multiple baseline across settings design. Multiple baseline designs are presented in the chapter entitled “Multiple Baseline and Changing Criterion Designs.”

Measurement System and Ongoing Visual Analysis

Beginning students of behavior analysis sometimes believe that the discipline is preoccupied with issues and procedures related to the observation and measurement of behavior. They want to get on with the analysis. However, the results of any experiment can be presented and interpreted only in terms of what was measured, and the observation and recording procedures used in the study determine not only what was measured, but also how well it was measured (i.e., how representative of the subject’s actual behavior is the estimate provided by the experimental data—all measurements of behavior, no matter how frequent and technically precise, are estimates of true values). It is critical that observation and recording procedures be conducted in a completely standardized manner throughout each session of an experiment. Standardization involves every aspect of the measurement system, from the definition of the target behavior (dependent variable) to the scheduling of observations to the manner in which the raw data are transposed from recording sheets to session summary sheets to the way the data are graphed. An adventitious change in measurement tactics can result in unwanted variability or confounded treatment effects.

Advantages accrue to the behavioral researcher who maintains direct contact with the experimental data by ongoing visual inspection of graphic displays. The behavior analyst must become skilled at recognizing changes in level, trend, and degree of variability as these changes develop in the data. Because behavior is a continuous, dynamic phenomenon, experiments designed to discover its controlling variables must enable the investigator to inspect and respond to the data continuously as the study progresses. Only in this way can the behavior analyst be ready to manipulate features of the environment at the time and in the manner that will best reveal functional relations and minimize the effects of confounding variables.
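The three properties named above (level, trend, variability) can be computed as well as eyeballed. A minimal sketch, with the function name and the choice of range as the variability index being our own conventions rather than anything prescribed in the text:

```python
def describe_phase(ys):
    """Summarize a phase of repeated measures (two or more points) by:
    level       = mean of the data points,
    trend       = slope of an ordinary least-squares line through them,
    variability = range (max - min), a simple index of "bounce".
    """
    n = len(ys)
    xs = range(n)                       # sessions 0, 1, 2, ...
    mean_x = (n - 1) / 2
    mean_y = sum(ys) / n
    slope_num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope_den = sum((x - mean_x) ** 2 for x in xs)
    return {"level": mean_y,
            "trend": slope_num / slope_den,
            "variability": max(ys) - min(ys)}

# A steadily increasing, fairly stable series: level 5.4, trend 0.7 per
# session, range 3.
print(describe_phase([4, 5, 5, 6, 7]))
```

Such summaries supplement, but do not replace, continuous visual inspection of the graphed data as each session’s measure is added.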

Intervention or Treatment: Independent Variable

Behavior analysts seek reliable relations between behavior and the environmental variables of which it is a function. The particular aspect of the environment that the experimenter manipulates to find out whether it affects the subject’s behavior is called the independent variable. Sometimes called the intervention, treatment, or experimental variable, this component of an experiment is called the independent variable because the researcher can control or manipulate it independent of the subject’s behavior or any other event. (Though, as we will soon see, manipulating the independent variable without regard to what is happening with the dependent variable is unwise.) Whereas any changes that must be made in the experimental setting to conduct the study (e.g., the addition of observers to measure behavior) are made with the goal of minimizing their effects on the dependent variable, “changes in the independent variable are arranged by the experimenter in order to maximize . . . its influence on responding” (Johnston & Pennypacker, 1980, p. 260).

Manipulations of the Independent Variable: Experimental Design

Experimental design refers to the particular arrangement of conditions in a study so that meaningful comparisons of the effects of the presence, absence, or different values of the independent variable can be made. Independent variables can be introduced, withdrawn, increased or decreased in value, or combined across behaviors, settings, and/or subjects in an infinite number of ways.10 However, there are only two basic kinds of independent variable changes that can be made with respect to the behavior of a given subject in a given setting.

A new condition can be introduced or an old condition can be reintroduced. . . . Experimental designs are merely temporal arrangements of various new and old conditions across behaviors and settings in ways that produce data that are convincing to the investigator and the audience. (Johnston & Pennypacker, 1980, p. 270)

In the simplest case—from an analytic perspective, but not necessarily a practical point of view—an independent variable can be manipulated so that it is either present or absent during each time period or phase of the study. When the independent variable is in either of these conditions during a study, the experiment is termed a nonparametric study. In contrast, a parametric analysis seeks to discover the differential effects of a range of values of the independent variable. For example, Lerman, Kelley, Vorndran, Kuhn, and LaRue (2002) conducted a parametric study when they assessed the effects of different reinforcer magnitudes (i.e., 20 seconds, 60 seconds, or 300 seconds of access to toys or escape from demands) on the duration of postreinforcement pause and resistance to extinction. Parametric experiments are sometimes used because a functional relation may have more generality if it is based on several values of the independent variable.

10How many different experimental designs are there? Because an experiment’s design includes careful selection and consideration of all of the components discussed here (i.e., subject, setting, behavior, etc.), not counting the direct replication of experiments, one could say that there are as many experimental designs as there are experiments.

Sometimes the investigator is interested in comparing the effects of several treatment alternatives. In this case multiple independent variables become part of the experiment. For example, perhaps two separate treatments are evaluated as well as the effects of a third treatment, which represents a combination of both variables. However, even in experiments with multiple independent variables, the researcher must heed a simple but fundamental rule of experimentation: Change only one variable at a time. Only in this manner can the behavior analyst attribute any measured changes in behavior to a specific independent variable. If two or more variables are altered simultaneously and changes in the dependent variable are noted, no conclusions can be made with regard to the contribution of any one of the altered variables to the behavior change. If two variables changed together, both could have contributed equally to the resultant behavior change; one variable could have been solely, or mostly, responsible for the change; or one variable may have had a negative or counterproductive effect, but the other independent variable was sufficiently strong to overcome this effect, resulting in a net gain. Any of these explanations or combinations may have accounted for the change.

As stated previously, applied behavior analysts often conduct their experiments in “noisy” environments where effective treatments are required for reasons related to personal safety or exigent circumstances. In such cases, applied behavior analysts sometimes “package” multiple well-documented and effective treatments, knowing that multiple independent variables are being introduced. As implied earlier, a package intervention is one in which multiple independent variables are combined or bundled into one program (e.g., token reinforcement + praise + self-recording + time out). However, from the perspective of experimental analysis, the rule still holds: When manipulating a treatment package, the experimenter must ensure that the entire package is presented or withdrawn each time a manipulation occurs. In this situation, it is important to understand that the entire package is being evaluated, not the discrete components that make up the package. If at a later time the analyst wishes to determine the relative contributions of each part of the package, a component analysis would need to be carried out.

Analyzing Change: Basic Assumptions and Strategies

There are no off-the-shelf experimental designs available for a given research problem (Baer et al., 1987; Johnston & Pennypacker, 1980, 1993a; Sidman, 1960/1988; Skinner, 1956, 1966). The investigator must not get locked into textbook “designs” that (a) require a priori assumptions about the nature of the functional relations one seeks to investigate and (b) may be insensitive to unanticipated changes in behavior. Instead, the behavior analyst should select and combine experimental tactics that best fit the research question, while standing ever ready to “explore relevant variables by manipulating them in an improvised and rapidly changing design” (Skinner, 1966, p. 21).

Simultaneous with, and to a large degree responsible for, the growth and success of applied behavior analysis have been the development and refinement of a powerful group of extremely flexible experimental tactics for analyzing behavior–environment relations. However, to most effectively select, modify, and combine these tactics into convincing experiments, the behavior analyst must first fully understand the experimental reasoning, or logic, that provides the foundation for within-subject experimental comparisons.

Steady State Strategy and Baseline Logic

Steady or stable state responding—which “may be defined as a pattern of responding that exhibits relatively little variation in its measured dimensional quantities over a period of time” (Johnston & Pennypacker, 1993a, p. 199)—provides the basis for a powerful form of experimental reasoning commonly used in behavior analysis, called baseline logic. Baseline logic entails three elements—prediction, verification, and replication—each of which depends on an overall experimental approach called steady state strategy. Steady state strategy entails repeatedly exposing a subject to a given condition while trying to eliminate or control any extraneous influences on the behavior and obtaining a stable pattern of responding before introducing the next condition.

Nature and Function of Baseline Data

Behavior analysts discover behavior–environment relations by comparing data generated by repeated measures of a subject’s behavior under the different environmental conditions of the experiment. The most common method of evaluating the effects of a given variable is to impose it on an ongoing measure of behavior obtained in its absence. These original data serve as the baseline against which any observed changes in behavior when the independent variable is applied can be compared. A baseline serves as a control condition and does not necessarily mean the absence of instruction or treatment as such, only the absence of a specific independent variable of experimental interest.

Why Establish a Baseline?

From a purely scientific or analytic perspective, the primary purpose for establishing a baseline level of responding is to use the subject’s performance in the absence of the independent variable as an objective basis for detecting the effects of the independent variable when it is introduced in the future. However, obtaining baseline data can yield a number of applied benefits. For one, systematic observation of the target behavior before a treatment variable is introduced provides the opportunity to look for and note environmental events that occur just before and just after the behavior. Such empirically obtained descriptions of antecedent-behavior-consequent correlations are often invaluable in planning an effective intervention. For example, baseline observations revealing that a child’s disruptive outbursts are consistently followed by parent or teacher attention can be used in designing an intervention of ignoring outbursts and providing contingent attention following desired behavior.

Second, baseline data can provide valuable guidance in setting initial criteria for reinforcement, a particularly important step when a contingency is first put into effect. If the criteria are too high, the subject never comes into contact with the contingency; if they are too low, little or no improvement can be expected.

From a practical perspective, a third reason for collecting baseline data concerns the merits of objective measurement versus subjective opinion. Sometimes the results of systematic baseline measurement convince the behavior analyst or significant others to alter their perspectives on the necessity and value of attempting to change the behavior. For example, a behavior being considered for intervention because of several recent and extreme instances is no longer targeted because baseline data show it is decreasing. Or, perhaps a behavior’s topography attracted undue attention from teachers or parents, but objective baseline measurement over several days reveals that the behavior is not occurring at a frequency that warrants an intervention.

Types of Baseline Data Patterns

Examples of four data patterns sometimes generated by baseline measurement are shown in Figure 1. It must be stressed that these hypothetical baselines represent only four examples of the wide variety of baseline data patterns an experimenter or practitioner will encounter. The potential combinations of different levels, trends, and degrees of variability are, of course, infinite. Nevertheless, in an effort to provide guidance to the beginning behavior analyst, some general statements will be given about the experimental decisions that might be warranted by the data patterns shown in Figure 1.



Figure 1 Data patterns illustrating stable (A), ascending (B), descending (C), and variable (D) baselines.

Graph A shows a relatively stable baseline. The data show no evidence of an upward or downward trend, and all of the measures fall within a small range of values. A stable baseline provides the most desirable basis, or context, against which to look for effects of an independent variable. If changes in level, trend, and/or variability coincide with the introduction of an independent variable on a baseline as stable as that shown in Graph A, one can reasonably suspect that those changes may be related to the independent variable.

The data in Graphs B and C represent an ascending baseline and a descending baseline, respectively. The data path in Graph B shows an increasing trend in the behavior over time, whereas the data path in Graph C shows a decreasing trend. The applied behavior analyst must treat ascending and descending baseline data cautiously. By definition, dependent variables in applied behavior analysis are selected because they represent target behaviors that need to be changed. But ascending and descending baselines reveal behaviors currently in the process of changing. The effects of an independent variable introduced at this point are likely to be obscured or confounded by the variables responsible for the already-occurring change. But what if the applied investigator needs to change the behavior immediately? The applied perspective can help solve the dilemma.

Whether a treatment variable should be introduced depends on whether the trending baseline data represent improving or deteriorating performance. When an ascending or descending baseline represents behavior change in the therapeutically desired direction, the investigator should withhold treatment and continue to monitor the dependent variable under baseline conditions. When the behavior ceases to improve (as evidenced by stable responding) or begins to deteriorate, the independent variable can be applied. If the trend does not level off and the behavior continues to improve, the original problem may no longer be present, leaving no reason for introducing the treatment as planned (although the investigator might be motivated to isolate and analyze the variables responsible for the “spontaneous” improvement). Introducing an independent variable to an already-improving behavior makes it difficult, and often impossible, to claim any continued improvement as a function of the independent variable.

An ascending or descending baseline that represents significantly deteriorating performance signals an immediate application of the independent variable. From an applied perspective the decision to intervene is obvious: The subject’s behavior is deteriorating, and a treatment designed to improve it should be introduced. An independent variable capable of effecting desired behavior change in spite of other variables “pushing” the behavior in the opposite direction is most likely a robust variable, one that will be a welcome addition to the behavior analyst’s list of effective treatments. The decision to introduce a treatment variable on a deteriorating baseline is also a sound one from an analytic perspective, which will be discussed in the next section.

Figure 2 Solid data points represent actual measures of behavior that might be generated in a stable baseline; open data points within the box represent the level of responding that would be predicted on the basis of the obtained measures, should the environment remain constant.

Graph D in Figure 1 shows a highly unstable or variable baseline. The data in Graph D show just one of many possible patterns of unstable responding. The data points do not consistently fall within a narrow range of values, nor do they suggest any clear trend. Introducing the independent variable in the presence of such variability is unwise from an experimental standpoint. Variability is assumed to be the result of environmental variables, which in the case shown by Graph D, seem to be operating in an uncontrolled fashion. Before the researcher can analyze the effects of an independent variable effectively, these uncontrolled sources of variability must be isolated and controlled.
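The distinctions among these four baseline patterns can be sketched computationally. The following is a minimal illustration only, not a published decision rule; the trend and spread thresholds are arbitrary assumptions chosen for demonstration:

```python
# Illustrative classifier for the four baseline patterns in Figure 1.
# Thresholds are invented for demonstration, not analytic standards.

def classify_baseline(measures, trend_threshold=0.05, spread_threshold=0.3):
    """Label a series of baseline measures as stable, ascending,
    descending, or variable, based on its trend and variability."""
    n = len(measures)
    mean = sum(measures) / n
    x_mean = (n - 1) / 2

    # Least-squares slope of the data path over sessions
    num = sum((i - x_mean) * (y - mean) for i, y in enumerate(measures))
    den = sum((i - x_mean) ** 2 for i in range(n))
    slope = num / den

    # Spread of the residuals around the fitted trend line,
    # expressed relative to the mean level of responding
    residuals = [y - (mean + slope * (i - x_mean))
                 for i, y in enumerate(measures)]
    rel_spread = (max(residuals) - min(residuals)) / mean if mean else 0.0
    rel_slope = slope / mean if mean else 0.0

    if rel_spread > spread_threshold:
        return "variable"        # Graph D: wide, trendless scatter
    if rel_slope > trend_threshold:
        return "ascending"       # Graph B: increasing trend
    if rel_slope < -trend_threshold:
        return "descending"      # Graph C: decreasing trend
    return "stable"              # Graph A: narrow range, no trend

print(classify_baseline([10, 11, 10, 9, 10, 11]))  # stable
print(classify_baseline([3, 12, 5, 14, 4, 11]))    # variable
```

Real baselines rarely sort this cleanly, which is why the visual inspection and experimental judgment described in the text remain primary.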

Stable baseline responding provides an index of the degree of experimental control the researcher has established. Johnston and Pennypacker stressed this point in both editions of Strategies and Tactics of Human Behavioral Research:

If unacceptably variable responding occurs under baseline conditions, this is a statement that the researcher is probably not ready to introduce the treatment conditions, which involves adding an independent variable whose effects are in question. (1993a, p. 201)

These authors were more direct and blunt in the first edition of their text:

If sufficiently stable responding cannot be obtained, the experimenter is in no position to add an independent variable of suspected but unknown influence. To do so would be to compound confusion and lead to further ignorance. (1980, p. 229)

Again, however, applied considerations must be balanced against purely scientific pursuits. The applied problem may be one that cannot wait to be solved (e.g., severe self-injurious behavior). Or, confounding variables in the subject’s environment and the setting(s) of the investigation may simply be beyond the experimenter’s control.11 In such situations the independent variable is introduced with the hope of producing stable responding in its presence. Sidman (1960/1988) agreed that “the behavioral engineer must ordinarily take variability as he finds it, and deal with it as an unavoidable fact of life” (p. 192).

11The applied researcher must guard very carefully against assuming automatically that unwanted variability is a function of variables beyond his capability or resources to isolate and control, and thus fail to pursue the investigation of potentially important functional relations.

12“The above reference to ‘relatively constant environmental conditions’ means only that the experimenter is not knowingly producing uncontrolled variations in functionally related environmental events” (Johnston & Pennypacker, 1980, p. 228).

Prediction

Prediction can be defined as “the anticipated outcome of a presently unknown or future measurement. It is the most elegant use of quantification upon which validation of all scientific and technological activity rests” (Johnston & Pennypacker, 1980, p. 120). Figure 2 shows a series of hypothetical measures representing a stable pattern of baseline responding. The consistency of the first five data points in the series encourages the prediction that—if no changes occur in the subject’s environment—subsequent measures will fall within the range of values obtained thus far. Indeed, a sixth measure is taken that gives credence to this prediction. The same prediction is made again, this time with more confidence, and another measure of behavior shows it to be correct. Throughout a baseline (or any other experimental condition), an ongoing prediction is made and confirmed until the investigator has every reason to believe that the response measure will not change appreciably under the present conditions. The data within the shaded portion of Figure 2 represent unobtained but predicted measures of future responding under “relatively constant environmental conditions.”12 Given the stability of the obtained measures, few experienced scientists would quarrel with the prediction.


How many measures must be taken before an experimenter can use a series of data points to predict future behavior with confidence? Baer and colleagues (1968) recommended continuing baseline measurement until “its stability is clear.” Even though there are no set answers, some general statements can be made about the predictive power of steady states. All things being equal, many measurements are better than a few; and the longer the period of time in which stable responding is obtained, the better the predictive power of those measures. Also, if the experimenter is not sure whether measurement has produced stable responding, in all likelihood it has not, and more data should be collected before the independent variable is introduced. Finally, the investigator’s knowledge of the characteristics of the behavior being studied under constant conditions is invaluable in deciding when to terminate baseline measurement and introduce the independent variable. That knowledge can be drawn from personal experience in obtaining stable baselines on similar response classes and from familiarity with patterns of baseline responding found in the published literature.

It should be clear that guidelines such as “collect baseline data for at least five sessions” or “obtain baseline measures over two consecutive weeks” are misguided or naive. Depending on the situation, five data points obtained over one or two weeks of baseline conditions may or may not provide a convincing picture of steady state responding. The question that must be addressed is: Are the data sufficiently stable to serve as the basis for experimental comparison? This question can be answered only by ongoing prediction and confirmation using repeated measures in an environment in which all relevant conditions are held constant.
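One way to operationalize this ongoing prediction-and-confirmation process is a stability criterion applied to the most recent measures, continued until it is met rather than for a fixed number of sessions. The sketch below is purely illustrative; the five-session window and the ±10% band are assumptions for demonstration, not recommended standards:

```python
# Illustrative stability check: keep collecting baseline data until the
# most recent measures meet a (hypothetical) stability criterion.

def is_stable(measures, window=5, band=0.10):
    """True if the last `window` measures all fall within
    plus-or-minus `band` (a proportion) of their own mean."""
    if len(measures) < window:
        return False
    recent = measures[-window:]
    mean = sum(recent) / window
    return all(abs(y - mean) <= band * mean for y in recent)

# Simulated session-by-session baseline measures (invented data)
sessions = []
for observed in [14, 9, 12, 10, 10, 11, 10, 10]:
    sessions.append(observed)
    if is_stable(sessions):
        break  # steady state reached; ready to introduce the treatment

print(len(sessions))  # 8 baseline sessions were needed here
```

Note that with these data a fixed "five sessions" rule would have ended baseline while responding was still unacceptably variable.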

Behavior analysts are often interested in analyzing functional relations between an instructional variable and the acquisition of new skills. In such situations it is sometimes assumed that baseline measures are zero. For example, one would expect repeated observations of a child who has never tied her shoes to yield a perfectly stable baseline of zero correct responses. However, casual observations that have never shown a child to use a particular skill do not constitute a scientifically valid baseline and should not be used to justify any claims about the effects of instruction. It could be that if given repeated opportunities to respond, the child would begin to emit the target behavior at a nonzero rate. The term practice effects refers to improvements in performance resulting from repeated opportunities to emit the behavior so that baseline measurements can be obtained. For example, attempting to obtain stable baseline data for students performing arithmetic problems can result in improved levels of responding simply because of the repeated practice inherent in the measurement process. Practice effects confound a study, making it impossible to separate and account for the effects of practice and instruction on the student’s final performance. Repeated baseline measures should be used either to reveal the existence or to demonstrate the nonexistence of practice effects. When practice effects are suspected or found, baseline data collection should be continued until steady state responding is attained.

The necessity to demonstrate a stable baseline and to control for practice effects empirically does not require applied behavior analysts to withhold needed treatment or intervention. Nothing is gained by collecting unduly long baselines of behaviors that cannot reasonably be expected to be in the subject’s repertoire. For example, many behaviors cannot be emitted unless the subject is competent in certain prerequisite behaviors; there is no legitimate possibility of a child’s tying his shoes if he currently does not pick up the laces, or of a student’s solving division problems if she cannot subtract and multiply. Obtaining extended baseline data in such cases is unnecessary pro forma measurement. Such measures would “not so much represent zero behavior as zero opportunity for behavior to occur, and there is no need to document at the level of well-measured data that behavior does not occur when it cannot” (Horner & Baer, 1978, p. 190).

Fortunately, applied behavior analysts need neither abandon the use of steady state strategy nor repeatedly measure nonexistent behavior at the expense of beginning treatment. The multiple probe design is an experimental tactic that enables the use of steady state logic to analyze functional relations between instruction and the acquisition of behaviors shown to be nonexistent in the subject’s repertoire prior to the introduction of the independent variable.

Affirmation of the Consequent

The predictive power of steady state responding enables the behavior analyst to employ a kind of inductive logic known as affirmation of the consequent (Johnston & Pennypacker, 1980). When an experimenter introduces an independent variable on a stable baseline, an explicit assumption has been made: If the independent variable were not applied, the behavior, as indicated by the baseline data path, would not change. The experimenter is also predicting that (or more precisely, questioning whether) the independent variable will result in a change in the behavior.

The logical reasoning behind affirmation of the consequent begins with a true antecedent–consequent (if-A-then-B) statement and proceeds as follows:


1. If A is true, then B is true.

2. B is found to be true.

3. Therefore, A is true.

The behavior analyst’s version goes like this:

1. If the independent variable is a controlling factor for the behavior (A), then the data obtained in the presence of the independent variable will show that the behavior has changed (B).

2. When the independent variable is present, the data show that the behavior has changed (B is true).

3. Therefore, the independent variable is a controlling variable for the behavior (therefore, A is true).

The logic, of course, is flawed; factors other than the independent variable could be responsible for the truth of B. But, as will be shown, a successful (i.e., convincing) experiment affirms several if-A-then-B possibilities, each one reducing the likelihood of factors other than the independent variable being responsible for the observed changes in behavior.

Data shown in Figures 3 to 5 illustrate how prediction, verification, and replication are employed in a hypothetical experiment using the reversal design, one of the most common and powerful analytic tactics used by behavior analysts. Figure 3 shows a successful affirmation of the consequent. Steady state responding during baseline enabled the prediction that, if no changes were made in the environment, continued measurement would yield data similar to those in the shaded portion of the graph. The independent variable was then introduced, and repeated measures of the dependent variable during this treatment condition showed that the behavior did indeed change. This enables two comparisons, one real and one hypothetical. First, the real difference between the obtained measures in the presence of the independent variable and the baseline level of responding represents the extent of a possible effect of the independent variable and supports the prediction that treatment would change the behavior.

The second, hypothetical, comparison is between the data obtained in the treatment condition and the predicted measures had the treatment variable not been introduced (i.e., the open data points within the boxed area of Figure 3). This comparison represents the behavior analyst’s hypothetical approximation of the ideal but impossible-to-achieve experimental design: the simultaneous measurement and comparison of the behavior of an individual subject in both the presence and absence of the treatment variable (Risley, 1969).
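This hypothetical comparison can be sketched by treating the baseline range as the predicted envelope of future responding and asking whether the treatment-phase measures fall outside it. The data and the simple range-based band below are invented for illustration:

```python
# Illustrative sketch: compare treatment-phase measures against the
# range of responding predicted from a stable baseline. Data invented.

def outside_predicted_band(baseline, treatment):
    """Count treatment measures falling outside the baseline range,
    which here stands in for the predicted level of responding."""
    lo, hi = min(baseline), max(baseline)
    return sum(1 for y in treatment if y < lo or y > hi)

baseline  = [12, 14, 13, 12, 13, 14]   # stable pretreatment responding
treatment = [9, 7, 5, 4, 4, 3]         # responding after the variable is applied

n_out = outside_predicted_band(baseline, treatment)
print(f"{n_out} of {len(treatment)} treatment measures fall outside "
      f"the predicted baseline band")
```

Here every treatment measure falls below the baseline range, which is what a successful affirmation of the consequent looks like numerically; overlapping measures would weaken the comparison.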

Although the data in Figure 3 affirm the initial antecedent–consequent statement—a change in the behavior was observed in the presence of the independent variable—asserting a functional relation between the independent and dependent variables at this point is unwarranted. The experiment has not yet ruled out the possibility of other variables being responsible for the change in behavior. For example, perhaps some other event that is responsible for the change in behavior occurred at the same time that the independent variable was introduced.13

13Although two-phase experiments consisting of a pretreatment baseline condition followed by a treatment condition (called the A-B design) enable neither verification of the prediction of continued responding at baseline levels nor replication of the effects of the independent variable, studies using A-B designs can nevertheless contribute important and useful findings (e.g., Azrin & Wesolowski, 1974; Reid, Parsons, Phillips, & Green, 1993).

A firmer statement about the relation between the treatment and the behavior can be made at this point, however, if changes in the dependent variable are not observed in the presence of the independent variable. Assuming accurate measures of the behavior and a measurement system sensitive to changes in the behavior, no behavior change in the presence of the independent variable constitutes a disconfirmation of the consequent (B was shown not to be true), and the independent variable is eliminated as a controlling variable. However, eliminating a treatment from the ranks of controlling variables on the basis of no observed effects presupposes experimental control of the highest order (Johnston & Pennypacker, 1993a).


Figure 3 Affirmation of the consequent supporting the possibility of a functional relation between the behavior and treatment variable. The measures obtained in the presence of the treatment variable differ from the predicted level of responding in the absence of the treatment variable (open data points within boxed area).


However, in the situation illustrated in Figure 3, a change in behavior was observed in the presence of the independent variable, revealing a correlation between the independent variable and the behavior change. To what extent was the observed behavior change a function of the independent variable? To pursue this question, the behavior analyst employs the next component of baseline logic: verification.

Verification

The experimenter can increase the probability that an observed change in behavior was functionally related to the introduction of the independent variable by verifying the original prediction of unchanging baseline measures. Verification can be accomplished by demonstrating that the prior level of baseline responding would have remained unchanged had the independent variable not been introduced (Risley, 1969). If that can be demonstrated, this operation verifies the accuracy of the original prediction of continued stable baseline responding and reduces the probability that some uncontrolled (confounding) variable was responsible for the observed change in behavior. Again, the reasoning behind affirmation of the consequent is the logic that underlies the experimental strategy.

Figure 4 illustrates the verification of effect in our hypothetical experiment. When steady state responding has been established in the presence of the independent variable, the investigator removes the treatment variable, thereby returning to the previous baseline conditions. This tactic allows the possibility of affirming two different antecedent–consequent statements. The first statement and its affirmation follow this pattern:

1. If the independent variable is a controlling factor for the behavior (A), then its removal will coincide with changes in the response measure (B).

2. Removal of the independent variable is accompanied by changes in the behavior (B is true).

3. Therefore, the independent variable controls responding (therefore, A is true).

The second statement and affirmation follow this pattern:

1. If the original baseline condition controlled the behavior (A), then a return to baseline conditions will result in similar levels of responding (B).

2. The baseline condition is reinstated and levels of responding similar to those obtained during the original baseline phase are observed (B is true).

3. Therefore, the baseline condition controlled the behavior both then and now (therefore, A is true).

The six measures within the shaded area obtained during Baseline 2 of our hypothetical experiment in Figure 4 verify the prediction made for Baseline 1. The open data points in the shaded area in Baseline 2 represent the predicted level of responding if the independent variable had not been removed. (The prediction component of baseline logic applies to steady state responding obtained during any phase of an experiment, baseline and treatment conditions alike.) The difference between the data actually obtained during Treatment (solid data points) and the data obtained during Baseline 2 (solid data points) affirms the first if-A-then-B statement: If the treatment is a controlling variable, then its removal will result in changes in behavior. The similarity between measures obtained during Baseline 2 and those obtained during Baseline 1 confirms the second if-A-then-B statement: If baseline conditions controlled the behavior before, reinstating baseline conditions will result in similar levels of responding.

Figure 4 Verification of a previously predicted level of baseline responding by termination or withdrawal of the treatment variable. The measures obtained during Baseline 2 (solid data points within shaded area) show a successful verification and a second affirmation of the consequent based on a comparison with the predicted level of responding (open dots in Baseline 2) in the continued presence of the treatment variable.

Again, of course, the observed changes in behavior associated with the application and withdrawal of the independent variable are subject to interpretations other


than a claim of a functional relation between the two events. However, the case for the existence of a functional relation is becoming stronger. When the independent variable was applied, behavior change was observed; when the independent variable was withdrawn, behavior again changed and responding returned to baseline levels. To the extent that the experimenter effectively controls the presence and absence of the independent variable and holds constant all other variables in the experimental setting that might influence the behavior, a functional relation appears likely: An important behavior change has been produced and reversed by the introduction and withdrawal of the independent variable. The process of verification reduces the likelihood that a variable other than the independent variable was responsible for the observed behavior changes.

Does this two-step strategy of prediction and verification constitute sufficient demonstration of a functional relation? What if some uncontrolled variable covaried with the independent variable as it was presented and withdrawn, and this uncontrolled variable was actually responsible for the observed changes in behavior? If such was the case, claiming a functional relation between the target behavior and the independent variable would at best be inaccurate and at worst might end a search for the actual controlling variables, whose identification and control would contribute to an effective and reliable technology of behavior change.

The appropriately skeptical investigator (and research consumer) will also question the reliability of the obtained effect. How reliable is this verified behavior change? Was the apparent functional relation a fleeting, one-time-only phenomenon, or will repeated application of the independent variable reliably (i.e., consistently) produce a similar pattern of behavior change? An effective (i.e., convincing) experimental design yields data that are responsive to these important questions. To investigate uncertain reliability, the behavior analyst employs the final, and perhaps the most important, component of baseline logic and experimental design: replication.

14Replication also refers to the repeating of experiments to determine the reliability of functional relations found in previous experiments and the extent to which those findings can be extended to other subjects, settings, and/or behaviors (i.e., generality or external validity). The replication of experiments is examined in the chapter entitled “Planning and Evaluating Applied Behavior Analysis Research.”

Replication

Replication is the essence of believability.

—Baer, Wolf, and Risley (1968, p. 95)

Within the context of any given experiment, replication means repeating independent variable manipulations conducted previously in the study and obtaining similar outcomes.14 Replication within an experiment has two important purposes. First, replicating a previously observed behavior change reduces the probability that a variable other than the independent variable was responsible for the now twice-observed behavior change. Second, replication demonstrates the reliability of the behavior change; it can be made to happen again.
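The combined logic of prediction, verification, and replication in an A-B-A-B reversal experiment can be summarized numerically by comparing phase means. The sketch below is illustrative only; the phase data and the 20% change criterion are assumptions for demonstration, not analytic standards:

```python
# Illustrative summary of an A-B-A-B (reversal) experiment by phase
# means. Data and the `change` criterion are invented for demonstration.

def phase_mean(data):
    return sum(data) / len(data)

def reversal_pattern_present(a1, b1, a2, b2, change=0.20):
    """True if both treatment phases differ from both baselines by more
    than `change` (a proportion of the adjacent baseline mean)."""
    ma1, mb1, ma2, mb2 = map(phase_mean, (a1, b1, a2, b2))
    effect_1 = abs(mb1 - ma1) > change * ma1   # treatment changes behavior
    reversed_ = abs(mb1 - ma2) > change * ma2  # verification: behavior returns
    effect_2 = abs(mb2 - ma2) > change * ma2   # replication: effect recurs
    return effect_1 and reversed_ and effect_2

a1 = [20, 22, 21, 20]   # Baseline 1 (e.g., disruptions per session)
b1 = [12, 10, 9, 9]     # Treatment 1
a2 = [19, 21, 20, 20]   # Baseline 2: responding returns toward Baseline 1
b2 = [10, 9, 8, 9]      # Treatment 2: the effect is replicated

print(reversal_pattern_present(a1, b1, a2, b2))  # True
```

A mean comparison like this is no substitute for inspecting the level, trend, and variability of the data paths themselves, but it captures the skeleton of the reasoning: the effect appears, reverses, and appears again.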

Figure 5 adds the component of replication to our hypothetical experiment. After steady state responding was obtained during Baseline 2, the independent variable is reintroduced; this is the Treatment 2 phase. To the extent that the data obtained during the second application of the treatment (data points within area shaded with cross-hatched lines) resemble the level of responding observed during Treatment 1, replication has occurred. Our hypothetical experiment has now produced powerful evidence that a functional relation exists between the independent and the dependent variable. The extent to which one has confidence in the assertion of a functional relation rests on numerous factors, some of the most important of which are the accuracy and sensitivity of the measurement system, the degree of control the experimenter maintained over all relevant variables, the duration of experimental phases, the stability of responding within each phase, and the speed, magnitude, and consistency of behavior change between conditions. If each of these


Figure 5 Replication of experimental effect accomplished by reintroducing the treatment variable. The measures obtained during Treatment 2 (data points within area shaded with cross hatching) enhance the case for a functional relation between the treatment variable and the target behavior.

.


Summary

Introduction

1. Measurement can show whether and when behavior changes, but measurement alone cannot reveal how the change has come about.

2. Knowledge of specific functional relations between behavior and environment is necessary if a systematic and useful technology of behavior change is to develop.

3. An experimental analysis must be performed to determine how a given behavior functions in relation to specific environmental events.

Concepts and Assumptions Underlying the Analysis of Behavior

4. The overall goal of science is to achieve an understanding of the phenomena under study—socially important behaviors, in the case of applied behavior analysis.

5. Science produces understanding at three levels: description, prediction, and control.

6. Descriptive research yields a collection of facts about the observed events—facts that can be quantified and classified.

7. A correlation exists when two events systematically covary with one another. Predictions can be made about the probability that one event will occur based on the occurrence of the other event.

8. The greatest potential benefits of science are derived from the third, and highest, level of scientific understanding, which comes from establishing experimental control.

9. Experimental control is achieved when a predictable change in behavior (the dependent variable) can be reliably produced by the systematic manipulation of some aspect of the person’s environment (the independent variable).

10. A functional analysis does not eliminate the possibility that the behavior under investigation is also a function of other variables.

11. An experiment that shows convincingly that changes in behavior are a function of the independent variable and not the result of uncontrolled or unknown variables has internal validity.

12. External validity refers to the degree to which a study’s results are generalizable to other subjects, settings, and/or behaviors.

13. Confounding variables exert unknown or uncontrolled influences on the dependent variable.

14. Because behavior is an individual phenomenon, the experimental strategy of behavior analysis is based on within-subject (or single-subject) methods of analysis.

15. Because behavior is a continuous phenomenon that occurs in and changes through time, the repeated measurement of behavior is a hallmark of applied behavior analysis.

16. The assumption of determinism guides the methodology of behavior analysis.

17. Experimental methods in behavior analysis are based on the assumption that variability is extrinsic to the organism; that is, variability is imposed by environmental variables and is not an inherent trait of the organism.

18. Instead of masking variability by averaging the performance of many subjects, behavior analysts attempt to isolate and experimentally manipulate the environmental factors responsible for the variability.

Components of Experiments in Applied Behavior Analysis

19. The experimental question is a statement of what the researcher seeks to learn by conducting the experiment and should guide and be reflected in all aspects of the experiment’s design.

20. Experiments in applied behavior analysis are most often referred to as single-subject (or single-case) research designs because the experimental logic or reasoning for analyzing behavior change often employs the subject as her own control.

21. The dependent variable in an applied behavior analysis experiment is a measurable dimensional quantity of the target behavior.

22. Three major reasons behavior analysts use multiple-response measures (dependent variables) in some studies are (a) to provide additional data paths that serve as controls for evaluating and replicating the effects of an independent variable that is sequentially applied to each behavior, (b) to assess the generality of treatment effects to behaviors other than the response class to which the independent variable was applied, and (c) to determine whether changes in the behavior of a person other than the subject occur during the course of an experiment and

Analyzing Change: Basic Assumptions and Strategies

considerations is satisfied by the experimental design and is supported by the data as displayed within the design, then replication of effect becomes perhaps the most critical factor in claiming a functional relation.

An independent variable can be manipulated in an effort to replicate an effect many times within an experiment. The number of replications required to demonstrate a functional relation convincingly is related to many considerations, including all of those just enumerated, and to the existence of other similar experiments that have produced the same effects.


whether such changes might in turn explain observed changes in the subject’s behavior.

23. In addition to precise manipulation of the independent variable, the behavior analyst must hold constant all other aspects of the experimental setting—extraneous variables—to prevent unplanned environmental variation.

24. When unplanned events or variations occur in the experimental setting, the behavior analyst must either wait out their effects or incorporate them into the design of the experiment.

25. Observation and measurement procedures must be conducted in a standardized manner throughout an experiment.

26. Because behavior is a continuous and dynamic phenomenon, ongoing visual inspection of the data during the course of an experiment is necessary to identify changes in level, trend, and/or variability as they develop.

27. Changes in the independent variable are made in an effort to maximize its effect on the target behavior.

28. The term experimental design refers to the way the independent variable is manipulated in a study.

29. Although an infinite number of experimental designs are possible as a result of the many ways independent variables can be manipulated and combined, there are only two basic kinds of changes in independent variables: introducing a new condition or reintroducing an old condition.

30. A parametric study compares the differential effects of a range of different values of the independent variable.

31. The fundamental rule of experimental design is to change only one variable at a time.

32. Rather than follow rigid, pro forma experimental designs, the behavior analyst should select experimental tactics suited to the original research questions, while standing ready to “explore relevant variables by manipulating them in an improvised and rapidly changing design” (Skinner, 1966, p. 21).

Steady State Strategy and Baseline Logic

33. Stable, or steady state, responding enables the behavior analyst to employ a powerful form of inductive reasoning, sometimes called baseline logic. Baseline logic entails three elements: prediction, verification, and replication.

34. The most common method for evaluating the effects of a given variable is to impose it on an ongoing measure of behavior obtained in its absence. These preintervention data serve as the baseline by which to determine and evaluate any subsequent changes in behavior.

35. A baseline condition does not necessarily mean the absence of instruction or treatment per se, only the absence of the specific independent variable of experimental interest.

36. In addition to the primary purpose of establishing a baseline as an objective basis for evaluating the effects of the independent variable, three other reasons for baseline data collection are as follows: (a) Systematic observation of the target behavior prior to intervention sometimes yields information about antecedent-behavior-consequent correlations that may be useful in planning an effective intervention; (b) baseline data can provide valuable guidance in setting initial criteria for reinforcement; and (c) sometimes baseline data reveal that the behavior targeted for change does not warrant intervention.

37. Four types of baseline data patterns are stable, ascending, descending, and variable.

38. The independent variable should be introduced when stable baseline responding has been achieved.

39. The independent variable should not be introduced if either an ascending or descending baseline indicates improving performance.

40. The independent variable should be introduced if either an ascending or descending baseline indicates deteriorating performance.

41. The independent variable should not be imposed on a highly variable, unstable baseline.

42. Prediction of future behavior under relatively constant environmental conditions can be made on the basis of repeated measures of behavior showing little or no variation.

43. In general, given stable responding, the more data points there are and the longer the time period in which they were obtained, the more accurate the prediction will likely be.

44. Practice effects refer to improvements in performance resulting from opportunities to emit the behavior that must be provided to obtain repeated measures.

45. Extended baseline measurement is not necessary for behaviors that have no logical opportunity to occur.

46. The inductive reasoning called affirmation of the consequent lies at the heart of baseline logic.

47. Although the logic of affirming the consequent is not completely sound (some other event may have caused the change in behavior), an effective experimental design confirms several if-A-then-B possibilities, thereby eliminating certain other factors as responsible for the observed changes in behavior.

48. Verification of prediction is accomplished by demonstrating that the prior level of baseline responding would have remained unchanged if the independent variable had not been introduced.

49. Replication within an experiment means reproducing a previously observed behavior change by reintroducing the independent variable. Replication within an experiment reduces the probability that a variable other than the independent variable was responsible for the behavior change and demonstrates the reliability of the behavior change.


A-B-A design
A-B-A-B design
alternating treatments design
B-A-B design
DRI/DRA reversal technique
DRO reversal technique
irreversibility
multielement design
multiple treatment interference
multiple treatment reversal design
noncontingent reinforcement (NCR) reversal technique
reversal design
sequence effects
withdrawal design

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List©, Third Edition

Content Area 5: Experimental Evaluation of Interventions

5-1 Systematically manipulate independent variables to analyze their effects on treatment.

(a) Use withdrawal designs.

(b) Use reversal designs.

(c) Use alternating treatments (i.e., multielement, simultaneous treatment, multiple or concurrent schedule) designs.

5-2 Identify and address practical and ethical considerations in using various experimental designs.

© 2006 The Behavior Analyst Certification Board, Inc.® (BACB®) all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

Reversal and Alternating Treatments Designs

Key Terms

From Chapter 8 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


This chapter describes the reversal and alternating treatments designs, two types of experimental analysis tactics widely used by applied behavior analysts. In a reversal design, the effects of introducing, withdrawing (or “reversing” the focus of), and reintroducing an independent variable are observed on the target behavior. In an alternating treatments analysis, two or more experimental conditions are rapidly alternated, and the differential effects on behavior are noted. We explain how each design incorporates the three elements of steady state strategy—prediction, verification, and replication—and present representative examples illustrating the major variations of each. Considerations for selecting and using reversal and alternating treatments designs are also presented.

Reversal Design

An experiment using a reversal design entails repeated measures of behavior in a given setting that requires at least three consecutive phases: (a) an initial baseline phase in which the independent variable is absent, (b) an intervention phase during which the independent variable is introduced and remains in contact with the behavior, and (c) a return to baseline conditions accomplished by withdrawal of the independent variable. In the widely used notation system for describing experimental designs in applied behavior analysis, the capital letters A and B denote the first and second conditions, respectively, that are introduced in a study. Typically baseline (A) data are collected until steady state responding is achieved. Next, an intervention (B) condition is applied that signifies the presence of a treatment—the independent variable. An experiment entailing one reversal is described as an A-B-A design. Although studies using an A-B-A design are reported in the literature (e.g., Christle & Schuster, 2003; Geller, Paterson, & Talbott, 1982; Jacobson, Bushell, & Risley, 1969; Stitzer, Bigelow, Liebson, & Hawthorne, 1982), an A-B-A-B design is preferred because reintroducing the B condition enables the replication of treatment effects, which strengthens the demonstration of experimental control (see Figure 1).1
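The letter-notation convention just described (the first condition introduced is labeled A, the second B, and so on) can be sketched programmatically. This is an illustrative sketch, not anything from the text; the function name and data are invented.

```python
# Hypothetical sketch of the A, B, C... notation convention: each condition
# is assigned a letter in order of its first appearance in the study.

def design_notation(conditions):
    """Map each condition name to a letter (A, B, C, ...) by order of
    first appearance, and return the phase-sequence string."""
    letters = {}
    for name in conditions:
        if name not in letters:
            letters[name] = chr(ord("A") + len(letters))
    return "-".join(letters[name] for name in conditions)

phases = ["baseline", "treatment", "baseline", "treatment"]
print(design_notation(phases))  # A-B-A-B
```

The same function labels a multiple-treatment sequence such as baseline, music, video game, music, video game, baseline, video game, baseline, video game as A-B-C-B-C-A-C-A-C.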

1Some authors use the term withdrawal design to describe experiments based on an A-B-A-B analysis and reserve the term reversal design for studies in which the behavioral focus of the treatment variable is reversed (or switched to another behavior), as in the DRO and DRI/DRA reversal techniques described later in this chapter (e.g., Leitenberg, 1973; Poling, Methot, & LeSage, 1995). However, reversal design, as the term is used most often in the behavior analysis literature, encompasses both withdrawals and reversals of the independent variable, signifying the researcher’s attempt to demonstrate “behavioral reversibility” (Baer, Wolf, & Risley, 1968; Thompson & Iwata, 2005). Also, withdrawal design is sometimes used to describe an experiment in which the treatment variable(s) are sequentially or partially withdrawn after their effects have been analyzed in an effort to promote maintenance of the target behavior (Rusch & Kazdin, 1981).

The A-B-A-B reversal is the most straightforward and generally most powerful within-subject design for demonstrating a functional relation between an environmental manipulation and a behavior. When a functional relation is revealed with a reversal design, the data show how the behavior works.

As explanations go, the one offered by the reversal design was not at all a bad one. In answer to the question, “How does this response work?” we could point out demonstrably that it worked like so [e.g., see Figure 1]. Of course, it might also work in other ways; but, we would wait until we had seen the appropriate graphs before agreeing to any other way. (Baer, 1975, p. 19)

Baer’s point must not be overlooked: Showing that a behavior works in a predictable and reliable way in the presence and absence of a given variable provides only one answer to the question, How does this behavior work? There may be (and quite likely are) other controlling variables for the targeted response class. Whether additional experimentation is needed to explore those other possibilities depends on the social and scientific importance of obtaining a more complete analysis.

Operation and Logic of the Reversal Design

Risley (2005) described the rationale and operation of the reversal design as follows:

The reversal or ABAB design that Wolf reinvented from Claude Bernard’s early examples in experimental medicine entailed establishing a baseline of repeated quantified observations sufficient to see a trend and forecast that trend into the near future (A); to then alter conditions and see if the repeated observations become different than they were forecast to be (B); to then change back and see if the repeated observations return to confirm the original forecast (A); and finally, to reintroduce the altered conditions and see if the repeated observations again become different than forecast (B). (pp. 280–281)2

A brief review here of the roles of prediction, verification, and replication in the reversal design will suffice. Figure 2 shows the same data from Figure 1 with the addition of the open data points representing predicted measures of behavior if conditions in the previous phase had remained unchanged. After a stable pattern of responding, or a countertherapeutic trend, is obtained during Baseline 1, the independent variable is

2Risley (1997, 2005) credits Montrose Wolf with designing the first experiments using the reversal and multiple baseline designs. “The research methods that Wolf pioneered in these studies were groundbreaking. That methodology came to define applied behavior analysis” (pp. 280–281).


introduced. In our hypothetical experiment the measures obtained during Treatment 1, when compared with those from Baseline 1 and with the measures predicted by Baseline 1, show that behavior change occurred and that the change in behavior coincided with the intervention. After steady state responding is attained in Treatment 1, the independent variable is withdrawn and baseline conditions are reestablished. If the level of responding in Baseline 2 is the same as or closely approximates the measures obtained during Baseline 1, verification of the prediction made for Baseline 1 data is obtained. Stated otherwise, had the intervention not been introduced and had the initial baseline condition remained in effect, the predicted data path would have appeared as shown in Baseline 2. When withdrawal of the independent variable results in a reversal of the behavior change associated with its introduction, a strong case builds that the intervention is responsible for the observed behavior change. If reintroduction of the independent variable in Treatment 2 reproduces the behavior change observed during Treatment 1, replication of effect has been achieved, and a functional relation has been demonstrated. Again stated in other terms, had the intervention continued and had the second baseline condition not been introduced, the predicted data path of the treatment would have appeared as shown in Treatment 2.
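The forecast-and-compare reasoning in baseline logic can be caricatured in a few lines of code. This is a deliberately simplified sketch with invented data and an invented numeric threshold; in practice behavior analysts judge departures from the predicted data path by visual inspection of level, trend, and variability, not by a single rule.

```python
# Hypothetical sketch of baseline logic's forecast step: project the
# steady-state baseline level forward and ask whether treatment-phase
# observations depart from that forecast.

def departs_from_forecast(baseline, treatment, threshold=2.0):
    """True if the mean treatment level differs from the mean baseline
    level by more than `threshold` response units (an invented criterion)."""
    forecast = sum(baseline) / len(baseline)
    observed = sum(treatment) / len(treatment)
    return abs(observed - forecast) > threshold

stable_baseline = [9, 10, 9, 10, 10]   # steady state responding
treatment_phase = [6, 4, 3, 3, 2]      # lower responding after intervention
print(departs_from_forecast(stable_baseline, treatment_phase))  # True
```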

Romaniuk and colleagues (2002) provided an excellent example of the A-B-A-B design. Three students with developmental disabilities who frequently displayed problem behaviors (e.g., hitting, biting, whining, crying, getting out of seat, inappropriate gestures, noises, and comments) when given academic tasks participated in the study. Prior to the experiment a functional analysis had shown that each student’s problem behaviors were maintained by escape from working on the task (i.e., problem behavior occurred most often when followed by being allowed to take a break from the task). The researchers wanted to determine whether providing students with a choice of which task to work on would reduce the frequency of their problem behavior, even though problem behavior, when it occurred, would still result in a break. The experiment consisted of two conditions: no choice (A) and choice (B). The same set of teacher-nominated tasks was used in both conditions.

Each session during the no-choice condition began with the experimenter providing the student with a task and saying, “This is the assignment you will be working on today” or “It’s time to work on _____” (p. 353). During the choice condition (B), the experimenter placed the materials for four to six tasks on the table before the student and said, “Which assignment would you like to work on today?” (p. 353). The student was also told that he or she could switch tasks at any time during the session by requesting to do so. Occurrences of problem behavior in both conditions resulted in the experimenter stating, “You can take a break now” and giving a 10-second break.

Figure 2 Illustration of A-B-A-B reversal design. Open data points represent data predicted if conditions from previous phase remained in effect. Data collected during Baseline 2 (within shaded box) verify the prediction from Baseline 1. Treatment 2 data (cross-hatched shading) replicate the experimental effect. (Phases: Baseline 1, Treatment 1, Baseline 2, Treatment 2; y-axis: Behavior; x-axis: Time.)

Figure 1 Graphic prototype of the A-B-A-B reversal design. (Phases: Baseline 1, Treatment 1, Baseline 2, Treatment 2; y-axis: Behavior; x-axis: Time.)

Figure 3 shows the results of the experiment. The data reveal a clear functional relation between the opportunity to choose which tasks to work on and reduced occurrence of problem behavior by all three students. The percentage of session time in which each student exhibited problem behavior (reported as a total duration measure obtained by recording the second of onset and offset as shown by the VCR timer) decreased sharply from the no-choice (baseline) levels when the choice condition was implemented, returned (reversed) to baseline levels when choice was withdrawn, and decreased again when choice was reinstated. The A-B-A-B design enabled Romaniuk and colleagues to conduct a straightforward, unambiguous demonstration that significant reductions in problem behavior exhibited by each student were a function of being given a choice of tasks.
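The total duration measure described above can be sketched as a small computation. The function and the values below are hypothetical; only the idea, summing onset-to-offset durations and dividing by session length, comes from the passage.

```python
# Hypothetical sketch of a total duration measure: percentage of session
# time occupied by problem behavior, computed from recorded onset/offset
# times in seconds. The episode times here are invented for illustration.

def percent_session_time(episodes, session_seconds):
    """episodes: list of (onset, offset) pairs, in seconds from session start."""
    total = sum(offset - onset for onset, offset in episodes)
    return 100.0 * total / session_seconds

# A 600-s session with three episodes totaling 90 s of problem behavior.
print(percent_session_time([(12, 40), (100, 150), (300, 312)], 600))  # 15.0
```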

In the 1960s and early 1970s applied behavior analysts relied almost exclusively on the A-B-A-B reversal design. The straightforward A-B-A-B design played such a dominant role in the early years of applied behavior analysis that it came to symbolize the field (Baer, 1975). This was no doubt due, at least in part, to the reversal design’s ability to expose variables for what they are—strong and reliable or weak and unstable. Another reason for the reversal design’s dominance may have been that few alternative analytic tactics were available at that time that effectively combined the intrasubject experimental elements of prediction, verification, and replication. Although the reversal design is just one of many experimental designs available to applied behavior analysts today, the simple, unadorned A-B-A-B design continues to play a major role in the behavior analysis literature (e.g., Anderson & Long, 2002; Ashbaugh & Peck, 1998; Cowdery, Iwata, & Pace, 1990;

Figure 3 An A-B-A-B reversal design. Percentage of session time with problem behavior across no-choice (No Ch) and choice (Ch) conditions for Brooke, Maggie, and Gary (y-axis: Problem Behavior, % of Session Time; x-axis: Sessions). From “The Influence of Activity Choice on Problem Behaviors Maintained by Escape versus Attention” by C. Romaniuk, R. Miltenberger, C. Conyers, N. Jenner, M. Jurgens, and C. Ringenberg, 2002, Journal of Applied Behavior Analysis, 35, p. 357. Copyright 2002 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.


Deaver, Miltenberger, & Stricker, 2001; Gardner, Heward, & Grossi, 1994; Levendoski & Cartledge, 2000; Lindberg, Iwata, Kahng, & DeLeon, 1999; Mazaleski, Iwata, Rodgers, Vollmer, & Zarcone, 1994; Taylor & Alber, 2003; Umbreit, Lane, & Dejud, 2004).

Variations of the A-B-A-B Design

Many applied behavior analysis studies use variations or extensions of the A-B-A-B design.

Repeated Reversals

Perhaps the most obvious variation of the A-B-A-B reversal design is a simple extension in which the independent variable is withdrawn and reintroduced a second time; A-B-A-B-A-B (see the graph for Maggie in Figure 3). Each additional presentation and withdrawal that reproduces the previously observed effects on behavior increases the likelihood that the behavior changes are the result of manipulating the independent variable. All other things being equal, an experiment that incorporates multiple reversals presents a more convincing and compelling demonstration of a functional relation than does an experiment with one reversal (e.g., Fisher, Lindauer, Alterson, & Thompson, 1998; Steege et al., 1990). That said, it is also possible to reach a point of redundancy beyond which the findings of a given analysis are no longer enhanced significantly by additional reversals.

B-A-B Design

The B-A-B design begins with the application of the independent variable: the treatment. After stable responding has been achieved during the initial treatment phase (B), the independent variable is withdrawn. If the behavior worsens in the absence of the independent variable (the A condition), the treatment variable is reintroduced in an attempt to recapture the level of responding obtained during the first treatment phase, which would verify the prediction based on the data path obtained during the initial treatment phase.

Compared to the A-B-A design, the B-A-B design is preferable from an applied sense in that the study ends with the treatment variable in effect. However, in terms of demonstrating a functional relation between the independent variable and dependent variable, the B-A-B design is the weaker of the two because it does not enable an assessment of the effects of the independent variable on the preintervention level of responding. The nonintervention (A) condition in a B-A-B design cannot verify a prediction of a previous nonexistent baseline. This weakness can be remedied by withdrawing and then reintroducing the independent variable, as in a B-A-B-A-B design (e.g., Dixon, Benedict, & Larson, 2001).

Because the B-A-B design provides no data to determine whether the measures of behavior taken during the A condition represent preintervention performance, sequence effects cannot be ruled out: The level of behavior observed during the A condition may have been influenced by the fact that the treatment condition preceded it. Nevertheless, there are exigent situations in which initial baseline data cannot be collected. For instance, the B-A-B design may be appropriate with target behaviors that result in physical harm or danger to the participant or to others. In such instances, withholding a possibly effective treatment until a stable pattern of baseline responding can be obtained may present ethical problems. For example, Murphy, Ruprecht, Baggio, and Nunes (1979) used a B-A-B design to evaluate the effectiveness of mild punishment combined with reinforcement on the number of self-choking responses by a 24-year-old man with profound mental retardation. After the treatment was in effect for 24 sessions, it was withdrawn for three sessions, during which an immediate and large increase in self-choking was recorded (see Figure 4). Reintroduction of the treatment package reproduced behavior levels noted during the first treatment phase. The average number of self-chokes during each phase of the B-A-B study was 22, 265, and 24, respectively.
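A per-phase summary like the one just reported (means of 22, 265, and 24 across the B-A-B phases) can be sketched as follows. The session values below are invented for illustration and are not the study's data; only the idea of averaging a response count within each phase comes from the passage.

```python
# Hypothetical sketch: summarizing a B-A-B dataset by phase means.
# The (phase, count) session values are invented for illustration.

def phase_means(sessions):
    """sessions: list of (phase_label, count) pairs in session order.
    Returns (phase_label, mean) pairs in order of first appearance."""
    by_phase, order = {}, []
    for phase, count in sessions:
        if phase not in by_phase:
            by_phase[phase] = []
            order.append(phase)
        by_phase[phase].append(count)
    return [(p, sum(by_phase[p]) / len(by_phase[p])) for p in order]

data = [("B1", 20), ("B1", 24), ("A", 260), ("A", 270), ("B2", 22), ("B2", 26)]
print(phase_means(data))  # [('B1', 22.0), ('A', 265.0), ('B2', 24.0)]
```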

Despite the impressive reduction of behavior, the results of Murphy and colleagues’ study using a B-A-B design may have been enhanced by gathering and reporting objectively measured data on the level of behavior prior to the first intervention. Presumably, Murphy and colleagues chose not to collect an initial baseline for ethical and practical reasons. They reported anecdotally that self-chokes averaged 434 per day immediately prior to their intervention when school staff had used a different procedure to reduce the self-injurious behavior. This anecdotal information increased the believability of the functional relation suggested by the experimental data from the B-A-B design.

At least two other situations exist in which a B-A-B design might be warranted instead of the more conventional A-B-A-B design. These include (a) when a treatment is already in place (e.g., Marholin, Touchette, & Stuart, 1979; Pace & Troyer, 2000) and (b) when the behavior analyst has limited time in which to demonstrate practical and socially significant results. For instance, Robinson, Newby, and Ganzell (1981) were asked to develop a behavior management system for a class of 18 hyperactive boys with the stipulation that the program’s effectiveness be demonstrated within 4 weeks. Given “the stipulation of success in 4 weeks, a B-A-B design was used” (pp. 310–311).


Figure 4 A B-A-B reversal design. Number of self-choking responses per session across Treatment, Reversal, and Treatment phases (y-axis: Number of Self-Choking Responses; x-axis: Sessions). From “The Use of Mild Punishment in Combination with Reinforcement of Alternate Behaviors to Reduce the Self-Injurious Behavior of a Profoundly Retarded Individual” by R. J. Murphy, M. J. Ruprecht, P. Baggio, and D. L. Nunes, 1979, AAESPH Review, 4, p. 191. Copyright 1979 by the AAESPH Review. Reprinted by permission.

Multiple Treatment Reversal Designs

Experiments that use the reversal design to compare the effects of two or more experimental conditions to baseline and/or to one another are said to use a multiple treatment reversal design. The letters C, D, and so on, denote additional conditions, as in the A-B-C-A-C-B-C design used by Falcomata, Roane, Hovanetz, Kettering, and Keeney (2004); the A-B-A-B-C-B-C design used by Freeland and Noell (1999); the A-B-C-B-C-B-C design used by Lerman, Kelley, Vorndran, Kuhn, and LaRue (2002); the A-B-A-C-A-D-A-C-A-D design used by Weeks and Gaylord-Ross (1981); and the A-B-A-B-B+C-B-B+C design of Jason and Liotta (1982). As a whole, these designs are considered variations of the reversal design because they embody the experimental method and logic of the reversal tactic: Responding in each phase provides baseline (or control condition) data for the subsequent phase (prediction), independent variables are withdrawn in an attempt to reproduce levels of behavior observed in a previous condition (verification), and each independent variable that contributes fully to the analysis is introduced at least twice (replication). Independent variables can be introduced, withdrawn, changed in value, combined, and otherwise manipulated to produce an endless variety of experimental designs.
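The replication requirement just noted (each fully contributing independent variable is introduced at least twice) can be checked mechanically for a design written in letter notation. This is an illustrative sketch; the function name and examples are invented.

```python
# Hypothetical sketch: which conditions in a letter-notation design string
# are introduced at least twice, i.e., satisfy the replication requirement?

from collections import Counter

def replicated_conditions(design):
    """Return the conditions appearing at least twice in a design string
    such as 'A-B-C-B-C-A-C-A-C'."""
    counts = Counter(design.split("-"))
    return sorted(c for c, n in counts.items() if n >= 2)

print(replicated_conditions("A-B-C-B-C-A-C-A-C"))  # ['A', 'B', 'C']
print(replicated_conditions("A-B-C"))              # []
```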

For example, Kennedy and Souza (1995) used an A-B-C-B-C-A-C-A-C design to analyze and compare the effects of two kinds of competing sources of stimulation on eye poking by a 19-year-old student with profound disabilities. Geoff had a 12-year history of poking his forefinger into his eyes during periods of inactivity, such as after lunch or while waiting for the bus. The two treatment conditions were music (B) and a video game (C). During the music condition, Geoff was given a Sony Walkman radio with headphones. The radio was tuned to a station that his teacher and family thought he preferred. Geoff had continuous access to the music during this condition, and he could remove the headphones at any time. During the video game condition, Geoff was given a small handheld video game on which he could observe a variety of visual patterns and images on the screen with no sound. As with the music condition, Geoff had continuous access to the video game and could discontinue using it at any time.

Figure 5 shows the results of the study. Following an initial baseline phase (A) in which Geoff averaged 4 eye pokes per hour, the music condition (B) was introduced and eye pokes decreased to a mean of 2.8 per hour. The video game (C) was implemented next, and eye pokes decreased further to 1.1 per hour. Measures obtained during the next two phases—a reintroduction of music (B) followed by a second phase of the video game (C)—replicated previous levels of responding under each condition. This B-C-B-C portion of the experiment


Reversal and Alternating Treatments Designs

[Figure 5 graph: number of eye-pokes per hour across school days, under Baseline (BL), Music, and Video Game conditions]

Figure 5 Example of multiple-treatment reversal design (A-B-C-B-C-A-C-A-C). From "Functional Analysis and Treatment of Eye Poking" by C. H. Kennedy and G. Souza, 1995, Journal of Applied Behavior Analysis, 28, p. 33. Copyright 1995 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

revealed a functional relation between the video game condition and a lower frequency of eye pokes compared to music. The final five phases of the experiment (C-A-C-A-C) provided an experimental comparison of the video game and baseline (no-treatment) conditions.

In most instances, extended designs involving multiple independent variables are not preplanned. Instead of following a predetermined, rigid structure that dictates when and how experimental manipulations must be made, the applied behavior analyst makes design decisions based on ongoing assessments of the data.

In this sense, a single experiment may be viewed as a number of successive designs that are collectively necessary to clarify relations between independent and dependent variables. Thus, some design decisions might be made in response to the data unfolding as the investigation progresses. This sense of design encourages the experimenter to pursue in more dynamic fashion the solutions to problems of experimental control immediately upon their emergence. (Johnston & Pennypacker, 1980, pp. 250–251)

Students of applied behavior analysis should not interpret this description of experimental design as a recommendation for a completely free-form approach to the manipulation of independent variables. The researcher must always pay close attention to the rule of changing only one variable at a time and must understand the opportunities for legitimate comparisons and the limitations that a given sequence of manipulations places on the conclusions that can be drawn from the results.

Experiments that use the reversal design to compare two or more treatments are vulnerable to confounding by sequence effects. Sequence effects are the effects on a subject's behavior in a given condition that are the result of the subject's experience with a prior condition. For example, caution must be used in interpreting the results from the A-B-C-B-C design that results from the following fairly common sequence of events in practice: After baseline (A), an initial treatment (B) is implemented and little or no behavioral improvement is noted. A second treatment (C) is then tried, and the behavior improves. A reversal is then conducted by reintroducing the first treatment (B), followed by reinstatement of the second treatment (C) (e.g., Foxx & Shapiro, 1978). In this case, we can only speak knowingly about the effects of C when it follows B. Recapturing the original baseline levels of responding before introducing the second treatment condition (i.e., an A-B-A-C-A-C sequence) reduces the threat of sequence effects (or helps to expose them for what they are).

An A-B-A-B-C-B-C design, for instance, enables direct comparisons of B to A and C to B, but not of C to A. An experimental design consisting of A-B-A-B-B+C-B-B+C (e.g., Jason & Liotta, 1982) permits an evaluation of the additive or interactive effects of B+C, but does not reveal the independent contribution of C. And in both of these examples, it is impossible to determine what effects, if any, C may have had on the behavior if it had been implemented prior to B. Manipulating each condition so that it precedes and follows every other condition in the experiment (e.g., A-B-A-B-C-B-C-A-C-A-C) is the only way to know for sure. However, manipulating multiple conditions requires a large amount of time and resources, and such extended designs become more susceptible to confounding by maturation and other historical variables not controlled by the experimenter.
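The constraint just described can be made concrete with a small sketch (ours, not the authors'): given a sequence of experimental conditions, only adjacently occurring conditions permit direct comparison, so listing the adjacent pairs shows which comparisons a particular multiple treatment reversal design supports. The function name is hypothetical.

```python
def direct_comparisons(sequence):
    """Return the unordered pairs of conditions that occur adjacently
    in a phase sequence; these are the only direct comparisons the
    design permits."""
    pairs = set()
    for prev, nxt in zip(sequence, sequence[1:]):
        if prev != nxt:  # ignore repeated phases of the same condition
            pairs.add(frozenset((prev, nxt)))
    return {tuple(sorted(p)) for p in pairs}

# The A-B-A-B-C-B-C design permits B-vs-A and C-vs-B comparisons,
# but never places C adjacent to A:
design = ["A", "B", "A", "B", "C", "B", "C"]
print(sorted(direct_comparisons(design)))  # [('A', 'B'), ('B', 'C')]
```

Running the same check on the fuller A-B-A-B-C-B-C-A-C-A-C sequence mentioned above would additionally yield the ('A', 'C') pair, which is exactly why that longer design is the only way to compare every condition to every other.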

NCR Reversal Technique

With interventions based on positive reinforcement, it can be hypothesized that observed changes in behavior are the result of the participant's feeling better about himself because of the improved environment created by the reinforcement, not because a specific response class has been immediately followed by contingent reinforcement. This hypothesis is most often advanced when interventions consisting of social reinforcement are involved. For example, a person may claim that it doesn't matter how


the teacher's praise and attention were given; the student's behavior improved because the praise and attention created a warm and supporting environment. If, however, the behavioral improvements observed during a contingent reinforcement condition are lost during a condition when equal amounts of the same consequence are delivered independent of the occurrence of the target behavior, a functional relation between the reinforcement contingency and behavior change is demonstrated. In other words, such an experimental control technique can show that behavior change is the result of contingent reinforcement, not simply the presentation of or contact with the stimulus event (Thompson & Iwata, 2005).

A study by Baer and Wolf (1970a) on the effects of teachers' social reinforcement on the cooperative play of a preschool child provides an excellent example of the NCR reversal technique (Figure 6). The authors described the use and purpose of the design as follows:

[The teachers first collected] baselines of cooperative and other related behaviors of the child, and of their own interaction with the child. Ten days of observation indicated that the child spent about 50% of each day in proximity with other children (meaning within 3 feet of them indoors, or 6 feet outdoors). Despite this frequent proximity, however, the child spent only about 2% of her day in cooperative play with these children. The teachers, it was found, interacted with this girl about 20% of the day, not all of it pleasant. The teachers, therefore, set up a period of intense social reinforcement, offered not for cooperative play but free of any response requirement at all: the teachers took turns standing near the girl, attending closely to her activities, offering her materials, and smiling and laughing with her in a happy and admiring manner. The results of 7 days of this noncontingent extravagance of social reinforcement were straightforward: the child's cooperative play changed not at all, despite the fact that the other children of the group were greatly attracted to the scene, offering the child nearly double the chance to interact with them cooperatively. These 7 days having produced no useful change, the teachers then began their planned reinforcement of cooperative behavior. . . . Contingent social reinforcement, used in amounts less than half that given during the noncontingent period, increased the child's cooperative play from its usual 2% to a high of 40% in the course of 12 days of reinforcement. At that point, in the interests of certainty, the teachers discontinued contingent reinforcement in favor of noncontingent. In the course of 4 days, they lost virtually all of the cooperative behavior they had gained during the reinforcement period of the study, the child showing about a 5% average of cooperative play over that period of time. Naturally, the study concluded with a return to the contingent use of social reinforcement, a recovery of desirable levels of cooperative play, and a gradual reduction of the teacher's role in maintaining that behavior. (pp. 14–15)

[Figure 6 graph: percentage of session in cooperative play across days, under Baseline, Noncontingent Reinforcement 1, Contingent Reinforcement 1, Noncontingent Reinforcement 2, and Contingent Reinforcement 2 conditions]

Figure 6 Reversal design using noncontingent reinforcement (NCR) as a control technique. From "Recent Examples of Behavior Modification in Pre-School Settings" by D. M. Baer and M. M. Wolf in Behavior Modification in Clinical Psychology, pp. 14–15, edited by C. Neuringer and J. L. Michael, 1970, Upper Saddle River, NJ: Prentice Hall. Copyright 1970 by Prentice Hall. Adapted by permission.


Using NCR as a control condition to demonstrate a functional relation is advantageous when it is not possible or appropriate to eliminate completely the event or activity used as a contingent reinforcer. For example, Lattal (1969) employed NCR as a control condition to "reverse" the effects of swimming as reinforcement for tooth brushing by children in a summer camp. In the contingent reinforcement condition, the campers could go swimming only if they had brushed their teeth; in the NCR condition, swimming was available whether or not tooth brushing occurred. The campers brushed their teeth more often in the contingent reinforcement condition.

The usual procedure is to deliver NCR on a fixed or variable time schedule independent of the subject's behavior. A potential weakness of the NCR control procedure becomes apparent when a high rate of the desired behavior has been produced during the preceding contingent reinforcement phase. It is probable in such situations that at least some instances of NCR, delivered according to a predetermined time schedule, will follow occurrences of the target behavior closely in time, and thereby function as adventitious, or "accidental," reinforcement (Thompson & Iwata, 2005). In fact, an intermittent schedule of reinforcement might be created inadvertently that results in even higher levels of performance than those obtained under contingent reinforcement. In such cases the investigator might consider using one of the two control techniques described next, both of which involve "reversing" the behavioral focus of the contingency.3
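The weakness described above can be sketched computationally. In this illustrative example (not drawn from the text), reinforcers are scheduled on a fixed-time basis, and any delivery that happens to land shortly after a response is flagged as a possible accidental pairing. The session length, interval, response times, and 2-second window are hypothetical values chosen for illustration.

```python
def ncr_delivery_times(session_length, interval):
    """Fixed-time NCR schedule: deliver at every `interval` seconds,
    regardless of what the subject is doing (behavior-independent)."""
    return list(range(interval, session_length + 1, interval))

def accidental_pairings(deliveries, response_times, window=2):
    """Return scheduled deliveries that fall within `window` seconds
    after a target response and could function as accidental reinforcement."""
    return [d for d in deliveries
            if any(0 <= d - r <= window for r in response_times)]

deliveries = ncr_delivery_times(session_length=60, interval=15)  # [15, 30, 45, 60]
responses = [14, 29, 40]  # hypothetical response times (seconds into session)
print(accidental_pairings(deliveries, responses))  # [15, 30]
```

In this toy session, half the "noncontingent" deliveries closely follow a response, illustrating how a predetermined time schedule can inadvertently create an intermittent reinforcement contingency.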

DRO Reversal Technique

One way to ensure that reinforcement will not immediately follow the target behavior is to deliver reinforcement immediately following the subject's performance of any behavior other than the target behavior. With a DRO reversal technique, the control condition consists of delivering the event suspected of functioning as reinforcement following the emission of any behavior other than the target behavior (e.g., Baer, Peterson, & Sherman, 1967; Osbourne, 1969; Poulson, 1983). For example, Reynolds and Risley (1968) used contingent teacher attention to increase the frequency of talking in a 4-year-old girl enrolled in a preschool program for disadvantaged children. After a period of teacher attention contingent on verbalization, in which the girl's talking increased from a baseline average of 11% of the intervals observed to 75%, a DRO condition was implemented during which the teachers attended to the girl for any behavior except talking. During the 6 days of DRO, the girl's verbalization dropped to 6%. Teacher attention was then delivered contingent on talking, and the girl's verbalization "immediately increased to an average of 51%" (p. 259).

3Strictly speaking, using NCR as an experimental control technique to demonstrate that the contingent application of reinforcement is requisite to its effectiveness is not a separate variation of the A-B-A reversal design. Technically, the NCR reversal technique, as well as the DRO and DRI/DRA reversal techniques described next, is a multiple treatment design. For example, the Baer and Wolf (1970a) study of social reinforcement shown in Figure 6 used an A-B-C-B-C design, with B representing the NCR conditions and C representing the contingent reinforcement conditions.
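The contingent-versus-DRO contrast can be reduced to a simple decision rule. The sketch below is ours, with hypothetical behavior labels; it shows only the logic of the two conditions, not any procedure from the Reynolds and Risley study.

```python
def deliver_attention(observed_behavior, condition, target="talking"):
    """Decide whether teacher attention is delivered under each condition.
    'contingent': reinforce only the target behavior.
    'dro': reinforce any behavior EXCEPT the target (the control condition).
    """
    if condition == "contingent":
        return observed_behavior == target
    if condition == "dro":
        return observed_behavior != target
    raise ValueError(f"unknown condition: {condition}")

print(deliver_attention("talking", "contingent"))  # True
print(deliver_attention("talking", "dro"))         # False
print(deliver_attention("drawing", "dro"))         # True
```

The DRO rule guarantees that the putative reinforcer never immediately follows the target behavior, which is exactly what distinguishes it from the NCR control procedure described above.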

DRI/DRA Reversal Technique

During the control condition in a DRI/DRA reversal technique, occurrences of a specified behavior that is either incompatible with the target behavior (i.e., the two behaviors cannot possibly be emitted at the same time) or an alternative to the target behavior are immediately followed by the same consequence previously delivered as contingent reinforcement for the target behavior. Goetz and Baer's (1973) investigation of the effects of teacher praise on preschool children's creative play with building blocks illustrates the use of a DRI control condition. Figure 7 shows the number of different block forms (e.g., arch, tower, roof, ramp) constructed by the three children who participated in the study. During baseline (data points indicated by the letter N), "the teacher sat by the child as she built with the blocks, watching closely but quietly, displaying neither criticism nor enthusiasm about any particular use of the blocks" (p. 212). During the next phase (the D data points), "the teacher remarked with interest, enthusiasm, and delight every time that the child placed and/or rearranged the blocks so as to create a form that had not appeared previously in that session's construction(s). . . . 'Oh, that's very nice—that's different!'" (p. 212). Then, after increasing form diversity was clearly established, instead of merely withdrawing verbal praise and returning to the initial baseline condition, the teacher provided descriptive praise only when the children had constructed the same forms (the S data points). "Thus, for the next two to four sessions, the teacher continued to display interest, enthusiasm, and delight, but only at those times when the child placed and/or rearranged a block so as to create a repetition of a form already apparent in that session's construction(s). . . . Thus, no first usage of a form in a session was reinforced, but every second usage of that form and every usage thereafter within the session was. . . . 'How nice—another arch!'" (p. 212). The final phase of the experiment entailed a return to descriptive praise for different forms. Results show that the form diversity of children's block building was a function of teacher praise and comments. The DRI reversal tactic allowed Goetz and Baer to determine that it was not just the delivery of teacher praise and comment that


[Figure 7 graphs: form diversity (number of different forms per construction) across constructions for Sally, Kathy, and Mary; N = no reinforcement, D = reinforce only different forms, S = reinforce only same forms]

Figure 7 Reversal design using a DRI control technique. From "Social Control of Form Diversity and the Emergence of New Forms in Children's Blockbuilding" by E. M. Goetz and D. M. Baer, 1973, Journal of Applied Behavior Analysis, 6, p. 213. Copyright 1973 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

resulted in more creative block building by the children; the praise and attention had to be contingent on different forms to produce increasing form diversity.4

Considering the Appropriateness of the Reversal Design

The primary advantage of the reversal design is its ability to provide a clear demonstration of the existence (or absence) of a functional relation between the independent and dependent variables. An investigator who reliably turns the target behavior on and off by presenting and withdrawing a specific variable makes a clear and convincing demonstration of experimental control. In addition, the reversal design enables quantification of the amount of behavior change over the preintervention level of responding. And the return to baseline provides information on the need to program for maintenance. Furthermore, a complete A-B-A-B design ends with the treatment condition in place.5

4The extent to which the increased diversity of the children's block building can be attributed to the attention and praise ("That's nice") or the descriptive feedback (". . . that's different") in the teacher's comments cannot be determined from this study because social attention and descriptive feedback were delivered as a package.

5Additional manipulations in the form of the partial or sequential withdrawal of intervention components are made when it is necessary or desirable for the behavior to continue at its improved level in the absence of the complete intervention (cf. Rusch & Kazdin, 1981).

In spite of its strengths as a tool for analysis, the reversal design entails some potential scientific and social disadvantages that should be considered prior to its use. The considerations are of two types: irreversibility, which affects the scientific utility of the design; and the social, educational, and ethical concerns related to withdrawing a seemingly effective intervention.


Irreversibility: A Scientific Consideration

A reversal design is not appropriate in evaluating the effects of a treatment variable that, by its very nature, cannot be withdrawn once it has been presented. Although independent variables involving reinforcement and punishment contingencies can be manipulated with some certainty (the experimenter either presents or withholds the contingency), an independent variable such as providing information or modeling, once presented, cannot simply be removed. For example, a reversal design would not be an effective element of an experiment investigating the effects of attending an in-service training workshop for teachers during which participants observed a master teacher use contingent praise and attention with students. After the participants have listened to the rationale for using contingent praise and attention and observed the master teacher model it, the exposure provided by that experience could not be withdrawn. Such interventions are said to be irreversible.

Irreversibility of the dependent variable must also be considered in determining whether a reversal would be an effective analytic tactic. Behavioral irreversibility means that a level of behavior observed in an earlier phase cannot be reproduced even though the experimental conditions are the same as they were during the earlier phase (Sidman, 1960). Once improved, many target behaviors of interest to the applied behavior analyst remain at their newly enhanced level even when the intervention responsible for the behavior change is removed. From a clinical or educational standpoint, such a state of affairs is desirable: The behavior change is shown to be durable, persisting even in the absence of continued treatment. However, irreversibility is a problem if demonstration of the independent variable's role in the behavior change depends on verification by recapturing baseline levels of responding.

For example, baseline observations might reveal very low, almost nonexistent, rates of talking and social interaction for a young child. An intervention consisting of teacher-delivered social reinforcement for talking and interacting could be implemented, and after some time the girl might talk to and interact with her peers at a frequency and in a manner similar to that of her classmates. The independent variable, teacher-delivered reinforcement, could be terminated in an effort to recapture baseline rates of talking and interacting. But the girl might continue to talk to and interact with her classmates even though the intervention, which may have been responsible for the initial change in her behavior, is withdrawn. In this case a source of reinforcement uncontrolled by the experimenter (the girl's classmates talking to and playing with her as a consequence of her increased talking and interacting with them) could maintain high rates of behavior after the teacher-delivered reinforcement is no longer provided. In such instances of irreversibility, an A-B-A-B design would fail to reveal a functional relation between the independent variable and the target behavior.

Nonetheless, one of the major objectives of applied behavior analysis is establishing socially important behavior through experimental treatments so that the behavior will contact natural "communities of reinforcement" to maintain behavioral improvements in the absence of treatment (Baer & Wolf, 1970b). When irreversibility is suspected or apparent, in addition to considering DRO or DRI/DRA conditions as control techniques, investigators can consider other experimental tactics.

Withdrawing an Effective Intervention: A Social, Educational, and Ethical Consideration

Although it can yield an unambiguous demonstration of experimental control, withdrawing a seemingly effective intervention to evaluate its role in behavior change presents a legitimate cause for concern. One must question the appropriateness of any procedure that allows (indeed, seeks) an improved behavior to deteriorate to baseline levels of responding. Various concerns have been voiced over this fundamental feature of the reversal design. Although there is considerable overlap among the concerns, they can be classified as having primarily a social, educational, or ethical basis.

Social Concerns. Applied behavior analysis is, by definition, a social enterprise. Behaviors are selected, defined, observed, measured, and modified by and for people. Sometimes the people involved in an applied behavior analysis (administrators, teachers, parents, and participants) object to the withdrawal of an intervention they associate with desirable behavior change. Even though a reversal may provide the most unqualified picture of the behavior–environment relation under study, it may not be the analytic tactic of choice because key participants do not want the intervention to be withdrawn. When a reversal design offers the best experimental approach scientifically and poses no ethical problems, the behavior analyst may choose to explain the operation and purpose of the tactic to those who do not favor it. But it is unwise to attempt a reversal without the full support of the people involved, especially those who will be responsible for withdrawing the intervention (Tawney & Gast, 1984). Without their cooperation the procedural integrity of the experiment could easily be compromised. For example, people who are against the withdrawal of treatment might sabotage the return to baseline conditions by implementing the intervention, or at least those parts of it that they consider the most important.

Ethical Concerns. A serious ethical concern must be addressed when the use of a reversal design is considered for evaluating a treatment for self-injurious or dangerous behaviors. With mild self-injurious or aggressive behaviors, short reversal phases consisting of one or two baseline probes can sometimes provide the empirical evidence needed to reveal a functional relation (e.g., Kelley, Jarvie, Middlebrook, McNeer, & Drabman, 1984; Luce, Delquadri, & Hall, 1980; Murphy et al., 1979 [Figure 4]). For example, in their study evaluating a treatment for elopement (i.e., running away from supervision) by a child with attention-deficit/hyperactivity disorder, Kodak, Grow, and Northrup (2004) returned to baseline conditions for a single session (see Figure 8).

Nonetheless, with some behaviors it may be determined that withdrawing an intervention associated with improvement for even a few one-session probes would be inappropriate for ethical reasons. In such cases experimental designs that do not rely on the reversal tactic must be used.

[Figure 8 graph: percentage of time eloping across sessions, under alternating Baseline and Treatment conditions, with a single-session return to baseline]

Figure 8 Reversal design with a single-session return-to-baseline probe to evaluate and verify effects of treatment for a potentially dangerous behavior. From "Functional Analysis and Treatment of Elopement for a Child with Attention Deficit Hyperactivity Disorder" by T. Kodak, L. Grow, and J. Northrup, 2004, Journal of Applied Behavior Analysis, 37, p. 231. Copyright 2004 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Educational and Clinical Issues. Educational or clinical issues concerning the reversal design are often raised in terms of instructional time lost during the reversal phases, as well as the possibility that the behavioral improvements observed during intervention may not be recaptured when treatment is resumed after a return to baseline conditions. We agree with Stolz (1978) that "extended reversals are indefensible." If preintervention levels of responding are reached quickly, reversal phases can be quite short in duration. Sometimes only three or four sessions are needed to show that initial baseline rates have been reproduced (e.g., Ashbaugh & Peck, 1998; Cowdery, Iwata, & Pace, 1990). Two or three brief reversals can provide an extremely convincing demonstration of experimental control. Concern that the improved levels of behavior will not return when the treatment variable is reintroduced, while understandable, has not been supported by empirical evidence. Hundreds of published studies have shown that behavior acquired under a given set of environmental conditions can be reacquired rapidly during subsequent reapplication of those conditions.

Alternating Treatments Design

An important and frequently asked question by teachers, therapists, and others who are responsible for changing behavior is, Which of these treatments will be most effective with this student or client? In many situations, the research literature, the analyst's experience, and/or logical extensions of the principles of behavior point to several possible interventions. Determining which of several possible treatments or combination of treatments will produce the greatest improvement in behavior is a primary task for applied behavior analysts. As described earlier, although a multiple treatment reversal design (e.g., A-B-C-B-C) can be used to compare the effects of two or more treatments, such designs have some inherent limitations. Because the different treatments in a multiple treatment reversal design are implemented during separate phases that occur in a particular order, the design is particularly vulnerable to confounding because of sequence effects (e.g., Treatment C may have produced its effect only because it followed Treatment B, not because it was more robust in its own right). A second disadvantage of comparing multiple treatments with the reversal tactic is the extended time required to demonstrate differential effects. Most behaviors targeted for change by teachers and therapists are selected because they need immediate improvement. An experimental design that will quickly reveal the most effective treatment among several possible approaches is important for the applied behavior analyst.

The alternating treatments design provides an experimentally sound and efficient method for comparing the effects of two or more treatments. The term alternating treatments design, proposed by Barlow and Hayes (1979), accurately communicates the operation of the design. Other terms used in the applied behavior analysis literature to refer to this analytic tactic include multielement design (Ulman & Sulzer-Azaroff, 1975), multiple schedule design (Hersen & Barlow, 1976), concurrent schedule design (Hersen & Barlow, 1976), and simultaneous treatment design (Kazdin & Hartmann, 1978).6

Operation and Logic of the Alternating Treatments Design

The alternating treatments design is characterized by the rapid alternation of two or more distinct treatments (i.e., independent variables) while their effects on the target behavior (i.e., dependent variable) are measured. In contrast to the reversal design, in which experimental manipulations are made after steady state responding is achieved in a given phase of an experiment, the different interventions in an alternating treatments design are manipulated independent of the level of responding. The design is predicated on the behavioral principle of stimulus discrimination. To aid the subject's discrimination of which treatment condition is in effect during a given session, a distinct stimulus (e.g., a sign, verbal instructions, different colored worksheets) is often associated with each treatment.

[Figure 9 graph: response measure across sessions, with separate data paths for Treatment A and Treatment B]

Figure 9 Graphic prototype of an alternating treatments design comparing the differential effects of two treatments (A and B).

6A design in which two or more treatments are concurrently or simultaneously presented, and in which the subject chooses between treatments, is correctly termed a concurrent schedule or simultaneous treatment design. Some published studies described by their authors as using a simultaneous treatment design have, in fact, employed an alternating treatments design. Barlow and Hayes (1979) could find only one true example of a simultaneous treatment design in the applied literature: a study by Browning (1967) in which three techniques for reducing the bragging of a 10-year-old boy were compared.

The data are plotted separately for each intervention to provide a ready visual representation of the effects of each treatment. Because confounding factors such as time of administration have been neutralized (presumably) by counterbalancing, and because the two treatments are readily discriminable by subjects through instructions or other discriminative stimuli, differences in the individual plots of behavior change corresponding with each treatment should be attributable to the treatment itself, allowing a direct comparison between two (or more) treatments. (Barlow & Hayes, 1979, p. 200)

Figure 9 shows a graphic prototype of an alternating treatments design comparing the effects of two treatments, A and B, on some response measure. In an alternating treatments design, the different treatments can be alternated in a variety of ways. For example, the treatments might be (a) alternated across daily sessions, one treatment in effect each day; (b) administered in separate sessions occurring within the same day; or (c) implemented each during a portion of the same session. Counterbalancing the days of the week, times of day, sequence in which the different treatments occur (e.g., first or second each day), persons delivering the different treatments, and so forth, reduces the probability that any observed differences in behavior are the result of variables other than the treatments themselves. For example, assume that Treatments A and B in Figure 9 were each administered for a single 30-minute session each day, with the daily sequence of the two treatments determined by a coin flip.
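The coin-flip counterbalancing just described can be sketched in a few lines. This is an illustrative example of ours, not a procedure from the text; the treatment labels, number of days, and random seed are hypothetical.

```python
import random

def daily_orders(n_days, treatments=("A", "B"), seed=None):
    """For each day, randomize which of two treatments runs first
    (a coin flip per day), so neither treatment is systematically
    confounded with time of day."""
    rng = random.Random(seed)  # seeded for a reproducible schedule
    orders = []
    for _ in range(n_days):
        first, second = rng.sample(treatments, 2)
        orders.append((first, second))
    return orders

for day, (first, second) in enumerate(daily_orders(5, seed=1), start=1):
    print(f"Day {day}: Treatment {first} first, then Treatment {second}")
```

In practice a researcher might add a constraint (e.g., no treatment first more than two days in a row), but the unconstrained coin flip matches the example in the text.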

The data points in Figure 9 are plotted on the horizontal axis to reflect the actual sequence of treatments each day. Thus, the horizontal axis is labeled Sessions, and each consecutive pair of sessions occurred on a single day. Some published reports of experiments using an alternating treatments design in which two or more treatments were presented each day (or session) plot the measures obtained during each treatment above the same point on the horizontal axis, thus implying that the treatments were administered simultaneously. This practice masks the temporal order of events and has the unfortunate consequence of making it difficult for the researcher or reader to discover potential sequence effects.

The three components of steady state strategy (prediction, verification, and replication) are found in the alternating treatments design. However, each component is not readily identified with a separate phase of the design. In an alternating treatments design, each successive data point for a specific treatment plays all three roles: It provides (a) a basis for the prediction of future levels of responding under that treatment, (b) potential verification of the previous prediction of performance under that treatment, and (c) the opportunity for replication of previous effects produced by that treatment.

To see this logic unfold, the reader should place a piece of paper over all the data points in Figure 9 except those for the first five sessions of each treatment. The visible portions of the data paths provide the basis for predicting future performance under each respective treatment. Moving the paper to the right reveals the two data points for the next day, each of which provides a degree of verification of the previous predictions. As more data are recorded, the predictions of given levels of responding within each treatment are further strengthened by continued verification (if those additional data conform to the same level and/or trend as their predecessors). Replication occurs each time Treatment A is reinstated and measurement reveals responding similar to previous Treatment A measures and different from those obtained when Treatment B is in effect. Likewise, another mini-replication is achieved each time a reintroduction of Treatment B results in measures similar to previous Treatment B measures and different from Treatment A levels of responding. A consistent sequence of verification and replication is evidence of experimental control and strengthens the investigator's confidence that a functional relation exists between the two treatments and the different levels of responding.

The presence and degree of experimental control in an alternating treatments design is determined by visual inspection of the differences between (or among) the data paths representing the different treatments. Experimental control is defined in this instance as objective, believable evidence that different levels of responding are predictably and reliably produced by the presence of the different treatments. When the data paths for two treatments show no overlap with each other and either stable levels or opposing trends, a clear demonstration of experimental control has been made. Such is the case in Figure 9, in which there is no overlap of data paths and the picture of differential effects is clear. When some overlap of data paths occurs, a degree of experimental control over the target behavior can still be demonstrated if the majority of data points for a given treatment fall outside the range of values of the majority of data points for the contrasting treatment.
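The overlap criterion described above can be illustrated with a rough numerical stand-in for visual inspection (this is not a published analysis procedure, and the data are invented): compute what fraction of one treatment's data points fall within the range of the other's.

```python
def overlap_fraction(path_a, path_b):
    """Fraction of path_a's data points that fall within path_b's range.
    0.0 means no overlap, consistent with a clear differential effect."""
    lo, hi = min(path_b), max(path_b)
    inside = sum(1 for y in path_a if lo <= y <= hi)
    return inside / len(path_a)

# Hypothetical response measures under two alternated treatments
treatment_a = [12, 14, 13, 15, 14]
treatment_b = [4, 5, 3, 6, 5]
print(overlap_fraction(treatment_a, treatment_b))  # → 0.0 (no overlap)
```

Visual inspection remains the primary analytic method in behavior analysis; a summary number like this only approximates what the trained eye evaluates (level, trend, and variability together).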

The extent of any differential effects produced by two treatments is determined by the vertical distance, or fractionation, between their respective data paths and quantified by the vertical axis scale. The greater the vertical distance, the greater the differential effect of the two treatments on the response measure. It is possible for experimental control to be shown between two treatments but for the amount of behavior change to be socially insignificant. For instance, experimental control may be demonstrated for a treatment that reduces a person's severe self-injurious behavior from 10 occurrences per hour to 2 per hour, but the participant is still engaged in self-mutilation. However, if the vertical axis is scaled meaningfully, the greater the separation of data paths on the vertical axis, the higher the likelihood that the difference represents a socially significant effect.

Data from an experiment that compared the effects of two types of group-contingent rewards on the spelling accuracy of fourth-grade underachievers (Morgan, 1978) illustrate how the alternating treatments design reveals experimental control and the quantification of differential effects. The six children in the study were divided into two equally skilled teams of three on the basis of pretest scores. Each day during the study the students took a five-word spelling test. The students received a list of the words the day before, and a 5-minute study period was provided just prior to the test. Three different conditions were used in the alternating treatments design: (a) no game, in which the spelling tests were graded immediately and returned to the students, and the next scheduled activity in the school day was begun; (b) game, in which test papers were graded immediately, and each member of the team that had attained the highest total score received a mimeographed Certificate of Achievement and was allowed to stand up and cheer; and (c) game plus, consisting of the same procedure as the game condition, plus each student on the winning team also received a small trinket (e.g., a sticker or pencil).

The results for Student 3 (see Figure 10) show that experimental control over spelling accuracy was obtained

Figure 10 Alternating treatments design comparing the effects of three different treatments on the spelling accuracy of a fourth-grade student (Student 3). Vertical axis: Number of Words Spelled Correctly (0–5); horizontal axis: Days (5–20). Condition means: no game = 2.16, game = 4.16, game plus = 4.14.
From Comparison of Two "Good Behavior Game" Group Contingencies on the Spelling Accuracy of Fourth-Grade Students by Q. E. Morgan, 1978, unpublished master's thesis, The Ohio State University. Reprinted by permission.

between the no-game condition and both the game and the game-plus conditions. Only the first two no-game data points overlap the lower range of scores obtained during the game or the game-plus conditions. However, the data paths for the game and game-plus conditions overlap completely and continuously throughout the study, revealing no difference in spelling accuracy between the two treatments. The vertical distance between the data paths represents the amount of improvement in spelling accuracy between the no-game condition and the game and the game-plus conditions. The mean difference between the two game conditions and the no-game condition was two words per test. Whether such a difference represents a significant improvement is an educational question, not a mathematical or statistical one, but most educators and parents would agree that an increase of two words spelled correctly out of five is socially significant, especially if that gain can be sustained from week to week. The cumulative effect over a 180-day school year would be impressive. There was virtually no difference in Student 3's spelling performance between the game and game-plus conditions. However, even a larger mean difference would not have contributed to the conclusions of the study because of the lack of experimental control between the game and the game-plus treatments.
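The mean-difference arithmetic reported above can be sketched as follows. The per-condition score lists are invented for illustration (they are not Morgan's raw data); only their means approximate the values in the figure legend for Student 3:

```python
from statistics import mean

# Invented daily scores (0-5 words correct) per condition for one student
scores = {
    "no game":   [2, 2, 3, 2, 2, 2],
    "game":      [4, 4, 5, 4, 4, 4],
    "game plus": [4, 4, 4, 5, 4, 4],
}

means = {cond: mean(vals) for cond, vals in scores.items()}
game_avg = mean([means["game"], means["game plus"]])
diff = game_avg - means["no game"]
print(f"mean difference vs. no game: {diff:.2f} words per test")
# → mean difference vs. no game: 2.00 words per test
```

The quantity `diff` corresponds to the vertical distance between the game-condition data paths and the no-game data path, expressed in the units of the vertical axis.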

Student 6 earned consistently higher spelling scores in the game-plus condition than he did in the game or no-game conditions (see Figure 11). Experimental control was demonstrated between the game-plus and the other two treatments for Student 6, but not between the no-game and game conditions. Again, the difference in responding between treatments is quantified by the vertical distance between the data paths. In this case there was a mean difference of 1.55 correctly spelled words per test between the game-plus and no-game conditions.

Figures 10 and 11 illustrate two other important points about the alternating treatments design. First, the two graphs show how an alternating treatments design enables a quick comparison of interventions. Although the study would have been strengthened by the collection of additional data, after 20 sessions the teacher had sufficient empirical evidence for selecting the most effective consequences for each student. If only two conditions had been compared, even fewer sessions might have been required to identify the most effective intervention. Second, these data underscore the importance of evaluating treatment effects at the level of the individual subject. All six

Figure 11 Alternating treatments design comparing the effects of three different treatments on the spelling accuracy of a fourth-grade student (Student 6). Vertical axis: Number of Words Spelled Correctly (0–5); horizontal axis: Days (5–20). Condition means: no game = 3.30, game = 3.40, game plus = 4.85.
From Comparison of Two "Good Behavior Game" Group Contingencies on the Spelling Accuracy of Fourth-Grade Students by Q. E. Morgan, 1978, unpublished master's thesis, The Ohio State University. Reprinted by permission.


children spelled more words correctly under one or both of the game conditions than they did under the no-game condition. However, Student 3's spelling accuracy was equally enhanced by either the game or the game-plus contingency, whereas Student 6's spelling scores improved only when a tangible reward was available.

Variations of the Alternating Treatments Design

The alternating treatments design can be used to compare one or more treatments to a no-treatment or baseline condition, assess the relative contributions of individual components of a package intervention, and perform parametric investigations in which different values of an independent variable are alternated to determine differential effects on behavior change. Among the most common variations of the alternating treatments design are the following:

• Single-phase alternating treatments design without a no-treatment control condition

• Single-phase design in which two or more conditions, one of which is a no-treatment control condition, are alternated

• Two-phase design consisting of an initial baseline phase followed by a phase in which two or more conditions (one of which may be a no-treatment control condition) are alternated

• Three-phase design consisting of an initial baseline, a second phase in which two or more conditions (one of which may be a no-treatment control condition) are alternated, and a final phase in which only the treatment that proved most effective is implemented

Alternating Treatments Design without a No-Treatment Control Condition

One application of the alternating treatments design consists of a single-phase experiment in which the effects of two or more treatment conditions are compared (e.g., Barbetta, Heron, & Heward, 1993; McNeish, Heron, & Okyere, 1992; Morton, Heward, & Alber, 1998). A study by Belfiore, Skinner, and Ferkis (1995) provides an excellent example of this design. They compared the effects of two instructional procedures, trial-repetition and response-repetition, on the acquisition of sight words by three elementary students with learning disabilities in reading. An initial training list of five words for each condition was created by random selection from a pool of unknown words (determined by pretesting each student). Each session began with a noninstructional assessment of

unknown and training words, followed by both conditions. The order of instructional conditions was counterbalanced across sessions. Words spoken correctly on three consecutive noninstructional assessments were considered mastered and replaced as training words with unknown words.
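The mastery rule just described (a word counts as mastered after three consecutive correct noninstructional assessments and is then replaced from the pool of unknown words) can be sketched as follows. The function and data are illustrative assumptions, not the authors' materials:

```python
def update_training_list(training, streaks, pool, correct_today):
    """Apply a 3-consecutive-correct mastery criterion after one assessment.
    training: current training words; streaks: consecutive-correct counts;
    pool: remaining unknown words; correct_today: words read correctly today.
    Returns the newly mastered words."""
    mastered = []
    for word in list(training):
        streaks[word] = streaks.get(word, 0) + 1 if word in correct_today else 0
        if streaks[word] == 3:
            mastered.append(word)
            training.remove(word)
            if pool:  # replace the mastered word with a new unknown word
                training.append(pool.pop(0))
    return mastered

training = ["cat", "dog"]
streaks, pool = {}, ["fish", "bird"]
for day in range(3):  # "cat" read correctly three days running, "dog" never
    newly = update_training_list(training, streaks, pool, {"cat"})
print(newly, training)  # → ['cat'] ['dog', 'fish']
```

Note that an incorrect reading resets a word's streak to zero, so only genuinely consecutive correct assessments count toward mastery.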

The trial-repetition condition consisted of one response opportunity within each of five interspersed practice trials per word. The experimenter placed a word card on the table and said, "Look at the word, and say the word." If the student made a correct response within 3 seconds, the experimenter said, "Yes, the word is _____." (p. 347). If the student's initial response was incorrect, or the student made no response within 3 seconds, the experimenter said, "No, the word is _____," and the student repeated the word. The experimenter then presented the next word card, and the procedure was repeated until five practice trials (antecedent-response-feedback) were provided with each word.

The response-repetition condition also consisted of five response opportunities per word, but all five responses occurred within a single practice trial for each word. The experimenter placed a word card on the table and said, "Look at the word, and say the word." If the student made a correct response within 3 seconds, the experimenter said, "Yes, the word is _____, please repeat the word four more times" (p. 347). If the student made an incorrect response or no response within 3 seconds, the experimenter said, "No, the word is _____." The student then repeated the word and was instructed to repeat it four more times.

Figure 12 shows the cumulative number of words mastered by each student under both conditions. Even though the number of correct responses per word during instruction was identical in both conditions, all three students had higher rates of learning new words in the trial-repetition condition than in the response-repetition condition. These results obtained within the simple alternating treatments design enabled Belfiore and colleagues (1995) to conclude that, "Response repetition outside the context of the learning trial (i.e., of the three-term contingency) was not as effective as repetition that included antecedent and consequent stimuli in relation to the accurate response" (p. 348).

Alternating Treatments Design with No-Treatment Control Condition

Although not a requirement of the design, a no-treatment condition is often incorporated into the alternating treatments design as one of the treatments to be compared. For example, the no-game condition in the Morgan (1978) study served as a no-treatment control condition against which the students' spelling scores in the game


Figure 12 Single-phase alternating treatments design without a no-treatment control condition. Cumulative number of words mastered under the trial-repetition and response-repetition conditions across sessions for three students (Lou, Carl, and Hal), each with an intervention phase followed by maintenance (Maint).
From "Effects of Response and Trial Repetition on Sight-Word Training for Students with Learning Disabilities" by P. J. Belfiore, C. H. Skinner, and M. A. Ferkis, 1995, Journal of Applied Behavior Analysis, 28, p. 348. Copyright 1995 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

and game-plus conditions were compared (see Figures 10 and 11).

Including a no-treatment control condition as one of the experimental conditions in an alternating treatments design provides valuable information on any differences in responding under the intervention treatment(s) and no treatment. However, the measures obtained during the no-treatment control condition should not be considered representative of an unknown preintervention level of responding. It may be that the measures obtained in the no-treatment condition represent only the level of behavior under a no-treatment condition when it is interspersed within an ongoing series of treatment condition(s), and do not represent the level of behavior that existed before the alternating treatments design was begun.

Alternating Treatments Design with Initial Baseline

Investigators using the alternating treatments tactic often use a two-phase experimental design in which baseline measures are collected until a stable level of responding or countertherapeutic trend is obtained prior to the alternating treatments phase (e.g., Martens, Lochner, & Kelly, 1992 [see Figure 6 of the chapter entitled "Schedules of Reinforcement"]). Sometimes the baseline condition is continued during the alternating treatments phase as a no-treatment control condition.

A study by J. Singh and N. Singh (1985) provides an excellent example of an alternating treatments design incorporating an initial baseline phase. The experiment evaluated the relative effectiveness of two procedures for reducing the number of oral reading errors by students with mental retardation. The first phase of the study consisted of a 10-day baseline condition in which each student was given a new 100-word passage three times each day and told, "Here is the story for this session. I want you to read it. Try your best not to make any errors"

(p. 66). The experimenter sat nearby but did not assist the student, correct any errors, or attend to self-corrections. If a student requested help with new or difficult words, he was prompted to continue reading.

During the alternating treatments phase of the study, three different conditions were presented each day in separate sessions of about 5 minutes each: control (the same procedures as during baseline), word supply, and word analysis. To minimize any sequence or carryover effects from one condition to another, the three conditions were presented in random order each day, each condition was preceded with specific instructions identifying the procedure to be implemented, and an interval of at least 5 minutes separated consecutive sessions. During the word-supply condition, each student was instructed, "Here is the story for this session. I want you to read it. I will help you if you make a mistake. I will tell you the correct word while you listen and point to the word in the book. After that, I want you to repeat the word. Try your best not to make any errors" (p. 67). The experimenter supplied the correct word when an oral reading error was made, had the child repeat the correct word once, and instructed the child to continue reading. During the word-analysis condition, each student was instructed, "Here is the story for this session. I want you to read it. I will help you if you make a mistake. I will help you sound out the word and then you can read the word correctly before you carry on reading the rest of the story. Try your best not to make any errors" (p. 67). When errors were made in this condition, the experimenter directed the child's attention to the phonetic elements of the word and coaxed the child to sound out each part of the word correctly. Then the experimenter had the student read the entire word at the normal speed and instructed him or her to continue reading the passage.

The results for the four students who participated in the study are shown in Figure 13. Each baseline data


Figure 13 Alternating treatments design with an initial baseline. Number of oral reading errors across days (5–30) for four students (Michael, Jane, Leanne, and Rex) during a baseline phase and an alternating treatments phase comparing control, word-supply, and word-analysis conditions.
From "Comparison of Word-Supply and Word-Analysis Error-Correction Procedures on Oral Reading by Mentally Retarded Children" by J. Singh and N. Singh, 1985, American Journal of Mental Deficiency, 90, p. 67. Copyright 1985 by the American Journal of Mental Deficiency. Reprinted by permission.

point is the mean number of errors for the three daily sessions. Although the data in each condition are highly variable (perhaps because of the varied difficulty of the different passages used), experimental control is evident. All four students committed fewer errors during the word-supply and the word-analysis conditions than they did during the control condition. Experimental control of oral reading errors, although not complete because of some overlap of the data paths, is also demonstrated between the word-supply and word-analysis conditions, with all four students making fewer errors during the word-analysis condition.

By beginning the study with a baseline phase, J. Singh and N. Singh (1985) were able to compare the level of responding obtained during each of the treatments to the natural level of performance uncontaminated by the introduction of either error-correction intervention. In addition, the initial baseline served as the basis for predicting and assessing the measures obtained during the control sessions of the alternating treatments phase of the study. The measures obtained in the alternating control condition matched the relatively high frequency of errors observed during the initial baseline phase, providing evidence that (a) the vertical distance between the data paths for the word-supply and word-analysis conditions and the data path for the control condition represents the true amount of improvement produced by each treatment and (b) the frequency of errors during the control condition was not influenced by reduced errors during the other two treatments (i.e., no generalized reduction in oral reading errors from the treated passages to untreated passages occurred).

7 The mand is one of six types of elementary verbal operants identified by Skinner (1957). The chapter entitled "Verbal Behavior" describes Skinner's analysis of verbal behavior and its importance to applied behavior analysis.
8 Stimulus preference assessment procedures are described in the chapter entitled "Positive Reinforcement."

Alternating Treatments Design with Initial Baseline and Final Best Treatment Phase

A widely used variation of the alternating treatments design consists of three sequential phases: an initial baseline phase, a second phase comparing alternating treatments, and a final phase in which only the most effective treatment is administered (e.g., Heckaman, Alber, Hooper, & Heward, 1998; Kennedy & Souza, 1995, Study 4; Ollendick, Matson, Esvelt-Dawson, & Shapiro, 1980; N. Singh, 1990; N. Singh & J. Singh, 1984; N. Singh & Winton, 1985). Tincani (2004) used an alternating treatments design with an initial baseline and final best treatment phase to investigate the relative effectiveness of sign language and picture exchange training on the acquisition of mands (requests for preferred items) by two children with autism.7 A related research question was whether a relation existed between students' preexisting motor imitation skills and their abilities to learn mands through sign language or by picture exchange. Two assessments were conducted for each student prior to baseline. A stimulus preference assessment (Pace, Ivancic, Edwards, Iwata, & Page, 1985) was conducted to identify a list of 10 to 12 preferred items (e.g., drinks, edibles, toys), and each student's ability to imitate 27 hand, arm, and finger movements similar to those required for sign language was assessed.8

The purpose of baseline was to ensure that the participants were not able to request preferred items with picture exchange, sign language, or speech prior to training. Baseline trials consisted of giving the student 10 to 20 seconds of noncontingent access to a preferred item, removing the item briefly, and then placing it out of the


student's reach. A laminated 2-inch-by-2-inch picture of the item was placed in front of the student. If the student placed the picture symbol in the experimenter's hand, signed the name of the item, or spoke the name of the item within 10 seconds, the experimenter provided access to the item. If not, the item was removed and the next item on the list was presented. Following a three-session baseline, during which neither participant emitted an independent mand in any modality, the alternating treatments phase was begun.

The sign language training procedures were adapted from Sundberg and Partington's (1998) Teaching Language to Children with Autism or Other Developmental Disabilities. The simplest sign from American Sign Language for each item was taught. Procedures used in the PECS training condition were adapted from Bondy and Frost's (2002) The Picture Exchange Communication System Training Manual. In both conditions, training on each preferred item continued for five to seven trials per session, or until the participant showed no interest in the item. At that time training began on the next item and continued until all 10 or 12 items on the participant's list of preferred items had been presented. During the study's final phase, each participant received either sign language or PECS training only, depending on which method had been most successful during the alternating treatments phase.

The percentage of independent mands by the two students throughout the study is shown in Figures 14 (Jennifer) and 15 (Carl). Picture exchange training was clearly more effective than sign language for Jennifer. Jennifer demonstrated weak motor imitation skills in the prebaseline assessment, correctly imitating 20% of the motor movements attempted in the prebaseline imitation

Figure 14 Alternating treatments design with an initial baseline and a final best-treatment-only condition. Percentage of independent mands by Jennifer across sessions (5–30) during baseline, an alternating treatments phase (PECS I and Sign Language), and subsequent PECS-only phases (PECS II, PECS IIIa, PECS IIIb).
From "Comparing the Picture Exchange Communication System and Sign Language Training for Children with Autism" by M. Tincani, 2004, Focus on Autism and Other Developmental Disabilities, 19, p. 160. Copyright 2004 by Pro-Ed. Used by permission.

Figure 15 Alternating treatments design with an initial baseline and a final best-treatment-only condition. Percentage of independent mands by Carl across sessions (5–30) during baseline, an alternating treatments phase (PECS I and Sign Language), a modified sign training phase, and a final Sign Language "Best Treatment" phase.
From "Comparing the Picture Exchange Communication System and Sign Language Training for Children with Autism" by M. Tincani, 2004, Focus on Autism and Other Developmental Disabilities, 19, p. 159. Copyright 2004 by Pro-Ed. Used by permission.


assessment. After a slight modification in sign language training procedures was implemented to eliminate Carl's prompt dependency, he emitted independent mands more often during sign language training than with picture exchange training. Carl's preexisting motor imitation skills were better than Jennifer's. He correctly imitated 43% of the attempted motor movements in the prebaseline imitation assessment.

This study highlights the importance of individual analyses and of exploring the possible influence of variables not manipulated during the study. In discussing the study's results, Tincani (2004) noted that

For learners without hand-motor imitation skills, including many children with autism, PECS training may be more appropriate, at least in terms of initial mand acquisition. Jennifer had weak hand-motor imitation skills prior to intervention and learned picture exchange more rapidly than sign language. For learners who have moderate hand-motor imitation skills, sign language training may be equally, if not more, appropriate. Carl had moderate hand-motor imitation skills prior to intervention and learned sign language more rapidly than picture exchange. (p. 160)

Advantages of the Alternating Treatments Design

The alternating treatments design offers numerous advantages for evaluating and comparing two or more independent variables. Most of the benefits cited here were described by Ulman and Sulzer-Azaroff (1975), who are credited with first bringing the rationale and possibilities of the alternating treatments design to the attention of the applied behavior analysis community.

Does Not Require Treatment Withdrawal

A major advantage of the alternating treatments design is that it does not require the investigator to withdraw a seemingly effective treatment to demonstrate a functional relation. Reversing behavioral improvements raises ethical issues that can be avoided with the alternating treatments design. Regardless of ethical concerns, however, administrators and teachers may be more likely to accept an alternating treatments design over a reversal design even when one of the alternating treatments is a no-treatment control condition. "It would appear that a return to baseline conditions every other day or every third day is not as disagreeable to a teacher as is first establishing a high level of desirable behavior for a prolonged period, and then reinstating the baseline behaviors" (Ulman & Sulzer-Azaroff, 1975, p. 385).

Speed of Comparison

The experimental comparison of two or more treatments can often be made quickly with the alternating treatments design. In one study, an alternating treatments design enabled the superiority of one treatment over another in increasing the cooperative behavior of a 6-year-old boy to be determined after only 4 days (McCullough, Cornell, McDaniel, & Mueller, 1974). The alternating treatments design's ability to produce useful results quickly is a major reason that it is the basic experimental tactic used in functional behavior analysis.

When the effects of different treatments become apparent early in an alternating treatments design, the investigator can then switch to programming only the most effective treatment. The efficiency of the alternating treatments design can leave a researcher with meaningful data even when an experiment must be terminated early (Ulman & Sulzer-Azaroff, 1975). A reversal or multiple baseline design, on the other hand, must be carried through to completion to show a functional relation.

Minimizes Irreversibility Problem

Some behaviors, even though they have been brought about or modified by the application of an intervention, do not return to baseline levels when the intervention is withdrawn and thereby resist analysis with an A-B-A-B design. However, rapidly alternating treatment and no-treatment (baseline) conditions may reveal differences in responding between the two conditions, especially early in an experiment, before responding in the no-treatment condition begins to approximate the level of responding in the treatment condition.

Minimizes Sequence Effects

An alternating treatments design, when properly conducted, minimizes the extent to which an experiment's results are confounded by sequence effects. Sequence effects pose a threat to the internal validity of any experiment, but especially to those involving multiple treatments. The concern over sequence effects can be summed up by this simple question: Would the results have been the same if the sequence of treatments had been different? Sequence effects can be extremely difficult to control in experiments using reversal or multiple baseline tactics to compare two or more independent variables because each experimental condition must remain in effect for a fairly long period of time, thereby producing a specific sequence of events. However, in an alternating treatments design, the independent variables


are rapidly alternated with one another in a random fashion that produces no particular sequence. Also, each treatment is in effect for short periods of time, reducing the likelihood of carryover effects (O'Brien, 1968). The ability to minimize sequence effects makes the alternating treatments design a powerful tool for achieving complex behavior analyses.

Can Be Used with Unstable Data

Determining functional behavior–environment relations in the presence of unstable data presents a serious problem for the applied behavior analyst. Using steady state responding to predict, verify, and replicate behavioral changes is the foundation of experimental reasoning in behavior analysis (Sidman, 1960). Obtaining stable baseline responding, however, is extremely difficult with many socially important behaviors of interest to applied behavior analysts. Merely providing a subject with repeated opportunities to emit a target response can result in gradually improved performance. Although practice effects are worthy of empirical investigation because of their applied and scientific importance (Greenwood, Delquadri, & Hall, 1984; Johnston & Pennypacker, 1993a), the unstable baselines they create pose problems for the analysis of intervention variables. The changing levels of task difficulty inherent in moving through a curriculum of progressively more complex material also make obtaining steady state responding for many academic behaviors difficult.

Because the different treatment conditions are alternated rapidly in an alternating treatments design, because each treatment is presented many times throughout each time period encompassed by the study, and because no single condition is present for any considerable length of time, it can be presumed that any effects of practice, change in task difficulty, maturation, or other historical variables will be equally represented in each treatment condition and therefore will not differentially affect any one condition more or less than the others. For example, even though each of two data paths representing a student's reading performance under two different teaching procedures shows variable and ascending trends that might be due to practice effects and uneven curriculum materials, any consistent separation and vertical distance between the data paths can be attributed to differences in the teaching procedures.
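The point about trending but separated data paths can be illustrated numerically. In the invented series below, both paths rise (a shared practice effect), yet the session-by-session difference between them stays roughly constant; that consistent separation is what the analyst attributes to the teaching procedures (an illustrative sketch, not a published analysis):

```python
# Invented reading scores: both paths ascend (practice effect),
# but procedure B stays consistently above procedure A.
procedure_a = [10, 12, 11, 14, 15, 17]
procedure_b = [16, 18, 18, 20, 22, 23]

# Session-by-session vertical separation between the two data paths
separations = [b - a for a, b in zip(procedure_a, procedure_b)]
print(separations)                          # → [6, 6, 7, 6, 7, 6]
print(sum(separations) / len(separations))  # mean separation
```

Because practice effects and curriculum difficulty act on both conditions alike, the shared upward trend cancels out of the difference, leaving the treatment effect visible in the separation.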

Can Be Used to Assess Generalization of Effects

By alternating various conditions of interest, an experimenter can continually assess the degree of generalization of behavior change from an effective treatment to other conditions of interest. For example, by alternating different therapists in the final phase of their study of pica behavior, N. Singh and Winton (1985) were able to determine the extent to which the overcorrection treatment was effective when presented by different persons.

Intervention Can Begin Immediately

Although determining the preintervention level of responding is generally preferred, the clinical necessity of immediately attempting to change some behaviors precludes repeated measurement in the absence of intervention. When necessary, an alternating treatments design can be used without an initial baseline phase.

Considering the Appropriateness of the Alternating Treatments Design

The advantages of the alternating treatments design are significant. As with any experimental tactic, however, the alternating treatments design presents certain disadvantages and leaves unanswered certain questions that can be addressed only by additional experimentation.

Multiple Treatment Interference

The fundamental feature of the alternating treatments design is the rapid alternation of two or more independent variables irrespective of the behavioral measures obtained under each treatment. Although the rapid alternation minimizes sequence effects and reduces the time required to compare treatments, it raises the important question of whether the effects observed under any of the alternated treatments would be the same if each treatment were implemented alone. Multiple treatment interference refers to the confounding effects of one treatment on a subject's behavior being influenced by the effects of another treatment administered in the same study.

Multiple treatment interference must always be suspected in the alternating treatments design (Barlow & Hayes, 1979; McGonigle, Rojahn, Dixon, & Strain, 1987). However, by following the alternating treatments phase with a phase in which only the most effective treatment condition is in effect, the experimenter can assess the effects of that treatment when administered in isolation.

Unnatural Nature of Rapidly Alternating Treatments

The rapid back-and-forth switching of treatments does not reflect the typical manner in which clinical and educational interventions are applied. From an instructional perspective, rapid switching of treatments can be viewed as artificial and undesirable. In most instances, however, the quick comparison of treatments offered by the alternating treatments design compensates for concerns about its contrived nature. The concern of whether participants might suffer detrimental effects from the rapid alternation of conditions is an empirical question that can be determined only by experimentation. Also, it is helpful for practitioners to remember that one purpose of the alternating treatments design is to identify an effective intervention as quickly as possible so that the participant does not have to endure ineffective instructional approaches or treatments that would delay progress toward educational goals. On balance, the advantages of rapidly switching treatments to identify an efficacious intervention outweigh any undesirable effects that such manipulation may cause.

Limited Capacity

Although the alternating treatments design enables an elegant, scientifically sound method for comparing the differential effects of two or more treatments, it is not an open-ended design in which an unlimited number of treatments can be compared. Although alternating treatments designs with up to five conditions have been reported (e.g., Didden, Prinson, & Sigafoos, 2000), in most situations a maximum of four different conditions (one of which may be a no-treatment control condition) can be compared effectively within a single phase of an alternating treatments design, and in many instances only two different treatments can be accommodated. To separate the effects of each treatment condition from any effects that may be caused by aspects of the alternating treatments design, each treatment must be carefully counterbalanced across all potentially relevant aspects of its administration (e.g., time of day, order of presentation, settings, therapists). In many applied settings the logistics of counterbalancing and delivering more than two or three treatments would be cumbersome and would cause the experiment to require too many sessions to complete. Also, too many competing treatments can decrease the subject's ability to discriminate between treatments, thereby reducing the design's effectiveness.
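One way to appreciate the counterbalancing burden is to enumerate the treatment orders. The sketch below (condition names are illustrative, not from the text) cycles through every ordering of three conditions so that each condition occupies each daily time slot equally often; with only three conditions this already requires six days per complete cycle, which suggests why more than three or four treatments quickly becomes logistically cumbersome.

```python
import itertools
import random

# Hypothetical conditions to be alternated; the names are illustrative only.
treatments = ["Treatment A", "Treatment B", "Control"]

# Every possible daily ordering of the conditions. Running one ordering per
# day guarantees that, across a full cycle, each condition appears in each
# daily position (e.g., morning, midday, afternoon) the same number of times:
# one simple form of counterbalancing for order and time of day.
orders = list(itertools.permutations(treatments))
random.shuffle(orders)  # randomize which ordering each day receives

for day, order in enumerate(orders, start=1):
    print(f"Day {day}: {' -> '.join(order)}")
```

With k conditions a complete cycle needs k! days (6 for three conditions, 24 for four), before even counterbalancing settings or therapists.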

Selection of Treatments

Although an alternating treatments design can theoretically be used to compare the effects of any two discrete treatments, in reality the design is more limited. To enhance the probability of discrimination between conditions (i.e., obtaining reliable, measurable differences in behavior), the treatments should embody significant differences from one to the other. For example, an investigator using an alternating treatments design to study the effects of group size on students' academic performance during instruction might include conditions of 4, 10, and 20 students. Alternating conditions of 6, 7, and 8 students, however, is less likely to reveal a functional relation between group size and performance. However, a treatment condition should not be selected for inclusion in an alternating treatments design only because it might yield a data path that is easily differentiated from that of another condition. The applied in applied behavior analysis encompasses the nature of treatment conditions as well as the nature of behaviors investigated (Wolf, 1978). An important consideration in selecting treatment conditions should be the extent to which they are representative of current practices or practices that could conceivably be implemented. For example, although an experiment comparing the effects of 5 minutes, 10 minutes, and 30 minutes of math homework per school night on math achievement might be useful, a study comparing the effects of 5 minutes, 10 minutes, and 3 hours of math homework per night probably would not be. Even if such a study found 3 hours of nightly math homework extremely effective in raising students' achievement in math, few teachers, parents, administrators, or students would carry out a program of 3 hours of nightly homework for a single content area.

Another consideration is that some interventions may not produce important behavior change unless and until they have been implemented consistently over a continuous period of time.

When a multielement baseline design is employed, overlapping data do not necessarily rule out the possible efficacy of an experimental procedure. The session-by-session alternation of conditions might obscure effects that could be observed if the same condition was presented during several consecutive sessions. It is therefore possible that a given treatment may prove to be effective with a reversal or multiple baseline design, but not with a multielement baseline design. (Ulman & Sulzer-Azaroff, 1975, p. 382)

The suspicion that a given treatment may be effective if it is presented in isolation for an extended period is an empirical question that can be explored properly only through experimentation. At one level, if extended application of a single treatment results in behavioral improvement, the practitioner might be satisfied, and no further action would be needed. However, the practitioner-researcher who is interested in determining experimental control might return to an alternating treatments design and compare the performance of the single treatment with that of another intervention.


Reversal and Alternating Treatments Designs

Summary

Reversal Design

1. The reversal tactic (A-B-A) entails repeated measurement of behavior in a given setting during three consecutive phases: (a) a baseline phase (absence of the independent variable), (b) a treatment phase (introduction of the independent variable), and (c) a return to baseline conditions (withdrawal of the independent variable).

2. The reversal design is strengthened tremendously by reintroducing the independent variable in the form of an A-B-A-B design. The A-B-A-B design is the most straightforward and generally most powerful intrasubject design for demonstrating functional relations.

Variations of the A-B-A-B Design

3. Extending the A-B-A-B design with repeated reversals may provide a more convincing demonstration of a functional relation than a design with one reversal.

4. The B-A-B reversal design can be used with target behaviors for which an initial baseline phase is inappropriate or not possible for ethical or practical reasons.

5. Multiple treatment reversal designs use the reversal tactic to compare the effects of two or more experimental conditions to baseline and/or to one another.

6. Multiple treatment reversal designs are particularly susceptible to confounding by sequence effects.

7. The NCR reversal technique enables the isolation and analysis of the contingent aspect of reinforcement.

8. Reversal techniques incorporating DRO and DRI/DRA control conditions can also be used to demonstrate the effects of contingent reinforcement.

Considering the Appropriateness of the Reversal Design

9. An experimental design based on the reversal tactic is ineffective in evaluating the effects of a treatment variable that, by its very nature, cannot be withdrawn once it has been presented (e.g., instruction, modeling).

10. Once improved, some behaviors will not reverse to baseline levels even though the independent variable has been withdrawn. Such behavioral irreversibility precludes effective use of the reversal design.

11. Legitimate social, educational, and ethical concerns are often raised over withdrawing a seemingly effective treatment variable to provide scientific verification of its function in changing behavior.

12. Sometimes very brief reversal phases, or even one-session baseline probes, can demonstrate believable experimental control.

Alternating Treatments Design

13. The alternating treatments design compares two or more distinct treatments (i.e., independent variables) while their effects on the target behavior (i.e., dependent variable) are measured.

14. In an alternating treatments design, each successive data point for a specific treatment plays three roles: it provides (a) a basis for the prediction of future levels of responding under that treatment, (b) potential verification of the previous prediction of performance under that treatment, and (c) the opportunity for replication of previous effects produced by that treatment.

15. Experimental control is demonstrated in the alternating treatments design when the data paths for two different treatments show little or no overlap.

16. The extent of any differential effects produced by two treatments is determined by the vertical distance between their respective data paths and quantified by the vertical axis scale.

Variations of the Alternating Treatments Design

17. Common variations of the alternating treatments design include the following:

• Single-phase alternating treatments design without a no-treatment control condition

• Single-phase design with a no-treatment control condition

• Two-phase design: initial baseline phase followed by the alternating treatments phase

• Three-phase design: initial baseline phase followed by the alternating treatments phase and a final best treatment phase

Advantages of the Alternating Treatments Design

18. Advantages of the alternating treatments design include the following:

• Does not require treatment withdrawal.

• Quickly compares the relative effectiveness of treatments.

• Minimizes the problem of irreversibility.

• Minimizes sequence effects.

• Can be used with unstable data patterns.

• Can be used to assess generalization of effects.

• Intervention can begin immediately.

Considering the Appropriateness of the Alternating Treatments Design

19. The alternating treatments design is susceptible to multiple treatment interference. However, by following the alternating treatments phase with a phase in which only one treatment is administered, the experimenter can assess the effects of that treatment in isolation.

20. The rapid back-and-forth switching of treatments does not reflect the typical manner in which interventions are applied and may be viewed as artificial and undesirable.

21. An alternating treatments phase is usually limited to a maximum of four different treatment conditions.

22. The alternating treatments design is most effective in revealing the differential effects of treatment conditions that differ significantly from one another.

23. The alternating treatments design is not effective for assessing the effects of an independent variable that produces important changes in behavior only when it is consistently administered over a continuous period of time.

Multiple Baseline and Changing Criterion Designs

Key Terms

changing criterion design
delayed multiple baseline design
multiple baseline across behaviors design
multiple baseline across settings design
multiple baseline across subjects design
multiple baseline design
multiple probe design

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List©, Third Edition

Content Area 5: Experimental Evaluation of Interventions

5-1 Systematically manipulate independent variables to analyze their effects on treatment.

(d) Use changing criterion design.

(e) Use multiple baseline designs.

5-2 Identify and address practical and ethical considerations in using various experimental designs.

© 2006 The Behavior Analyst Certification Board, Inc.® (BACB®), all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

From Chapter 9 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.

This chapter describes two additional experimental tactics for analyzing behavior–environment relations—the multiple baseline design and the changing criterion design. In a multiple baseline design, after collecting initial baseline data simultaneously across two or more behaviors, settings, or people, the behavior analyst then applies the treatment variable sequentially across these behaviors, settings, or people and notes the effects. The changing criterion design is used to analyze improvements in behavior as a function of stepwise, incremental criterion changes in the level of responding required for reinforcement. In both designs, experimental control and a functional relation are demonstrated when the behaviors change from a steady state baseline to a new steady state after the independent variable is applied or a new criterion is established.

Multiple Baseline Design

The multiple baseline design is the most widely used experimental design for evaluating treatment effects in applied behavior analysis. It is a highly flexible tactic that enables researchers and practitioners to analyze the effects of an independent variable across multiple behaviors, settings, and/or subjects without having to withdraw the treatment variable to verify that the improvements in behavior were a direct result of the application of the treatment. As you recall, the reversal design by its very nature requires that the independent variable be withdrawn to verify the prediction established in baseline. This is not so with the multiple baseline design.

Operation and Logic of the Multiple Baseline Design

Baer, Wolf, and Risley (1968) first described the multiple baseline design in the applied behavior analysis literature. They presented the multiple baseline design as an alternative to the reversal design for two situations: (a) when the target behavior is likely to be irreversible or (b) when it is undesirable, impractical, or unethical to reverse conditions. Figure 1 illustrates Baer and colleagues' explanation of the basic operation of the multiple baseline design.

In the multiple baseline technique, a number of responses are identified and measured over time to provide baselines against which changes can be evaluated. With these baselines established, the experimenter then applies an experimental variable to one of the behaviors, produces a change in it, and perhaps notes little or no change in the other baselines. If so, rather than reversing the just-produced change, he instead applies the experimental variable to one of the other, as yet unchanged, responses. If it changes at that point, evidence is accruing that the experimental variable is indeed effective, and that the prior change was not simply a matter of coincidence. The variable then may be applied to still another response, and so on. The experimenter is attempting to show that he has a reliable experimental variable, in that each behavior changes maximally only when the experimental variable is applied to it. (p. 94)

¹Although most of the graphic displays created or selected for this text as examples of experimental design tactics show data plotted on noncumulative vertical axes, the reader is reminded that repeated measurement data collected within any type of experimental design can be plotted on both noncumulative and cumulative graphs. For example, Lalli, Zanolli, and Wohn (1994) and Mueller, Moore, Doggett, and Tingstrom (2000) used cumulative graphs to display the data they collected in multiple baseline design experiments; and Kennedy and Souza (1995) and Sundberg, Endicott, and Eigenheer (2000) displayed the data they obtained in reversal designs on cumulative graphs. Students of applied behavior analysis should be careful not to confuse the different techniques for graphically displaying data with tactics for experimental analysis.

The multiple baseline design takes three basic forms:

• The multiple baseline across behaviors design, consisting of two or more different behaviors of the same subject

• The multiple baseline across settings design, consisting of the same behavior of the same subject in two or more different settings, situations, or time periods

• The multiple baseline across subjects design, consisting of the same behavior of two or more different participants (or groups)

Although only one of the multiple baseline design's basic forms is called an "across behaviors" design, all multiple baseline designs involve the time-lagged application of a treatment variable across technically different (meaning independent) behaviors. That is, in the multiple baseline across settings design, even though the subject's performance of the same target behavior is measured in two or more settings, each behavior–setting combination is conceptualized and treated as a different behavior for analysis. Similarly, in a multiple baseline across subjects design, each subject–behavior combination functions as a different behavior in the operation of the design.

Figure 1 Graphic prototype of a multiple baseline design. [Graph: three stacked tiers (Behavior 1, Behavior 2, Behavior 3); vertical axis: Response Measure; horizontal axis: Sessions (1–30); phases: Baseline, Treatment.]

Figure 2 shows the same data set displayed in Figure 1 with the addition of data points representing predicted measures if baseline conditions were not changed and shaded areas illustrating how the three elements of baseline logic—prediction, verification, and replication—are operationalized in the multiple baseline design.¹ When stable baseline responding has been achieved for Behavior 1, a prediction is made that if the environment were held constant, continued measurement would reveal similar levels of responding. When the researcher's confidence in such a prediction is justifiably high, the independent variable is applied to Behavior 1. The open data points in the treatment phase for Behavior 1 represent the predicted level of responding. The solid data points show the actual measures obtained for Behavior 1 during the treatment condition. These data show a discrepancy with the predicted level of responding if no changes had been made in the environment, thereby suggesting that the treatment may be responsible for the change in behavior. The data collected for Behavior 1 in a multiple baseline design serve the same functions as the data collected during the first two phases of an A-B-A-B reversal design.

Continued baseline measures of the other behaviors in the experiment offer the possibility of verifying the prediction made for Behavior 1. In a multiple baseline design, verification of a predicted level of responding for one behavior (or tier) is obtained if little or no change is observed in the data paths of the behaviors (tiers) that are still exposed to the conditions under which the prediction was made. In Figure 2 those portions of the baseline condition data paths for Behaviors 2 and 3 within the shaded boxes verify the prediction for Behavior 1. At this point in the experiment, two inferences can be made: (a) The prediction that Behavior 1 would not change in a constant environment is valid because the environment was held constant for Behaviors 2 and 3 and their levels of responding remained unchanged; and (b) the observed changes in Behavior 1 were brought about by the independent variable because only Behavior 1 was exposed to the independent variable and only Behavior 1 changed.

Figure 2 Graphic prototype of a multiple baseline design with shading added to show elements of baseline logic. Open data points represent predicted measures if baseline conditions were unchanged. Baseline data points for Behaviors 2 and 3 within shaded areas verify the prediction made for Behavior 1. Behavior 3 baseline data within Bracket A verify the prediction made for Behavior 2. Data obtained during the treatment condition for Behaviors 2 and 3 (cross-hatched shading) provide replications of the experimental effect. [Graph: three stacked tiers; vertical axis: Response Measure; horizontal axis: Sessions (1–30); phases: Baseline, Treatment.]

In a multiple baseline design, the independent variable's function in changing a given behavior is inferred by the lack of change in untreated behaviors. However, verification of function is not demonstrated directly as it is with the reversal design, thereby making the multiple baseline design an inherently weaker tactic (i.e., less convincing from the perspective of experimental control) for revealing a functional relation between the independent variable and a target behavior. However, the multiple baseline design compensates somewhat for this weakness by providing the opportunity to verify or refute a series of similar predictions. Not only is the prediction for Behavior 1 in Figure 2 verified by continued stable baselines for Behaviors 2 and 3, but the bracketed portion of the baseline data for Behavior 3 also serves as verification of the prediction made for Behavior 2.

When the level of responding for Behavior 1 under the treatment condition has stabilized or reached a predetermined performance criterion, the independent variable is then applied to Behavior 2. If Behavior 2 changes in a manner similar to the changes observed for Behavior 1, replication of the independent variable's effect has been achieved (shown by the data path shaded with cross-hatching). After Behavior 2 has stabilized or reached a predetermined performance criterion, the independent variable is applied to Behavior 3 to see whether the effect will be replicated. The independent variable may be applied to additional behaviors in a similar manner until a convincing demonstration of the functional relation has been established (or rejected) and all of the behaviors targeted for improvement have received treatment.

As with verification, replication of the independent variable's specific effect on each behavior in a multiple baseline design is not manipulated directly. Instead, the generality of the independent variable's effect across the behaviors comprising the experiment is demonstrated by applying it to a series of behaviors. Assuming accurate measurement and proper experimental control of relevant variables (i.e., the only environmental factor that changes during the course of the experiment should be the presence—or absence—of the independent variable), each time a behavior changes when, and only when, the independent variable is introduced, confidence in the existence of a functional relation increases.
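The time-lagged logic can be sketched numerically. The data below are hypothetical and purely illustrative (not from any study cited in the text): each tier's responding shifts only after the independent variable reaches that tier, and the repeated baseline-to-treatment shift across tiers is the design's evidence for a functional relation.

```python
# Hypothetical session data for a three-tier multiple baseline design.
# The treatment is introduced in time-lagged fashion, so later tiers
# have longer baselines.
tiers = {
    "Behavior 1": {"baseline": [2, 3, 2, 3], "treatment": [9, 10, 11, 10]},
    "Behavior 2": {"baseline": [4, 3, 4, 3, 4, 3], "treatment": [12, 13, 12]},
    "Behavior 3": {"baseline": [1, 2, 1, 2, 1, 2, 1, 2], "treatment": [8, 9, 8]},
}

def mean(xs):
    return sum(xs) / len(xs)

# Each tier shows a clear baseline-to-treatment shift, and each shift
# occurs only when the independent variable is applied to that tier:
# the replicated pattern that supports a functional relation.
for name, data in tiers.items():
    shift = mean(data["treatment"]) - mean(data["baseline"])
    print(f"{name}: baseline mean {mean(data['baseline']):.1f}, "
          f"treatment mean {mean(data['treatment']):.1f}, shift {shift:+.1f}")
```

In practice, of course, the judgment is made visually from the graphed data paths rather than from summary means alone.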

How many different behaviors, settings, or subjects must a multiple baseline design include to provide a believable demonstration of a functional relation? Baer, Wolf, and Risley (1968) suggested that the number of replications needed in any design is ultimately a matter to be decided by the consumers of the research. In this sense, an experiment using a multiple baseline design must contain the minimum number of replications necessary to convince those who will be asked to respond to the experiment and to the researcher's claims (e.g., teachers, administrators, parents, funding sources, journal editors). A two-tier multiple baseline design is a complete experiment and can provide strong support for the effectiveness of the independent variable (e.g., Lindberg, Iwata, Roscoe, Worsdell, & Hanley, 2003; McCord, Iwata, Galensky, Ellingson, & Thomson, 2001; Newstrom, McLaughlin, & Sweeney, 1999; Test, Spooner, Keul, & Grossi, 1990). McClannahan, McGee, MacDuff, and Krantz (1990) conducted a multiple baseline design study in which the independent variable was sequentially implemented in an eight-tier design across 12 participants. Multiple baseline designs of three to five tiers are most common. When the effects of the independent variable are substantial and reliably replicated, a three- or four-tier multiple baseline design provides a convincing demonstration of experimental effect. Suffice it to say that the more replications one conducts, the more convincing the demonstration will be.

Some of the earliest examples of the multiple baseline design in the applied behavior analysis literature were studies by Risley and Hart (1968); Barrish, Saunders, and Wolf (1969); Barton, Guess, Garcia, and Baer (1970); Panyan, Boozer, and Morris (1970); and Schwarz and Hawkins (1970). Some of the pioneering applications of the multiple baseline technique are not readily apparent with casual examination: The authors may not have identified the experimental design as a multiple baseline design (e.g., Schwarz & Hawkins, 1970), and/or the now-common practice of stacking the tiers of a multiple baseline design one on the other so that all of the data can be displayed graphically in the same figure was not always used (e.g., Maloney & Hopkins, 1973; McAllister, Stachowiak, Baer, & Conderman, 1969; Schwarz & Hawkins, 1970).

In 1970, Vance Hall, Connie Cristler, Sharon Cranston, and Bonnie Tucker published a paper that described three experiments, each an example of one of the three basic forms of the multiple baseline design: across behaviors, across settings, and across subjects. Hall and colleagues' paper was important not only because it provided excellent illustrations that today still serve as models of the multiple baseline design, but also because the studies were carried out by teachers and parents, indicating that practitioners "can carry out important and significant studies in natural settings using resources available to them" (p. 255).

Multiple Baseline across Behaviors Design

The multiple baseline across behaviors design begins with the concurrent measurement of two or more behaviors of a single participant. After steady state responding has been obtained under baseline conditions, the investigator applies the independent variable to one of the behaviors while maintaining baseline conditions for the other behavior(s). When steady state or criterion-level performance has been reached for the first behavior, the independent variable is applied to the next behavior, and so on (e.g., Bell, Young, Salzberg, & West, 1991; Gena, Krantz, McClannahan, & Poulson, 1996; Higgins, Williams, & McLaughlin, 2001 [see Figure 26.8]).

Ward and Carnes (2002) used a multiple baseline across behaviors design to evaluate the effects of self-set goals and public posting on the execution of three skills by five linebackers on a college football team: (a) reads, in which the linebacker positions himself to cover a specified area on the field on a pass play or from the line of scrimmage on a run; (b) drops, in which the linebacker moves to the correct position depending on the offensive team's alignment; and (c) tackles. A video camera recorded the players' movements during all practice sessions and games. Data were collected for the first 10 opportunities each player had with each skill. Reads and drops were recorded as correct if the player moved to the zone identified in the coaches' playbook; tackles were scored as correct if the offensive ball carrier was stopped.

Following baseline, each player met with one of the researchers, who described the player's mean baseline performance for a given skill. Players were asked to set a goal for their performances during practice sessions; no goals were set for games. The correct performances during baseline for all five players ranged from 60 to 80%, and all players set goals of 90% correct performance. The players were informed that their performance in each day's practice would be posted on a chart prior to the next practice session. A Y (yes) or an N (no) was placed next to each player's name to indicate whether he had met his goal. A player's performance was posted on the chart only for the skill(s) in intervention. The chart was mounted on a wall in the locker room where all players on the team could see it. The head coach explained the purpose of the chart to other players on the team. Players' performances during games were not posted on the chart.

The results for one of the players, John, are shown in Figure 3. John met or exceeded his goal of 90% correct performance during all practices for each of the three skills. Additionally, his improved performance generalized to games. The same pattern of results was obtained for each of the other four players in the study, illustrating that the multiple baseline across behaviors design is a single-subject experimental strategy in which each subject serves as his own control. Each player constituted a complete experiment, replicated in this case with four other participants.

Figure 3 A multiple baseline across behaviors design showing percentage of correct reads, drops, and tackles by a college football player during practices and games. [Graph: three stacked tiers (Reads, Drops, Tackles) for John; vertical axis: Percentage of Correct Performances (60–100); horizontal axis: Sessions (1–40); phases: Baseline, Public Posting; separate data paths for Practice and Game.] From "Effects of Posting Self-Set Goals on Collegiate Football Players' Skill Execution During Practice and Games" by P. Ward and M. Carnes, 2002, Journal of Applied Behavior Analysis, 35, p. 5. Copyright 2002 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Multiple Baseline across Settings Design

In the multiple baseline across settings design, a singlebehavior of a person (or group) is targeted in two or moredifferent settings or conditions (e.g., locations, times ofday). After stable responding has been demonstratedunder baseline conditions, the independent variable is in-troduced in one of the settings while baseline conditions

remain in effect in the other settings. When maximum behavior change or criterion-level performance has been achieved in the first setting, the independent variable is applied in the second setting, and so on.

Roane, Kelly, and Fisher (2003) employed a multiple baseline across settings design to evaluate the effects of a treatment designed to reduce the rate at which an 8-year-old boy put inedible objects in his mouth. Jason, who had been diagnosed with autism, cerebral palsy, and moderate mental retardation, had a history of putting objects such as toys, cloth, paper, tree bark, plants, and dirt into his mouth.

Data on Jason's mouthing were obtained concurrently in a classroom, a playroom, and outdoors—three settings that contained a variety of inedible objects and


where caretakers had reported Jason's mouthing to be problematic. Observers in each setting unobtrusively tallied the number of times Jason inserted an inedible object past the plane of his lips during 10-minute sessions. The researchers reported that Jason's object mouthing usually consisted of a series of discrete episodes, rather than an extended, continuous event, and that he often placed multiple objects (inedible objects and food) in his mouth simultaneously.
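Because each observation session lasted 10 minutes, session tallies of this kind are typically reported as responses per minute. A minimal sketch of that conversion (the tallies here are invented for illustration, not data from the study):

```python
# Convert a session tally of responses to a rate measure:
# rate = number of responses / observation time in minutes.
# A 10-minute observation with 9 tallied responses yields
# 0.9 responses per minute; the tally itself is made up.

def responses_per_minute(count: int, session_minutes: float = 10.0) -> float:
    """Return the response rate for one observation session."""
    return count / session_minutes

rate = responses_per_minute(9)  # -> 0.9
```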

Roane and colleagues (2003) described the baseline and treatment conditions for Jason as follows:

The baseline condition was developed based on the functional analysis results, which showed that mouthing was maintained by automatic reinforcement and occurred independent of social consequences. During baseline, a therapist was present (approximately 1.5 to 3 m from Jason), but all occurrences of mouthing were ignored (i.e., no social consequences were arranged for mouthing, and Jason was allowed to place items in his mouth). No food items were available during baseline. The treatment condition was identical to baseline except that Jason had continuous access to foods that had been previously identified to compete with the occurrence of object mouthing: chewing gum, marshmallows, and hard candy. Jason wore a fanny pack containing these items around his waist. (pp. 580–581)

2. Functional analysis and automatic reinforcement are described in chapters entitled "Functional Behavior Assessment" and "Positive Reinforcement," respectively.

The staggered sequence in which the treatment was implemented in each setting and the results are shown in Figure 4. During baseline, Jason mouthed objects at mean rates of 0.9, 1.1, and 1.2 responses per minute in the classroom, playroom, and outdoor settings, respectively. Introduction of the fanny pack with food in each setting produced an immediate drop to a zero or near-zero rate of mouthing. During treatment, Jason put items of food from the fanny pack into his mouth at mean rates of 0.01, 0.01, and 0.07 responses per minute in the classroom, playroom, and outdoor settings, respectively. The multiple baseline across settings design revealed a clear functional relation between the treatment and the frequency of Jason's object mouthing. No measures obtained during the treatment condition were as

Figure 4 A multiple baseline across settings design showing the number of object mouthing responses per minute during baseline and treatment conditions. From "The Effects of Noncontingent Access to Food on the Rate of Object Mouthing across Three Settings" by H. S. Roane, M. L. Kelly, and W. W. Fisher, 2003, Journal of Applied Behavior Analysis, 36, p. 581. Copyright 2003 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.


high as the lowest measures in baseline. During 22 of 27 treatment sessions across the three settings, Jason put no inedible objects in his mouth.

As was done in the study by Roane and colleagues (2003), the data paths that comprise the different tiers in a multiple baseline across settings design are typically obtained in different physical environments (e.g., Cushing & Kennedy, 1997; Dalton, Martella, & Marchand-Martella, 1999). However, the different "settings" in a multiple baseline across settings design may exist in the same physical location and be differentiated from one another by different contingencies in effect, the presence or absence of certain people, and/or different times of the day. For example, in a study by Parker and colleagues (1984), the presence or absence of other people in the training room constituted the different settings (environments) in which the effects of the independent variable were evaluated. The attention, demand, and no-attention conditions (i.e., contingencies in effect) defined the different settings in a multiple baseline design study by Kennedy, Meyer, Knowles, and Shukla (2000). The afternoon and the morning portions of the school day functioned as different settings in the multiple baseline across settings design used by Dunlap, Kern-Dunlap, Clarke, and Robbins (1991) to analyze the effects of curricular revisions on a student's disruptive and off-task behaviors.

In some studies using a multiple baseline across settings design, the participants are varied, changing, and perhaps even unknown to the researchers. For example, Van Houten and Malenfant (2004) used a multiple baseline design across two crosswalks on busy streets to evaluate the effects of an intensive driver enforcement program on the percentage of drivers yielding to pedestrians and the number of motor vehicle–pedestrian conflicts. Watson (1996) used a multiple baseline design across men's restrooms on a college campus to assess the effectiveness of posting signs in reducing bathroom graffiti.

Multiple Baseline across Subjects Design

In the multiple baseline across subjects design, one target behavior is selected for two or more subjects (or groups) in the same setting. After steady state responding has been achieved under baseline conditions, the independent variable is applied to one of the subjects while baseline conditions remain in effect for the other subjects. When criterion-level or stable responding has been attained for the first subject, the independent variable is applied to another subject, and so on. The multiple baseline across subjects design is the most widely used of all three forms of the design, in part because teachers, clinicians, and other practitioners are commonly confronted

by more than one student or client needing to learn the same skill or eliminate the same problem behavior (e.g., Craft, Alber, & Heward, 1998; Kahng, Iwata, DeLeon, & Wallace, 2000; Killu, Sainato, Davis, Ospelt, & Paul, 1998; Kladopoulos & McComas, 2001). Sometimes a multiple baseline design is conducted across "groups" of participants (e.g., Dixon & Holcomb, 2000; Lewis, Powers, Kelk, & Newcomer, 2002; White & Bailey, 1990).

Krantz and McClannahan (1993) used a multiple baseline across subjects design to investigate the effects of introducing and fading scripts to teach children with autism to interact with their peers. The four participants, ages 9 to 12, had severe communication deficits and minimal or absent academic, social, and leisure skills. Prior to the study each of the children had learned to follow first photographic activity schedules (Wacker & Berg, 1983) and later written activity schedules that prompted them through chains of academic, self-care, and leisure activities. Although their teachers modeled social interactions, verbally prompted the children to interact, and provided contingent praise and preferred snacks and activities for doing so, the children consistently failed to initiate interactions without adult prompts.

Each session consisted of a continuous 10-minute interval in which observers recorded the number of times each child initiated and responded to peers while engaged in three art activities—drawing, coloring, and painting—that were rotated across sessions throughout the study. Krantz and McClannahan (1993) described the dependent variables as follows:

Initiation to peers was defined as understandable statements or questions that were unprompted by an adult, that were directed to another child by using his or her name or by facing him or her, and that were separated from the speaker's previous vocalizations by a change in topic or a change in recipient of interaction. . . . Scripted interactions were those that matched the written script, . . . e.g., "Ross, I like your picture." Unscripted interactions differed from the script by more than changes in conjunctions, articles, prepositions, pronouns, or changes in verb tense; the question, "Would you like some more paper?" was scored as an unscripted initiation because the noun "paper" did not occur in the script. A response was defined as any contextual utterance (word, phrase, or sentence) that was not prompted by the teacher and that occurred within 5 s of a statement or question directed to the target child. . . . Examples of responses were "what?" "okay," and "yes, I do." (p. 124)

During baseline, each child found art materials at his or her place and a sheet of paper with the written instructions, "Do your art" and "Talk a lot." The teacher prompted each child to read the written instructions, then moved away. During the script condition, the two written


instructions in baseline were supplemented by scripts consisting of 10 statements and questions such as, "{Name}, did you like to {swing/rollerskate/ride the bike} outside today?" "{Name}, do you want to use one of my {pencils/crayons/brushes}?" (p. 124). Immediately before each session, the teacher completed blank portions of the scripts so that they reflected activities the children had completed or were planning and objects in the classroom environment. Each child's script included the three other children's names, and the order of the questions or statements varied across sessions and children.

The script condition was implemented with one child at a time, in staggered fashion (see Figure 5). Initially the teacher manually guided the child through the script, prompting him or her to read the statement to another child and to pencil a check mark next to it after doing so.

Krantz and McClannahan (1993) described the prompting and script-fading procedures as follows:

Standing behind a participant, the teacher manually guided him or her to pick up a pencil, point to an instruction or a scripted statement or question, and move the pencil along below the text. If necessary, the teacher also manually guided the child's head to face another child to whom a statement or question was addressed. If the child did not verbalize the statement or question within 5 s, the manual guidance procedure was repeated. If the child read or said a statement or read or asked a question, the teacher used the same type of manual guidance to ensure that the child placed a check mark to the left of that portion of the script.

Manual prompts were faded as quickly as possible; no prompts were delivered to Kate, Mike, Walt, and

Figure 5 A multiple baseline across subjects design showing the number of scripted and unscripted initiations to peers and responses by four children with autism during baseline, script, and follow-up sessions. Arrows indicate when fading steps occurred. From "Teaching Children with Autism to Initiate to Peers: Effects of a Script-Fading Procedure" by P. J. Krantz and L. E. McClannahan, 1993, Journal of Applied Behavior Analysis, 26, p. 129. Copyright 1993 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.


Ross after Sessions 15, 18, 23, and 27, respectively, and the teacher remained at the periphery of the classroom throughout subsequent sessions. After manual guidance had been faded for a target child, fading of the script began. Scripts were faded from end to beginning in five phases. For example, the fading steps for the question "Mike, what do you like to do best on Fun Friday?" were (a) "Mike, what do you like to do best," (b) "Mike, what do you," (c) "Mike, what," (d) "M," and (e) "." (p. 125)
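The end-to-beginning fading steps quoted above can be sketched as successive truncations of the script. Because the published cut points were chosen clinically rather than by a fixed formula, this sketch takes them as an explicit hand-supplied list (an assumption for illustration), and punctuation on the faded steps is omitted:

```python
# Sketch of end-to-beginning script fading: each phase keeps a
# shorter prefix of the script. The cut points below reproduce the
# five phases quoted for "Mike, what do you like to do best on Fun
# Friday?" (ignoring the trailing commas and periods shown in the
# published steps); they are supplied by hand, not computed.

def fading_steps(script: str, keep_chars: list[int]) -> list[str]:
    """Return successively shorter prefixes of the script."""
    return [script[:n] for n in keep_chars]

script = "Mike, what do you like to do best on Fun Friday?"
steps = fading_steps(script, [33, 17, 10, 1, 0])
# steps -> ["Mike, what do you like to do best",
#           "Mike, what do you", "Mike, what", "M", ""]
```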

Kate and Mike, who never initiated during baseline, had mean initiations per session of 15 and 13, respectively, during the script condition. Walt's initiations increased from a baseline mean of 0.1 to 17 during the script condition, and Ross averaged 14 initiations per session during script compared to 2 during baseline. As the scripts were faded, each child's frequency of unscripted initiations increased. After the scripts were faded, the four participants' frequency of initiations was within the same range as that of a sample of three typically developing children. The researchers implemented the script-fading steps with each participant in response to his or her performance, not according to a predetermined schedule, thereby retaining the flexibility needed to pursue the behavior–environment relations that are the focus of the science of behavior.

However, because each subject did not serve as his or her own control, this study illustrates that the multiple baseline across subjects design is not a true single-subject design. Instead, verification of predictions based on the baseline data for each subject must be inferred from the relatively unchanging measures of the behavior of other subjects who are still in baseline, and replication of effects must be inferred from changes in the behavior of other subjects when they come into contact with the independent variable. This is both a weakness and a potential advantage of the multiple baseline across subjects design (Johnston & Pennypacker, 1993a), discussed later in the chapter.

Variations of the Multiple Baseline Design

Two variations of the multiple baseline design are the multiple probe design and the delayed multiple baseline design. The multiple probe design enables the behavior analyst to extend the operation and logic of the multiple baseline tactic to behaviors or situations in which concurrent measurement of all behaviors comprising the design is unnecessary, potentially reactive, impractical, or too costly. The delayed multiple baseline technique can be used when a planned reversal design is no longer possible or proves ineffective; it can also add additional tiers to an already operational multiple baseline design, as would be the case if new subjects were added to an ongoing study.

Multiple Probe Design

The multiple probe design, first described by Horner and Baer (1978), is a method of analyzing the relation between the independent variable and the acquisition of a successive approximation or task sequence. In contrast to the multiple baseline design—in which data are collected simultaneously throughout the baseline phase for each behavior, setting, or subject in the experiment—in the multiple probe design intermittent measures, or probes, provide the basis for determining whether behavior change has occurred prior to intervention. According to Horner and Baer, when applied to a chain or sequence of related behaviors to be learned, the multiple probe design provides answers to four questions: (a) What is the initial level of performance on each step (behavior) in the sequence? (b) What happens when sequential opportunities to perform each step in the sequence are provided prior to training on that step? (c) What happens to each step as training is applied? and (d) What happens to the performance of untrained steps in the sequence as criterion-level performance is reached on the preceding steps?

Figure 6 shows a graphic prototype of the multiple probe design. Although researchers have developed many variations of the multiple probe technique, the basic design has three key features: (a) an initial probe is taken to determine the subject's level of performance on each behavior in the sequence; (b) a series of baseline measures is obtained on each step prior to training on that step; and (c) after criterion-level performance is reached on any training step, a probe of each step in the sequence is obtained to determine whether performance changes have occurred in any other steps.

Thompson, Braam, and Fuqua (1982) used a multiple probe design to analyze the effects of an instructional procedure composed of prompts and token reinforcement on the acquisition of a complex chain of laundry skills by three students with developmental disabilities. Observations of people doing laundry resulted in a detailed task analysis of 74 discrete responses that were organized into seven major components (e.g., sorting, loading washer). Each student's performance was assessed via probe and baseline sessions that preceded training on each component. Probe and baseline sessions began with instructions to the student to do the laundry. When an incorrect response was emitted or when no response occurred within 5 seconds of a prompt to continue, the student was seated away from the laundry area. The trainer then performed the correct response and called the student back to the area so that assessment of the rest of the laundry sequence could continue.

Probe sessions differed from baseline sessions in two ways. First, a probe measured each response in the entire


Figure 6 Graphic prototype of a multiple probe design. Square data points represent results of probe sessions in which the entire sequence or set of behaviors (1–4) are tested.

chain and occurred immediately prior to baseline and training for every component. Baseline sessions occurred following the probe and measured only previously trained components plus the component about to be trained. Baseline data were gathered on a variable number of consecutive sessions immediately prior to training sessions. Second, no tokens or descriptive praise were delivered during probes. During baseline, tokens were delivered for previously trained responses only. . . . Following baseline, each component was trained using a graduated 3-prompt procedure (Horner & Keilitz, 1975), consisting of verbal instruction, modeling, and graduated guidance. If one prompt level failed to produce a correct response within 5 sec, the next level was introduced. . . . When the student performed a component at 100% accuracy for two consecutive trials, he was required to perform the entire laundry chain from the beginning through the component most recently mastered. The entire chain of previously mastered components was

trained (chain training condition) until it was performed without errors or prompts for two consecutive trials. (Thompson, Braam, & Fuqua, 1982, p. 179)

Figure 7 shows the results for Chester, one of the students. Chester performed a low percentage of correct responses during the probe and baseline sessions, but performed with 100% accuracy after training was applied to each component. During a generalization probe conducted at a community laundromat after training, Chester performed correctly 82% of the 74 total responses in the chain. Five additional training sessions were needed to retrain responses performed incorrectly during the generalization probe and to train "additional responses necessitated by the presence of coin slots and minor differences between the training and laundromat equipment" (p. 179). On two follow-up sessions conducted 10 months after training, Chester performed at 90% accuracy even though he had not performed the laundry task for the past 2 months. Similar results were obtained for the other two students who participated in the study.

Figure 7 A multiple probe design showing the percentage of correct responses for each trial on each component of a laundry task by a young adult male with mental retardation. Heavy vertical lines on the horizontal axis represent successive training sessions; lighter and shorter vertical lines indicate trials within a session. From "Training and Generalization of Laundry Skills: A Multiple-Probe Evaluation with Handicapped Persons" by T. J. Thompson, S. J. Braam, and R. W. Fuqua, 1982, Journal of Applied Behavior Analysis, 15, p. 180. Copyright 1982 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Thompson and colleagues (1982) added the chain training condition to their study because they believed that components trained as independent skills were unlikely to be emitted in correct sequence without such practice. It should be noted that the experimenters did not begin training a new component until stable responding had been achieved during baseline observations (see the baseline data for the bottom four tiers in Figure 7). Delaying the training in this manner enabled a clear demonstration of a functional relation between training and skill acquisition.

The multiple probe design is particularly appropriate for evaluating the effects of instruction on skill sequences in which it is highly unlikely that the subject can improve performance on later steps in the sequence without acquiring the prior steps. For example, the repeated measurement of the accuracy in solving division problems of a student who possesses no skills in addition, subtraction, and multiplication would add little to an analysis. Horner and Baer (1978) made this point exceedingly well:

The inevitable zero scores on the division baseline have no real meaning: division could be nothing else than zero (or chance, depending on the test format), and there is no real point in measuring it. Such measures are pro forma: they fill out the picture of a multiple baseline, true, but in an illusory way. They do not so much represent zero behavior as zero opportunity for the behavior to occur, and there is no need to document at the level of well-measured data that behavior does not occur when it cannot. (p. 190)

Thus, the multiple probe design avoids the necessity of collecting ritualistic baseline data when the performance of any component of a chain or sequence is impossible or unlikely before acquisition of its preceding components. In addition to the two uses already mentioned—analysis of the effects of instruction on complex skill sequences and reduction in the amount of baseline measurement for behaviors that have no plausible opportunity to occur—the multiple probe technique is also an effective experimental strategy for situations in which extended baseline measurement may prove reactive, impractical, or costly. The repeated measurement of a skill under nontreatment conditions can prove aversive to some students; and extinction, boredom, or other undesirable responses can occur. In his discussion of multiple baseline designs, Cuvo (1979) suggested that researchers should recognize that "there is a trade-off between repeatedly administering the dependent measure to establish a stable baseline on one hand and risking impaired performance by subjecting participants to a potentially punishing experience on the other hand" (pp. 222–223). Furthermore, complete assessment of all skills in a sequence may require too much time that could otherwise be spent on instruction.

Other examples of the multiple probe design can be found in Arntzen, Halstadtr, and Halstadtr (2003); Coleman-Martin and Wolff Heller (2004); O'Reilly, Green, and Braunling-McMorrow (1990); and Werts, Caldwell, and Wolery.

Delayed Multiple Baseline Design

The delayed multiple baseline design is an experimental tactic in which an initial baseline and intervention are begun, and subsequent baselines are added in a staggered or delayed fashion (Heward, 1978). Figure 8 shows a graphic prototype of the delayed multiple baseline design. The design employs the same experimental reasoning as a full-scale multiple baseline design with the exception that data from baselines begun after the independent variable has been applied to previous behaviors, settings, or subjects cannot be used to verify predictions based on earlier tiers of the design. In Figure 8 baseline measurement of Behaviors 2 and 3 was begun early enough for those data to be used to verify the prediction made for Behavior 1. The final four baseline data points for Behavior 3 also verify the prediction for Behavior 2. However, baseline measurement of Behavior 4 began after the independent variable had been applied to each of the previous behaviors, thus limiting its role in the design to an additional demonstration of replication.

Figure 8 Graphic prototype of a delayed multiple baseline design.

A delayed multiple baseline design may allow the behavior analyst to conduct research in certain environments in which other experimental tactics cannot be implemented. Heward (1978) suggested three such situations.

• A reversal design is no longer desirable or possible. In applied settings the research environment may shift, negating the use of a previously planned reversal design. Such shifts may involve changes in the subject's environment that make the target behavior no longer likely to reverse to baseline levels, or changes in the behavior of parents, teachers, administrators, the subject/client, or the behavior analyst that, for any number of reasons, make a previously planned reversal design no longer desirable or possible. . . . If there are other behaviors, settings, or subjects appropriate for application of the independent variable, the behavior analyst could use a delayed multiple baseline technique and still pursue evidence of a functional relation.

• Limited resources, ethical concerns, or practical difficulties preclude a full-scale multiple baseline design. This situation occurs when the behavior analyst only controls resources sufficient to initially record and intervene with one behavior, setting, or subject, and another research strategy is inappropriate. It may be that as a result of the first intervention, more resources become available for gathering additional baselines. This might occur following the improvement of certain behaviors whose pretreatment topography and/or rate required an inordinate expenditure of staff resources. Or, it could be that a reluctant administrator, after seeing the successful results of the first intervention, provides the resources necessary for additional analysis. Ethical concerns may preclude extended baseline measurement of some behaviors (e.g., Linscheid, Iwata, Ricketts, Williams, & Griffin, 1990). Also under this heading would fall the "practical difficulties" cited by Hobbs and Holt (1976) as a reason for delaying baseline measurement in one of three settings.

• A "new" behavior, setting, or subject becomes available. A delayed multiple baseline technique might be employed when another research design was originally planned but a multiple baseline analysis becomes the preferred approach due to changes in the environment (e.g., the subject begins to emit another behavior appropriate for intervention with the experimental variable, the subject begins to emit the original target behavior in another setting, or additional subjects displaying the same target behavior become available). (adapted from pp. 5–6)

Researchers have used the delayed multiple baseline technique to evaluate the effects of a wide variety of interventions (e.g., Baer, Williams, Osnes, & Stokes, 1984; Copeland, Brown, & Hall, 1974; Hobbs & Holt, 1976; Jones, Fremouw, & Carples, 1977; Linscheid et al., 1990; Risley & Hart, 1968; Schepis, Reid, Behrmann, & Sutton, 1998; White & Bailey, 1990). Poche, Brouwer, and Swearingen (1981) used a delayed multiple baseline design to evaluate the effects of a training program designed to prevent children from being abducted by adults. Three typically developing preschool children were selected as subjects because, during a screening test, each readily agreed to leave with an adult stranger. The dependent variable was the level of appropriateness of self-protective responses emitted by each child when an adult suspect approached the child and attempted to lure her away with a simple lure ("Would you like to go for a walk?"), an authoritative lure ("Your teacher said it was alright for you to come with me."), or an incentive lure ("I've got a nice surprise in my car. Would you like to come with me and see it?").

Each session began with the child's teacher bringing the child outdoors, then pretending to have to return to the building for some reason. The adult suspect (a confederate of the experimenters but unknown to the child) then approached the child and offered one of the lures. The confederate also served as observer, scoring the child's response on a 0 to 6 scale, with a score of 6 representing the desired response (saying, "No, I have to go ask my teacher" and moving at least 20 feet away from the suspect within 3 seconds) and a score of 0 indicating that the child moved some distance away from the school building with the suspect. Training consisted of modeling, behavioral rehearsal, and social reinforcement for correct responses.

Figure 9 shows the results of the training program. During baseline, all three children responded to the lures with safety ratings of 0 or 1. All three children mastered correct responses to the incentive lure in one to three training sessions, with one or two more sessions required for each child to master correct responses to the other two lures. Overall, training took approximately 90 minutes per child distributed over five or six sessions. All three children responded correctly when the lures were administered in generalization probes on sidewalk locations 150 to 400 feet from the school.

Although each baseline in this study was of equal length (i.e., had an equal number of data points), contradicting the general rule that the baselines in a multiple baseline design should vary significantly in length, there are two good reasons that Poche and colleagues began training when they did with each subject. First, the nearly total stability of the baseline performance of each child provided an ample basis for evaluating the training program

Figure 9 A delayed multiple baseline design showing the level of appropriateness of self-protective responses during baseline, training, and generality probes in school and community settings. Closed symbols indicate data gathered near the school; open symbols, in a location away from the school. From "Teaching Self-Protection to Young Children" by C. Poche, R. Brouwer, and M. Swearingen, 1981, Journal of Applied Behavior Analysis, 14, p. 174. Copyright 1981 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

(the only exception to complete susceptibility to the adult suspect's lures occurred when Stan stayed near the suspect instead of actually going away with him on his fourth baseline observation). Second, and more important, the nature of the target behavior required that it be taught to each child as soon as possible. Although continuing baseline measurement for varying lengths across the different tiers of any multiple baseline design is good practice from a purely experimental viewpoint, the ethics of such a practice in this instance would be highly questionable, given the potential danger of exposing the children to adult lures repeatedly while withholding training.

The delayed multiple baseline design presents several limitations (Heward, 1978). First, from an applied standpoint the design is not a good one if it requires the behavior analyst to wait too long to modify important behaviors, although this problem is inherent in all multiple baseline designs. Second, in a delayed multiple baseline design there is a tendency for the delayed baseline phases to contain fewer data points than are found in a standard multiple baseline design, in which all baselines are begun simultaneously, resulting in baseline phases of considerable and varying length. Long baselines, if stable, provide the predictive power that permits convincing demonstrations of experimental control. Behavior analysts using any type of multiple baseline design must be sure that all baselines, regardless of when they are begun, are of sufficient and varied length to provide a believable basis for comparing experimental effects. A third limitation of the delayed multiple baseline design is that it can mask the interdependence of dependent variables.

The strength of any multiple baseline design is that little or no change is noticed in the other, as yet untreated, behaviors until, and only until, the experimenter applies the independent variable. In a delayed multiple baseline design, the "delayed baseline" data gathered for subsequent behaviors may represent changed performance due to the experimental manipulation of other behaviors in the design and, therefore, may not be representative of the true, preexperimental operant level. . . . In such instances, the delayed multiple baseline might result in a "false negative," and the researcher may erroneously conclude that the intervention was not effective on the subsequent behavior(s), when in reality the lack of simultaneous baseline data did not permit the discovery that the behaviors covaried. This is a major weakness of the delayed multiple baseline design and makes it a research tactic of second choice whenever a full-scale multiple baseline can be employed. However, this limitation can and should be combated whenever possible by beginning subsequent baselines at least several sessions prior to intervention on previous baselines. (Heward, 1978, pp. 8–9)


Both the multiple probe design and the delayed multiple baseline design offer the applied behavior analyst alternative tactics for pursuing a multiple baseline analysis when extended baseline measurement is unnecessary, impractical, too costly, or unavailable. Perhaps the most useful application of the delayed multiple baseline technique is in adding tiers to an already operational multiple baseline design. Whenever a delayed baseline can be supplemented by probes taken earlier in the course of the study, experimental control is strengthened. As a general rule, the more baseline data, the better.

Assumptions and Guidelines for Using Multiple Baseline Designs

Like all experimental tactics, the multiple baseline design requires the researcher to make certain assumptions about how the behavior–environment relations under investigation function, even though discovering the existence and operation of those relations is the very reason for conducting the research. In this sense, the design of behavioral experiments resembles an empirical guessing game—the experimenter guesses; the data answer. The investigator makes assumptions, hypotheses in the informal sense, about behavior and its relation to controlling variables and then constructs experiments designed to produce data capable of verifying or refuting those conjectures.3

Because verification and replication in the multiple baseline design depend on what happens, or does not happen, to other behaviors as a result of the sequential application of the independent variable, the experimenter must be particularly careful to plan and carry out the design in a manner that will afford the greatest degree of confidence in any relations suggested by the data. Although the multiple baseline design appears deceptively simple, its successful application entails much more than selecting two or more behaviors, settings, or subjects, collecting some baseline data, and then introducing a treatment condition to one behavior after the other. We

3Hypothesis, as we are using the term here, should not be confused with the formal hypothesis testing models that use inferential statistics to confirm or reject a hypothesis deduced from a theory. As Johnston and Pennypacker (1993a) pointed out, "Researchers do not need to state hypotheses if they are asking a question about nature. When the experimental question simply asks about the relation between independent and dependent variables, there is no scientific reason to make a prediction about what will be learned from the data" (p. 48). However, Johnston and Pennypacker (1980) also recognized that "more modest hypotheses are constantly being subjected to experimental tests, if only to establish greater confidence in the details of the suspected controlling relations. Whenever an experimenter arranges to affirm the consequent of a particular proposition, he or she is testing a hypothesis, although it is rare to encounter the actual use of such language [in behavior analysis]. Hypothesis testing in this relatively informal sense guides the construction of experiments without blinding the researcher to the importance of unexpected results" (pp. 38–39).

suggest the following guidelines for designing and conducting experiments using multiple baseline designs.

Select Independent, yet Functionally Similar, Baselines

Demonstration of a functional relation in a multiple baseline design depends on two occurrences: (a) the behavior(s) still in baseline showing no change in level, variability, or trend while the behavior(s) in contact with the independent variable changes; and (b) each behavior changes when, and only when, the independent variable has been applied to it. Thus, the experimenter must make two, at times seemingly contradictory, assumptions about the behaviors targeted for analysis in a multiple baseline design. The assumptions are that the behaviors are functionally independent of one another (the behaviors will not covary with one another), and yet the behaviors share enough similarity that each will change when the same independent variable is applied to it (Tawney & Gast, 1984). An error in either assumption can result in a failure to demonstrate a functional relation.

For example, let us suppose that the independent variable is introduced with the first behavior, and changes in level and/or trend are noted, but the other behaviors still in baseline also change. Do the changes in the still-in-baseline behaviors mean that an uncontrolled variable is responsible for the changes in all of the behaviors and that the independent variable is not an effective treatment? Or do the simultaneous changes in the untreated behaviors mean that the changes in the first behavior were affected by the independent variable and have generalized to the other behaviors? Or, let us suppose instead that the first behavior changes when the independent variable is introduced, but subsequent behaviors do not change when the independent variable is applied. Does this failure to replicate mean that a factor other than the independent variable was responsible for the change observed in the first behavior? Or does it mean only that the subsequent behaviors do not operate as a function of the experimental variable, leaving open the possibility that the change noted in the first behavior was affected by the independent variable?

Answers to these questions can be pursued only by further experimental manipulations. In both kinds of failure to demonstrate experimental control, the multiple baseline design does not rule out the possibility of a functional relation between the independent variable and the behavior(s) that did change when the variable was applied. In the first instance, the failure to demonstrate experimental control with the originally planned design is offset by the opportunity to investigate and possibly isolate the variable robust enough to change multiple behaviors simultaneously. Discovery of variables that reliably produce


generalized changes across behaviors, settings, and/or subjects is a major goal of applied behavior analysis; and if the experimenter is confident that all other relevant variables were held constant before, during, and after the observed behavior changes, the original independent variable is the first candidate for further investigation.

In the second situation, with its failure to replicate changes from one behavior to another, the experimenter can pursue the possibility of a functional relation between the independent variable and the first behavior, perhaps using a reversal technique, and seek to discover later an effective intervention for the behavior(s) that did not change. Another possibility is to drop the original independent variable altogether and search for another treatment that might be effective with all of the targeted behaviors.

Select Concurrent and Plausibly Related Multiple Baselines

In an effort to ensure the functional independence of behaviors in a multiple baseline design, experimenters should not select response classes or settings so unrelated to one another as to offer no plausible means of comparison. For the ongoing baseline measurement of one behavior to provide the strongest basis for verifying the prediction of another behavior that has been exposed to an independent variable, two conditions must be met: (a) The two behaviors must be measured concurrently, and (b) all of the relevant variables that influence one behavior must have an opportunity to influence the other behavior. Studies that employ a multiple baseline approach across subjects and settings often stretch the logic of the design beyond its capabilities. For example, using the stable baseline measures of one child's compliance with parental requests as the basis for verifying the effect of intervention on the compliance behavior of another child living with another family is questionable practice. The sets of variables influencing the two children are surely differentiated by more than the presence or absence of the experimental variable.

There are some important limits to designating multiple behavior/setting combinations that are intended to function as part of the same experiment. In order for the use of multiple behaviors and settings to be part of the same design and thus augment experimental reasoning, the general experimental conditions under which the two responses (whether two from one subject or one from each of two subjects) are emitted and measured must be ongoing concurrently. . . . Exposure [to the independent variable] does not have to be simultaneous for the different behavior/setting combinations, [but] it must be the identical treatment conditions along with the associated extraneous variables that impinge on the two responses and/or settings. This is because the conditions imposed

4A related series of A-B designs across different behaviors, settings, and/or participants in which each A-B sequence is conducted at a different point in time is sometimes called a nonconcurrent multiple baseline design (Watson & Workman, 1981). The absence of concurrent measurement, however, violates and effectively neuters the experimental logic of the multiple baseline design. Putting the graphs of three A-B designs on the same page and tying them together with a dogleg dashed line might produce something that "looks like" a multiple baseline design, but doing so is of questionable value and is likely to mislead readers by suggesting a greater degree of experimental control than is warranted. We recommend describing such a study as a series or collection of A-B designs and graphing the results in a manner that clearly depicts the actual time frame in which each A-B sequence occurred with respect to the others (e.g., Harvey, May, & Kennedy, 2004, Figure 2).

on one behavior/setting combination must have the opportunity of influencing the other behavior/setting combination at the same time, regardless of the condition that actually prevails for the second. . . . It follows that using responses of two subjects each responding in different settings would not meet the requirement that there be a coincident opportunity for detecting the treatment effect. A treatment condition [as well as the myriad other variables possibly responsible for changes in the behavior of one subject] could not then come into contact with the responding of the other subject, because the second subject's responding would be occurring in an entirely different location. . . . Generally, the greater the plausibility that the two responses would be affected by the single treatment [and all other relevant variables], the more powerful is the demonstration of experimental control evidenced by data showing a change in only one behavior. (Johnston & Pennypacker, 1980, pp. 276–278)

The requirements of concurrency and plausible influence must be met for the verification element of baseline logic to operate in a multiple baseline design. However, replication of effect is demonstrated each time a baseline steady state is changed by the introduction of the independent variable, more or less regardless of where or when the variable is applied. Such nonconcurrent and/or unrelated baselines can provide valuable data on the generality of a treatment's effectiveness.4

This discussion should not be interpreted to mean that a valid (i.e., logically complete) multiple baseline design cannot be conducted across different subjects, each responding in different settings. Numerous studies using mixed multiple baselines across subjects, response classes, and/or settings have contributed to the development of an effective technology of behavior change (e.g., Dixon et al., 1998; Durand, 1999; Ryan, Ormond, Imwold, & Rotunda, 2002).

Let us consider an experiment designed to analyze the effects of a particular teacher training intervention, perhaps a workshop on using tactics to increase each student's opportunity to respond during group instruction. Concurrent measurement is begun on the frequency of student response opportunities in the classrooms of the teachers who are participating in the study. After stable


baselines have been established, the workshop is presented first to one teacher (or group of teachers) and eventually, in staggered multiple baseline fashion, to all of the teachers.

In this example, even though the different subjects (teachers) are all behaving in different environments (different classrooms), comparison of their baseline conditions is experimentally sound because the variables likely to influence their teaching styles operate in the larger, shared environment in which they all behave (the school and teaching community). Nevertheless, whenever experiments are proposed or published that involve different subjects responding in different settings, researchers and consumers should view the baseline comparisons with a critical eye toward their logical relation to one another.

Do Not Apply the Independent Variable to the Next Behavior Too Soon

To reiterate, for verification to occur in a multiple baseline design, it must be established clearly that as the independent variable is applied to one behavior and change is noted, little or no change is observed in the other, as-yet-untreated behaviors. The potential for a powerful demonstration of experimental control has been destroyed in many studies because the independent variable was applied to subsequent behaviors too soon. Although the operational requirement of sequential application in the multiple baseline tactic is met by introduction of the independent variable even in adjacent time intervals, the experimental reasoning afforded by such closely spaced manipulations is minimal.

The influence of unknown, concomitant, extraneous variables that might be present could still be substantial, even a day or two later. This problem can be avoided by demonstrating continued stability in responding for the second behavior/setting combination during and after the introduction of the treatment for the first combination until a sufficient period of time has elapsed to detect any effect on the second combination that might appear. (Johnston & Pennypacker, 1980, p. 283)

Vary Significantly the Lengths of Multiple Baselines

Generally, the more the baseline phases in a multiple baseline design differ in length from one another, the stronger the design will be. Baselines of significantly different lengths allow the unambiguous conclusion (assuming an effective treatment variable) that each behavior not only changes when the independent variable is applied, but also that each behavior does not change until the independent variable has been applied. If the different baselines are of the same or similar length, the possibility

exists that changes noted when the independent variable is introduced are the result of a confounding variable, such as practice or reactivity to observation and measurement, and not a function of the experimental variable.

Those effects . . . called practice, adaptation, warm-up, self-analysis, etc.; whatever they may be and whatever they may be called, the multiple baseline design controls for them by systematically varying the length of time (sessions, days, weeks) in which they occur prior to the introduction of the training package. . . . Such control is essential, and when the design consists of only two baselines, then the number of data points in each prior to experimental intervention should differ as radically as possible, at least by a factor of 2. I cannot see not systematically varying lengths of baselines prior to intervention, and varying them as much as possible/practical. Failure to do that . . . weakens the design too much for credibility. (D. M. Baer, personal communication, June 2, 1978)
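Baer's factor-of-2 rule of thumb lends itself to a quick planning check. The sketch below is illustrative only: the function name is our own, and the extension of the two-baseline rule to adjacent pairs in designs with more tiers is our assumption, not part of the original recommendation.

```python
# Hypothetical planning check for Baer's rule of thumb: with two baselines,
# the number of pre-intervention data points should differ at least by a
# factor of 2. For more than two tiers, we assume (our extension) that each
# adjacent pair of sorted baseline lengths should satisfy the same ratio.

def baselines_sufficiently_varied(lengths, min_ratio=2.0):
    """Return True if sorted baseline lengths each differ by >= min_ratio."""
    ordered = sorted(lengths)
    return all(longer / shorter >= min_ratio
               for shorter, longer in zip(ordered, ordered[1:]))

print(baselines_sufficiently_varied([5, 10]))      # True: 10/5 = 2
print(baselines_sufficiently_varied([8, 10, 12]))  # False: lengths too similar
```

A planner might run such a check before data collection begins, then adjust the staggered intervention points so that the tiers differ "as much as possible/practical."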

Intervene on the Most Stable Baseline First

In the ideal multiple baseline design, the independent variable is not applied to any of the behaviors until steady state responding has been achieved for each. However, the applied behavior analyst is sometimes denied the option of delaying treatment just to increase the strength of an experimental analysis. When intervention must begin before stability is evident across each tier of the design, the independent variable should be applied to the behavior, setting, or subject that shows the most stable level of baseline responding. For example, if a study is designed to evaluate the effects of a teaching procedure on the rate of math computation of four students and there is no a priori reason to teach the students in any particular sequence, instruction should begin with the student showing the most stable baseline. However, this recommendation should be followed only when the majority of the baselines in the design show reasonable stability.

Sequential application of the independent variable should be made in the order of greatest stability at the time of each subsequent application. Again, however, the realities of the applied world must be heeded. The social significance of changing a particular behavior must sometimes take precedence over the desire to meet the requirements of experimental design.

Considering the Appropriateness of Multiple Baseline Designs

The multiple baseline design offers significant advantages, which no doubt have accounted for its widespread use by researchers and practitioners. Those advantages,


however, must be weighed against the limitations and weaknesses of the design to determine its appropriateness in any given situation.

Advantages of the Multiple Baseline Design

Probably the most important advantage of the multiple baseline design is that it does not require withdrawing a seemingly effective treatment to demonstrate experimental control. This is a critical consideration for target behaviors that are self-injurious or dangerous to others. This feature of the multiple baseline design also makes it an appropriate method for evaluating the effects of independent variables that cannot, by their nature, be withdrawn and for investigating target behaviors that are likely or that prove to be irreversible (e.g., Duker & van Lent, 1991). Additionally, because the multiple baseline design does not necessitate a reversal of treatment gains to baseline levels, parents, teachers, or administrators may accept it more readily as a method of demonstrating the effects of an intervention.

The requirement of the multiple baseline design to sequentially apply the independent variable across multiple behaviors, settings, or subjects complements the usual practice of many practitioners whose goal is to develop multiple behavior changes. Teachers are charged with helping multiple students learn multiple skills to be used in multiple settings. Likewise, clinicians typically need to help their clients improve more than one response class and emit more adaptive behavior in several settings. The multiple baseline design is ideally suited to the evaluation of the progressive, multiple behavior changes sought by many practitioners in applied settings.

Because the multiple baseline design entails concurrent measurement of two or more behaviors, settings, or subjects, it is useful in assessing the occurrence of generalization of behavior change. The simultaneous monitoring of several behaviors gives the behavior analyst the opportunity to determine their covariation as a result of manipulations of the independent variable (Hersen & Barlow, 1976). Although changes in behaviors still under baseline conditions eliminate the ability of the multiple baseline design to demonstrate experimental control, such changes reveal the possibility that the independent variable is capable of producing behavioral improvements with desirable generality, thereby suggesting an additional set of research questions and analytic tactics (e.g., Odom, Hoyson, Jamieson, & Strain, 1985).

Finally, the multiple baseline design has the advantage of being relatively easy to conceptualize, thereby offering an effective experimental tactic for teachers and parents who are not trained formally in research methodology (Hall et al., 1970).

Limitations of the Multiple Baseline Design

The multiple baseline design presents at least three scientific limitations or considerations. First, a multiple baseline design may not allow a demonstration of experimental control even though a functional relation exists between the independent variable and the behaviors to which it is applied. Changes in behaviors still under baseline conditions and similar to concurrent changes in a behavior in the treatment condition preclude the demonstration of a functional relation within the original design. Second, from one perspective, the multiple baseline design is a weaker method for showing experimental control than the reversal design. This is because verification of the baseline prediction made for each behavior within a multiple baseline design is not directly demonstrated with that behavior, but must be inferred from the lack of change in other behaviors. This weakness of the multiple baseline design, however, should be weighed against the design's advantage of providing multiple replications across different behaviors, settings, or subjects. Third, the multiple baseline design provides more information about the effectiveness of the treatment variable than it does about the function of any particular target behavior.

Consistently [the] multiple baseline is less an experimental analysis of the response than of the technique used to alter the response. In the reversal design, the response is made to work again and again; in the multiple-baseline designs, it is primarily the technique that works again and again, and the responses either work once each [if different responses are used] or else a single response works once each per setting or once each per subject. Repetitive working of the same response in the same subject or the same setting is not displayed. But, while repetitive working of the response is foregone, repetitive and diverse working of the experimental technique is maximized, as it would not be in the reversal design. (Baer, 1975, p. 22)

Two important applied considerations that must be evaluated in determining the appropriateness of the multiple baseline design are the time and resources required for its implementation. Because the treatment variable cannot be applied to subsequent behaviors, settings, or subjects until its effects have been observed on previous behaviors, settings, or subjects, the multiple baseline design requires that intervention be withheld for some behaviors, settings, or subjects, perhaps for a long time. This delay raises practical and ethical concerns. Treatment cannot be delayed for some behaviors; their importance makes delaying treatment impractical. And as Stolz (1978) pointed out, "If the intervention is generally acknowledged to be effective, denying it simply to achieve a multiple-baseline design might be unethical" (p. 33). Second, the resources needed for the concurrent measurement of multiple behaviors must be considered. Use of a multiple baseline design can be particularly costly when behavior must be observed and measured in several settings. However, when the use of intermittent probes during baseline can be justified in lieu of continuous measurement (Horner & Baer, 1978), the cost of concurrently measuring multiple behaviors can be reduced.

Changing Criterion Design

The changing criterion design can be used to evaluate the effects of a treatment that is applied in a graduated or stepwise fashion to a single target behavior. The changing criterion design was first described in the applied behavior analysis literature in two papers coauthored by Vance Hall (Hall & Fox, 1977; Hartmann & Hall, 1976).

Operation and Logic of the Changing Criterion Design

The reader can refer to Figure 10 before and after reading Hartmann and Hall's (1976) description of the changing criterion design.

The design requires initial baseline observations on a single target behavior. This baseline phase is followed by implementation of a treatment program in each of a series of treatment phases. Each treatment phase is associated with a step-wise change in criterion rate for the target behavior. Thus, each phase of the design provides a baseline for the following phase. When the rate of the target behavior changes with each stepwise change in the criterion, therapeutic change is replicated and experimental control is demonstrated. (p. 527)

The operation of two elements of baseline logic—prediction and replication—is clear in the changing criterion design. When stable responding is attained within each phase of the design, a prediction of future responding is made. Replication occurs each time the level of behavior changes in a systematic way when the criterion is changed. Verification of the predictions based on each phase is not so obvious in this design but can be approached in two ways. First, varying the lengths of phases systematically enables a form of self-evident verification. The prediction is made that the level of responding will not change if the criterion is not changed. When the criterion is not changed and stable responding continues, the prediction is verified. When it can be shown within the design that levels of responding do not change unless the criterion is changed, regardless of the varied lengths of phases, experimental control is evident. Hall and Fox (1977) suggested another possibility for verification: "The experimenter may return to a former criterion and if the behavior conforms to this criterion level there is also a cogent argument for a high degree of behavioral control" (p. 154). Such a reversed criterion is shown in the next-to-last phase of Figure 10. Although returning to an earlier criterion level requires a brief interruption of the steady improvement in behavior, the reversal tactic strengthens the analysis considerably and should be included in the changing criterion design unless other factors indicate its inappropriateness.
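The phase-by-phase logic just described can be made concrete with a small sketch. Everything below is hypothetical and for illustration only: the function name, the tolerance band, and the invented session data are our assumptions, not part of Hartmann and Hall's procedure. The sketch simply checks whether the mean level of responding in each phase conformed to the criterion in force, including a reversed criterion of the kind Hall and Fox recommend.

```python
# Hypothetical sketch of changing criterion logic: each treatment phase has a
# criterion, and replication is suggested when responding conforms to each
# stepwise criterion change. Data and tolerance are invented for illustration.

def tracks_criteria(phases, tolerance=2.0):
    """True if each phase's mean responding is within `tolerance` of its criterion.

    `phases` is a list of (criterion, session_data) pairs in temporal order.
    """
    return all(abs(sum(data) / len(data) - criterion) <= tolerance
               for criterion, data in phases)

# Criterion steps 10 -> 20 -> 30, then a reversed criterion back to 20
# (the verification tactic suggested by Hall & Fox, 1977).
phases = [(10, [9, 10, 11]),
          (20, [19, 21, 20]),
          (30, [29, 30, 28]),
          (20, [21, 19, 20])]
print(tracks_criteria(phases))  # True: responding conformed to each criterion
```

In an actual analysis, of course, visual inspection of level, trend, and variability within each phase carries the argument; a summary check like this is only a crude stand-in for that judgment.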

One way to conceptualize the changing criterion design is as a variation of the multiple baseline design. Both Hartmann and Hall (1976, p. 530) and Hall and Fox (1977, p. 164) replotted data from changing criterion design experiments in a multiple baseline format with each tier of the multiple baseline showing the occurrence or nonoccurrence of the target behavior at one of the criterion levels used in the experiment. A vertical condition change line doglegs through the tiers indicating when the criterion for reinforcement was raised to the level

[Figure 10 is a line graph. Y-axis: Number of Responses (0 to 50); x-axis: Sessions (5 to 35). Conditions: Baseline, Treatment.]

Figure 10 Graphic prototype of a changing criterion design.


represented by each tier. By graphing whether the target behavior was emitted during each session at or above the level represented on each tier both before and after the change in criterion to that level, a kind of multiple baseline analysis is revealed. However, the strength of the multiple baseline argument is not quite so convincing because the "different" behaviors represented by each tier are not independent of one another. For example, if a target behavior is emitted 10 times in a given session, all of the tiers representing criteria below 10 responses would have to show that the behavior occurred, and all of the tiers representing criteria of 11 or more would have to show no occurrence of the behavior, or zero responding. The majority of the tiers that would appear to show verification and replication of effect, in fact, could only show these results because of the events plotted on another tier. A multiple baseline design provides its convincing demonstration of experimental control because the measures obtained for each behavior in the design are a function of the controlling variables for that behavior, not artifacts of the measurement of another behavior. Thus, recasting the data from a changing criterion design into a many-tiered multiple baseline format will often result in a biased picture in favor of experimental control.
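The artifact described in this paragraph can be seen directly: every tier's occurrence or nonoccurrence value is computed from the same single session count, so the tiers cannot vary independently. The sketch below is illustrative; the tier levels and function name are our own, not taken from the replotted figures in Hartmann and Hall (1976) or Hall and Fox (1977).

```python
# Illustrative sketch: recasting one session of changing criterion data into
# multiple baseline tiers. Each tier's value is mechanically determined by the
# single session count, so apparent "replication" across tiers is an artifact.

def tier_occurrences(responses_in_session, tier_levels):
    """Did the behavior occur at or above each criterion level this session?"""
    return {level: responses_in_session >= level for level in tier_levels}

# A session with 10 responses fixes every tier at once: tiers at or below 10
# must show occurrence; tiers above 10 must show nonoccurrence.
print(tier_occurrences(10, [5, 10, 15, 20]))
# {5: True, 10: True, 15: False, 20: False}
```

Because one number determines every tier, the tiers provide no independent evidence, which is exactly why the recast format flatters the appearance of experimental control.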

Even though the multiple baseline design is not completely analogous, the changing criterion design can be conceptualized as a method of analyzing the development of new behaviors. As Sidman (1960) pointed out, "It is possible to make reinforcement contingent upon a specified value of some aspect of behavior, and to treat that value as a response class in its own right" (p. 391). The changing criterion design can be an effective tactic for showing the repeated production of new rates of behavior as a function of manipulations of the independent variable (i.e., criterion changes).

Other than the experiments included in the Hartmann and Hall (1976) and Hall and Fox (1977) papers, there have been relatively few examples of pure changing criterion designs published in the applied behavior analysis literature (e.g., DeLuca & Holborn, 1992; Foxx & Rubinoff, 1979; Johnston & McLaughlin, 1982). Some researchers have employed a changing criterion tactic as an analytic element within a larger design (e.g., Martella, Leonard, Marchand-Martella, & Agran, 1993; Schleien, Wehman, & Kiernan, 1981).

Allen and Evans (2001) used a changing criterion design to evaluate the effects of an intervention to reduce the excessive checking of blood sugar levels by Amy, a 15-year-old girl diagnosed with insulin-dependent diabetes about 2 years prior to the study. Persons with this form of diabetes must guard against hypoglycemia (i.e., low blood sugar), a condition that produces a cluster of symptoms such as headaches, dizziness, shaking, impaired vision, and increased heart rate, and can lead to seizures and loss of consciousness. Because hypoglycemic episodes are physically unpleasant and can be a source of social embarrassment, some patients become hypervigilant in avoiding them, checking for low blood sugar more often than is necessary and deliberately maintaining high blood glucose levels. This leads to poor metabolic control and increased risk of complications such as blindness, renal failure, and heart disease.

At home Amy’s parents helped her monitor her blood sugar levels and insulin injections; at school Amy checked her blood glucose levels independently. Her physician recommended that Amy keep her blood sugar levels between 75 and 150 mg/dl, which required her to check her blood sugar 6 to 12 times per day. Soon after she had been diagnosed with diabetes, Amy experienced a single hypoglycemic episode in which her blood sugar fell to 40 mg/dl, and she experienced physical symptoms but no loss of consciousness. After that episode Amy began checking her glucose levels more and more often, until at the time of her referral she was conducting 80 to 90 checks per day, which cost her parents approximately $600 per week in reagent test strips. Amy was also maintaining her blood sugar level between 275 and 300 mg/dl, far above the recommended levels for good metabolic control.

Following a 5-day baseline condition, a treatment was begun in which Amy and her parents were exposed to a gradually decreasing amount of information about her blood glucose level. Over a 9-month period Amy’s parents gradually reduced the number of test strips she was given each day, beginning with 60 strips during the first phase of the treatment. Allen and Evans (2001) explained the treatment condition and method for changing criteria as follows:

The parents expressed fears, however, that regardless of the criterion level, Amy might encounter a situation in which additional checking would be necessary. Concerns about adherence to the exposure protocol by the parents resulted in a graduated protocol in which Amy could earn a small number of additional test strips above and beyond the limit set by the parents. One additional test strip could be earned for each half hour of engagement in household chores. Amy was allowed to earn a maximum of five additional tests above the criterion when the criterion was set at 20 test strips or higher. Amy was allowed two additional test strips when the criterion was set below 20. Access to test strips was reduced in graduated increments, with the parents setting criteria to levels at which they were willing to adhere. Criteria changes were contingent upon Amy successfully reducing total test strip use to below the criterion on 3 successive days. (p. 498)
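The decision rule quoted above (lower the criterion only after total test strip use falls below it on 3 successive days) can be sketched as a simple check. The function name and data format here are ours for illustration, not the authors':

```python
def ready_to_lower_criterion(daily_totals, criterion, run_length=3):
    """Return True if the last `run_length` daily test-strip totals all
    fall below the current criterion, i.e., the rule Allen and Evans
    describe for moving to the next (lower) criterion."""
    recent = daily_totals[-run_length:]
    return len(recent) == run_length and all(d < criterion for d in recent)

# With a criterion of 20: three successive days below 20 permit a change;
# a single day at or above the criterion resets the requirement.
ready_to_lower_criterion([22, 19, 18, 17], criterion=20)   # True
ready_to_lower_criterion([18, 21, 17], criterion=20)       # False
```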

Figure 11 shows the criterion changes and the number of times Amy monitored her blood glucose level during the last 10 days of each criterion level. The results

Multiple Baseline and Changing Criterion Designs

[Figure 11 appears here: number of checks (y-axis, 0 to 90) plotted across days (x-axis, 5/16 to 3/3), with a baseline phase followed by the exposure-based treatment, and criterion levels of 60, 40, 20, 18, 16, 14, and 12 marked by dashed lines.]

Figure 11 A changing criterion design showing the number of blood glucose monitoring checks conducted during the last 10 days of each criterion level. Dashed lines and corresponding numbers indicate the maximum number of test strips allotted at each level. Checks above the criterion levels were conducted with additional test strips earned by Amy.
From “Exposure-Based Treatment to Control Excessive Blood Glucose Monitoring” by K. D. Allen and J. H. Evans, 2001, Journal of Applied Behavior Analysis, 12, p. 499. Copyright 2001 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

clearly show that Amy responded well to the treatment and rarely exceeded the criterion. Over the course of the 9-month treatment program, Amy reduced the number of times she monitored her blood sugar from 80 to 95 times per day during baseline to fewer than 12 tests per day, a level that she maintained at a 3-month follow-up. Amy’s parents indicated that they did not plan to decrease the criterion any further. A concern was that Amy might maintain high blood sugar levels during treatment. The authors reported that her blood sugar levels increased initially during treatment, but gradually decreased over the treatment program to a range of 125 to 175 mg/dl, within or near the recommended level.

Although the figure shows data only for the final 10 days of each criterion level, it is likely that the phases varied in length.5 The study consisted of seven criterion changes of two magnitudes, 20 and 2. Although greater variation in the magnitude of criterion changes and a return to a previously attained higher criterion level may

5Data on the number of checks by Amy throughout the intervention are available from Allen and Evans (2001).

have provided a more convincing demonstration of experimental control, the practical and ethical considerations of doing so would be questionable. As always, the applied behavior analyst must balance experimental concerns with the need to improve behavior in the most effective, efficient, ethical manner.

This study illustrates very well the changing criterion design’s flexibility and is a good example of behavior analysts and clients working together. “Because the parents were permitted to regulate the extent of each criterion change, the intervention was quite lengthy. However, by allowing the parents to adjust their own exposure to acceptable levels, adherence to the overall procedure may have been improved” (Allen & Evans, 2001, p. 500).

Guidelines for Using the Changing Criterion Design

Proper implementation of the changing criterion design requires the careful manipulation of three design factors: length of phases, magnitude of criterion changes, and number of criterion changes.


Length of Phases

Because each phase in the changing criterion design serves as a baseline for comparing changes in responding measured in the next phase, each phase must be long enough to achieve stable responding. “Each treatment phase must be long enough to allow the rate of the target behavior to restabilize at a new and changed rate; it is stability after change has been achieved, and before introduction of the next change in criterion, that is crucial to producing a convincing demonstration of control” (Hartmann & Hall, 1976, p. 531). Target behaviors that are slower to change therefore require longer phases.

The length of phases in a changing criterion design should vary considerably to increase the design’s validity. For experimental control to be evident in a changing criterion design, the target behavior not only must change to the level required by each new criterion in a predictable (preferably immediate) fashion, but also must conform to the new criterion for as long as it is in effect. When the target behavior closely follows successively more demanding criteria that are held in place for varied periods of time, the likelihood is reduced that the observed changes in behavior are a function of factors other than the independent variable (e.g., maturation, practice effects). In most situations, the investigator should not set a predetermined number of sessions for which each criterion level will remain in effect. It is best to let the data guide ongoing decisions whether to extend the length of a current criterion phase or introduce a new criterion.
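Letting the data guide the decision implies some explicit stability check. The text prescribes no formula, so the rule below (each of the last k measures falls within a fixed proportion of their mean) is only one common heuristic, sketched for illustration:

```python
def is_stable(sessions, k=5, tolerance=0.10):
    """One illustrative stability heuristic: responding is treated as
    stable when each of the last k session measures falls within
    +/- tolerance (as a proportion) of their mean. Both k and tolerance
    are analyst's choices, not values prescribed by the text."""
    recent = sessions[-k:]
    if len(recent) < k:
        return False                      # not enough data yet
    m = sum(recent) / k
    return all(abs(x - m) <= tolerance * m for x in recent)

# Tight responding around 30 passes; widely variable responding does not.
is_stable([30, 31, 29, 30, 30])   # True
is_stable([20, 30, 25, 40, 10])   # False
```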

Magnitude of Criterion Changes

Varying the size of the criterion changes enables a more convincing demonstration of experimental control. When changes in the target behavior occur not only at the time a new criterion is implemented but also to the level specified by the new criterion, the probability of a functional relation is strengthened. In general, a target behavior’s immediate change to meet a large criterion change is more impressive than a behavior change in response to a small criterion change. However, two problems arise if criterion changes are too large. First, setting aside practical considerations, and speaking from a design standpoint only, large criterion changes may not permit inclusion of a sufficient number of changes in the design (the third design factor) because the terminal level of performance is reached sooner. The second problem is from an applied view: Criterion changes cannot be so large that they conflict with good instructional practice. Criterion changes must be large enough to be detectable, but not so large as to be unachievable. Therefore, the variability of the data in each phase must be considered in determining the size of criterion changes. Smaller criterion changes can be employed with very stable levels of responding, whereas larger criterion changes are required to demonstrate behavior change in the presence of variability (Hartmann & Hall, 1976).
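Hartmann and Hall's point can be made concrete with a rough sizing rule: a criterion change must stand out from the noise in the current phase to be detectable. The twofold standard-deviation multiplier and the floor below are our illustrative choices, not values from the literature:

```python
from statistics import pstdev

def suggest_criterion_step(phase_data, floor=1.0):
    """Illustrative sizing rule: make the next criterion change at least
    twice the current phase's standard deviation (so it exceeds ordinary
    session-to-session variability), but never smaller than a practical
    floor. The 2x multiplier is an assumption for this sketch."""
    return max(2 * pstdev(phase_data), floor)

# Perfectly stable responding permits the smallest practical step;
# variable responding demands a larger one.
suggest_criterion_step([10, 10, 10, 10])   # 1.0 (sd is 0, floor applies)
suggest_criterion_step([8, 12, 10, 10])    # about 2.8
```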

When using a changing criterion design, behavior analysts must guard against imposing artificial ceilings (or floors) on the levels of responding that are possible in each phase. An obvious mistake of this sort would be to give a student only five math problems to complete when the criterion for reinforcement is five. Although the student could complete fewer than five problems, the possibility of exceeding the criterion has been eliminated, resulting perhaps in an impressive-looking graph, but one that is badly affected by poor experimental procedure.

Number of Criterion Changes

In general, the more times the target behavior changes to meet new criteria, the more convincing the demonstration of experimental control is. For example, eight criterion changes, one of which was a reversal to a previous level, were implemented in the changing criterion design illustrated in Figure 10, and Allen and Evans (2001) conducted seven criterion changes (Figure 11). In both of these cases, a sufficient number of criterion changes occurred to demonstrate experimental control. The experimenter cannot, however, simply add any desired number of criterion changes to the design. The number of criterion changes that are possible within a changing criterion design is interrelated with the length of phases and the magnitude of criterion changes. Longer phases mean that the time necessary to complete the analysis increases; with a limited time to complete the study, the greater the number of phases, the shorter each phase can be.

Considering the Appropriateness of the Changing Criterion Design

The changing criterion design is a useful addition to the behavior analyst’s set of tactics for evaluating systematic behavior change. Like the multiple baseline design, the changing criterion design does not require that improvement in behavior be reversed. However, partial reversals to earlier levels of performance enhance the design’s capability to demonstrate experimental control. Unlike the multiple baseline design, only one target behavior is required.

Several characteristics of the changing criterion design limit its effective range of applications. The design can be used only with target behaviors that are already in the subject’s repertoire and that lend themselves to stepwise modification. However, this is not as severe a limitation as it might seem. For example, students perform


many academic skills to some degree, but not at a useful rate. Many of these skills (e.g., solving math problems, reading) are appropriate for analysis with a changing criterion design. Allowing students to progress as efficiently as possible while meeting the design requirements of changing criterion analysis can be especially difficult. Tawney and Gast (1984) noted that “the challenge of identifying criterion levels that will permit the demonstration of experimental control without impeding optimal learning rates” is problematic with all changing criterion designs (p. 298).

Although the changing criterion design is sometimes suggested as an experimental tactic for analyzing the effects of shaping programs, it is not appropriate for this purpose. In shaping, a new behavior that initially is not in the person’s repertoire is developed by reinforcing responses that meet a gradually changing criterion, called successive approximations, toward the terminal behavior. However, the changing response criteria employed in shaping are topographical in nature, requiring different forms of behavior at each new level. The multiple probe design (Horner & Baer, 1978), however, is an appropriate design for analyzing a shaping program because each new response criterion (successive approximation) represents a different response class whose frequency of occurrence is not wholly dependent on the frequency of behaviors meeting other criteria in the shaping program. Conversely, the changing criterion design is best suited for evaluating the effects of instructional techniques on stepwise changes in the rate, frequency, accuracy, duration, or latency of a single target behavior.

Summary

Multiple Baseline Design

1. In a multiple baseline design, simultaneous baseline measurement is begun on two or more behaviors. After stable baseline responding has been achieved, the independent variable is applied to one of the behaviors while baseline conditions remain in effect for the other behavior(s). After maximum change has been noted in the first behavior, the independent variable is then applied in sequential fashion to the other behaviors in the design.

2. Experimental control is demonstrated in a multiple baseline design by each behavior changing when, and only when, the independent variable is applied.

3. The multiple baseline design takes three basic forms: (a) a multiple baseline across behaviors design consisting of two or more different behaviors of the same subject; (b) a multiple baseline across settings design consisting of the same behavior of the same subject in two or more different settings; and (c) a multiple baseline across subjects design consisting of the same behavior of two or more different participants.

Variations of the Multiple Baseline Design

4. The multiple probe design is effective for evaluating the effects of instruction on skill sequences in which it is highly unlikely that the subject’s performance on later steps in the sequence can improve without instruction or mastery of the earlier steps in the chain. The multiple probe design is also appropriate for situations in which prolonged baseline measurement may prove reactive, impractical, or too costly.

5. In a multiple probe design, intermittent measurements, or probes, are taken on all of the behaviors in the design at the outset of the experiment. Thereafter, probes are taken each time the subject has achieved mastery of one of the behaviors or skills in the sequence. Just prior to instruction on each behavior, a series of true baseline measures are taken until stability is achieved.

6. The delayed multiple baseline design provides an analytic tactic in situations in which (a) a planned reversal design is no longer desirable or possible; (b) limited resources preclude a full-scale multiple baseline design; or (c) a new behavior, setting, or subject appropriate for a multiple baseline analysis becomes available.

7. In a delayed multiple baseline design, baseline measurement of subsequent behaviors is begun sometime after baseline measurement was begun on earlier behaviors in the design. Only baselines begun while earlier behaviors in the design are still under baseline conditions can be used to verify predictions made for the earlier behaviors.

8. Limitations of the delayed multiple baseline design include (a) having to wait too long to modify certain behaviors, (b) a tendency for baseline phases to contain too few data points, and (c) the fact that baselines begun after the independent variable has been applied to earlier behaviors in the design can mask the interdependence (covariation) of behaviors.

Assumptions and Guidelines for Using Multiple Baseline Designs

9. Behaviors comprising multiple baseline designs should be functionally independent of one another (i.e., they do not covary) and should share a reasonable likelihood that each will change when the independent variable is applied to it.


10. Behaviors selected for a multiple baseline design must be measured concurrently and must have an equal opportunity of being influenced by the same set of relevant variables.

11. In a multiple baseline design, the independent variable should not be applied to the next behavior until the previous behavior has changed maximally and a sufficient period of time has elapsed to detect any effects on behaviors still in baseline conditions.

12. The length of the baseline phases for the different behaviors comprising a multiple baseline design should vary significantly.

13. All other things being equal, the independent variable should be applied first to the behavior showing the most stable level of baseline responding.

14. Conducting a reversal phase in one or more tiers of a multiple baseline design can strengthen the demonstration of a functional relation.

Considering the Appropriateness of Multiple Baseline Designs

15. Advantages of the multiple baseline design include the fact that (a) it does not require withdrawing a seemingly effective treatment, (b) sequential implementation of the independent variable parallels the practice of many teachers and clinicians whose task is to change multiple behaviors in different settings and/or subjects, (c) the concurrent measurement of multiple behaviors allows direct monitoring of generalization of behavior change, and (d) the design is relatively easy to conceptualize and implement.

16. Limitations of the multiple baseline design include the fact that (a) if two or more behaviors in the design covary, the multiple baseline design may not demonstrate a functional relation even though one exists; (b) because verification must be inferred from the lack of change in other behaviors, the multiple baseline design is inherently weaker than the reversal design in showing experimental control between the independent variable and a given behavior; (c) the multiple baseline design is more an evaluation of the independent variable’s general effectiveness than an analysis of the behaviors involved in the design; and (d) conducting a multiple baseline design experiment requires considerable time and resources.

Changing Criterion Design

17. The changing criterion design can be used to evaluate the effects of a treatment on the gradual or stepwise improvement of a behavior already in the subject’s repertoire.

18. After stable baseline responding has been achieved, the first treatment phase is begun, in which reinforcement (or punishment) is usually contingent on the subject’s performing at a specified level (criterion). The design entails a series of treatment phases, each requiring an improved level of performance over the previous phase. Experimental control is demonstrated in the changing criterion design when the subject’s behavior closely conforms to the gradually changing criteria.

19. Three features combine to determine the potential of a changing criterion design to demonstrate experimental control: (a) the length of phases, (b) the magnitude of criterion changes, and (c) the number of criterion changes. The believability of the changing criterion design is enhanced if a previous criterion is reinstated and the subject’s behavior reverses to the level previously observed under that criterion.

Considering the Appropriateness of the Changing Criterion Design

20. The primary advantages of the changing criterion design are that (a) it does not require a withdrawal or reversal of a seemingly effective treatment, and (b) it enables an experimental analysis within the context of a gradually improving behavior, thus complementing the practice of many teachers.

21. Limitations of the changing criterion design are that the target behavior must already be in the subject’s repertoire, and that incorporating the necessary features of the design may impede optimal learning rates.


Planning and Evaluating Applied Behavior Analysis Research

Key Terms

component analysis
direct replication
double-blind control
placebo control
procedural fidelity
replication
systematic replication
treatment drift
treatment integrity
Type I error
Type II error

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List©, Third Edition

Content Area 1: Ethical Considerations

1-12 Give preference to assessment and intervention methods that have been scientifically validated, and use scientific methods to evaluate those that have not yet been scientifically validated.

Content Area 5: Experimental Evaluation of Interventions

5-1 Systematically manipulate independent variables to analyze their effects on treatment.

5-3 Conduct a component analysis (i.e., determining effective component(s) of an intervention package).

Content Area 10: Systems Support

10-3 Design and use systems for monitoring procedural integrity.

© 2006 The Behavior Analyst Certification Board, Inc.,® (BACB®) all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

From Chapter 10 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


Previous chapters outlined considerations and procedures for selecting target behaviors, detailed strategies for designing measurement systems, presented guidelines for displaying and interpreting behavioral data, and described experimental tactics for revealing whether observed changes in a target behavior can be attributed to an intervention. This chapter supplements the information described thus far by examining questions and considerations that should be addressed when designing, replicating, and evaluating behavioral research. We begin by reviewing the central role of the individual subject in behavioral research, and follow with a discussion of the value of flexibility in experimental design.

Importance of the Individual Subject in Behavioral Research

To achieve maximum effectiveness, the research methods of any science must respect the defining characteristics of that science’s subject matter. Behavior analysis—a science devoted to discovering and understanding the controlling variables of behavior—defines its subject matter as the activity of living organisms, a dynamic phenomenon that occurs at the level of the individual organism. It follows that the research methods most often used by behavior analysts feature repeated measures of behavior of individual organisms (the only place, by definition, where behavior can be found). This focus on the behavior of individual subjects has enabled applied behavior analysts to discover and refine effective interventions for a wide range of socially significant behavior.

To further explain the importance that this focus on the individual subject or client holds for applied behavior analysis, we will now contrast it with a research model that revolves around comparisons of data representing the aggregate measures of different groups of subjects. This groups-comparison approach to designing and evaluating experiments has predominated “behavioral research” in psychology, education, and other social sciences for decades.

Brief Outline of a Groups-Comparison Experiment

The basic format for a groups-comparison experiment can be described as follows.1 A pool of subjects (e.g., 60 first-grade nonreaders) is selected randomly from the population (e.g., all first-grade nonreaders in a school district) relevant to the research question (e.g., Will the XYZ intensive phonics program improve first-grade nonreaders’ ability to decode unpredictable text?). The subjects are divided randomly into two groups: the experimental group and the control group. An initial measure (pretest) of the dependent variable (e.g., score on a test of decoding skills) is obtained for all subjects in the study, the individual pretest scores for the subjects in each group are combined, and the mean and standard deviation are calculated for each group’s performance on the pretest. Subjects in the experimental group are then exposed to the independent variable (e.g., 6 weeks of the XYZ program), which is not provided to subjects in the control group. After the treatment program has been completed, a posttest measure of the dependent variable is obtained for all subjects, and the mean and standard deviation posttest scores for each group are computed.2 The researcher then compares any changes in each group’s scores from pretest to posttest, applying various statistical tests to the data that enable inferences regarding the likelihood that any differences between the two groups’ performances can be attributed to the independent variable. For example, assuming that the mean pretest scores for the experimental and control groups were similar, and the posttest measure revealed an improved mean score for the experimental group but not for the control group, statistical analyses would indicate the mathematical probability that the difference was due to chance. When a statistical test rules out chance as a likely factor, the researcher infers that the independent variable was responsible for effects on the dependent variable (e.g., the experimental group’s improvement from pretest to posttest).

1This brief sketch of the simplest form of a groups-comparison design study omits many important details and controls. Readers interested in a thorough explication and examples of group research methods should consult an authoritative text such as Campbell and Stanley (1966).

2Perhaps in part because the researcher must attend to “the purely logistical demands forced by the management of a battalion of subjects” (Johnston & Pennypacker, 1980, p. 256), group designs are characterized by few measures of the dependent variable (often just two measures: pretest and posttest).
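The steps of a pretest-posttest groups comparison can be sketched in a toy simulation (all numbers here are invented; a real study would follow the mean comparison with a statistical test of the group difference):

```python
import random
from statistics import mean

random.seed(1)

# Random sampling and random assignment: 60 hypothetical subjects
# split evenly into experimental and control groups.
pool = list(range(60))
random.shuffle(pool)
experimental, control = pool[:30], pool[30:]

# Pretest scores for everyone; only the experimental group's posttest
# scores include a simulated treatment effect (mean gain of 8 points).
pretest = {s: random.gauss(20, 4) for s in pool}
posttest = {s: pretest[s] + (random.gauss(8, 3) if s in experimental
                             else random.gauss(0, 3))
            for s in pool}

def gain(group):
    """Mean pretest-to-posttest change for a group of subjects."""
    return mean(posttest[s] - pretest[s] for s in group)

print(f"experimental gain: {gain(experimental):.1f}, "
      f"control gain: {gain(control):.1f}")
```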

Researchers who combine and compare measures of groups of subjects in this way do so for two primary reasons. First, advocates of group designs assume that averaging the measures of many subjects’ performance controls for intersubject variability; thus, they assume that any changes in performance are the work of the independent variable. The second rationale for using large groups of subjects is the assumption that increasing the number of subjects in a study increases the external validity of the findings. That is, a treatment variable found effective with the subjects in the experimental group will also be effective with other subjects in the population from which the sample subjects were selected. The assumption of increased generality of findings is discussed later in this chapter in the section on replication. In the next section we comment on the first reason for the use of groups of subjects—that doing so controls for intersubject variability. Our discussion identifies three fundamental concerns with typical groups-comparison designs that bear heavily on experimental reasoning.3

3A complete discussion of the many problems posed by commingling data from multiple subjects is beyond the scope of this text. Students wishing to obtain a more complete understanding of this important issue are encouraged to read Johnston and Pennypacker (1980, 1993b) and Sidman (1960).

Group Data May Not Represent the Performance of Individual Subjects

By definition, applied behavior analysis is concerned with improving the behavior of the individual. Knowing that the average performance of a group of subjects changed may not reveal anything about the performance of individual subjects. It is quite possible for the average performance of subjects in the experimental group to have improved, while the performance of some subjects stayed the same, and the performance of others even deteriorated. It is even possible for the majority of subjects to show no improvement, for some subjects to get worse, and for a few subjects to improve sufficiently to yield an overall average improvement of statistical significance.

In defense of the groups-comparison approach, it might be said that it can show that a treatment is generally effective, that no treatment works with everyone, that people respond differently, and so on. But the fact that a group’s average performance improves with a treatment is insufficient reason to adopt it, particularly for people in dire need of help with academic, social, or other behavioral challenges. General effectiveness is insufficient; the factors responsible for one subject’s improvement with the treatment and another’s lack of improvement must be discovered. To be most useful, a treatment must be understood at the level at which people come into contact with it and are affected by it: the individual level.

The two graphs in Figure 1 suggest some of the many faulty conclusions that are possible when an investigator’s interpretation of a study is based on group mean scores. Each graph presents hypothetical data for the individual and average performances of two groups, each group consisting of two subjects. The data show no change in the mean response measure from pretest to posttest for either group. The pre- and posttest group data in both graphs in Figure 1 would suggest that the independent variable had no effect on the subjects’ behavior. However, the left-hand graph in Figure 1 shows that Subject A’s performance improved from pretest to posttest, and Subject B’s behavior deteriorated over the same period of time.4 The right-hand graph shows that although the pre- and posttest measures for Subjects C and D were identical, if repeated measures of Subject C’s and Subject D’s behavior between the pretest and posttest had been conducted, significant variability within and between the two subjects would have been revealed.

4The posttest data point in the left-hand graph of Figure 1 is reminiscent of the man whose bare feet were in a bucket of ice while his head was on fire. When asked how he was feeling, the man replied, “On the average, I feel fine.”

Group Data Masks Variability in the Data

A second problem associated with the mean performance of a group of subjects is that it hides variability in the data. Even if repeated measures of Subject C’s and Subject D’s behavior between the pre- and posttest had been conducted as shown in Figure 1, a researcher who relied on the group’s mean performance as the primary indicator of behavior change would be ignorant of the variability that occurred within and between subjects.

When repeated measurement reveals significant levels of variability, an experimental search with the goal of identifying and controlling the factors responsible for the variability is in order. The widespread belief that the effects of uncontrolled variables in a study can be somehow controlled by statistical manipulations of the dependent variable is faulty.

[Figure 1 appears here: two line graphs plotting a response measure over time, one for Subjects A and B and their group mean, the other for Subjects C and D and their group mean.]

Figure 1 Hypothetical data showing that the mean performance of a group of subjects may not represent an individual subject’s behavior.

The posttest data point in the left-hand graph of Figure 1 is reminiscent


Planning and Evaluating Applied Behavior Analysis Research

Statistical control is never a substitute for experimental control. . . . The only way to determine whether or not uncontrolled variables are influencing the data is to inspect the data at the finest available level of decomposition, usually point-by-point for each individual subject. No purpose is served by combining the data statistically to obscure such effects. (Johnston & Pennypacker, 1980, p. 371)

Instead of controlling its sources before the fact, the between-groups approach emphasizes controlling variability statistically after the fact. These two tactics do not have the same effects on the database. Whereas efforts to control actual variability lead to improved control over responding and, thus, a clearer picture of the effects of each condition, statistical manipulation of variable data cannot remove the influences already represented in the data. (Johnston & Pennypacker, 1993b, p. 184)

Attempting to “cancel out” variability through statistical manipulation neither eliminates it from the data nor controls the variables responsible for it. And the researcher who attributes the effects of unknown or uncontrolled variables to chance removes himself or herself even further from the identification and analysis of important variables. In his monumental work, Tactics of Scientific Research, Sidman (1960) dealt repeatedly and forcefully with this critical issue.

To some experimenters, chance is simply a name for the combined effects of uncontrolled variables. If such variables are, in fact, controllable, then chance in this sense is simply an excuse for sloppy experimentation, and no further comment is required. If the uncontrolled variables are actually unknown, then chance is, as Boring (1941) has pointed out, a synonym for ignorance. . . . One of the most discouraging and at the same time challenging aspects of behavioral science is the sensitivity of behavior to a tremendous array of variables. . . . But variables are not canceled statistically. They are simply buried so that their effects cannot be seen. The rationale for statistical immobilization of unwanted variables is based on the assumed random nature of such variables. . . . Not only is the assumption of randomness with respect to the uncontrolled variables an untested one but it is also highly improbable. There are few, if any, random phenomena in the behavioral world. (pp. 45, 162–163)

Sidman (1960) also commented on an experimenter’s use of statistics in an attempt to deal with troublesome sequence effects.

He has a neat trick up his sleeve. By averaging together the data for both subjects under Condition A, and again under Condition B, he “cancels out” the order effect, and completely bypasses the problem of irreversibility. By a simple arithmetical operation, two subjects have become one, and a variable has been eliminated.

It has not, in fact, gone anywhere. Numbers may be made to disappear by adding and subtracting them from each other. Five apples minus three apples are two apples. The numbers are easily changed by a few strokes of the pen, but some eating has to be done before the apples themselves will vanish. (p. 250)

The “eating” that must be done to control the effects of any variable can be accomplished in only two ways: (a) holding the variable constant throughout the experiment, or (b) isolating the suspected factor as an independent variable and manipulating its presence, absence, and/or value during the experiment.

Intrasubject Replication Is Absent from Group Designs

A third weakness of the groups-comparison statistical inference research model is that the power of replicating effects with individual subjects is lost. One of the great strengths of within-subject experimental designs is the convincing demonstration of a functional relation made possible by replication within the design itself. Even though multiple subjects are typically involved in applied behavior analysis research, each subject is always treated as a separate experiment. Although behavior analysts often display and describe the data for all subjects as a group, data from individual subjects are used as the basis for determining and interpreting experimental effects. Applied behavior analysts are wise to heed Johnston and Pennypacker’s (1980) admonition, “An effect that emerges only after individual data have been combined is probably artifactual and not representative of any real behavioral processes” (p. 257).

This discussion should not be interpreted to mean that the overall performance of groups of subjects cannot, or should not, be studied with the strategies and tactics of applied behavior analysis. There are many applied situations in which the overall performance of a group is socially significant. For example, Brothers, Krantz, and McClannahan (1994) evaluated an intervention to increase the number of pounds of recyclable office paper recycled by 25 staff members at a school. Still, it is important to remember that group data may not represent the performance of individual participants, and vice versa. For example, Lloyd, Eberhardt, and Drake (1996) compared the effects of group versus individual reinforcement contingencies within the context of collaborative group study conditions on quiz scores by students in a Spanish language class. The results showed that the group contingencies resulted in higher mean quiz scores for the class as a whole compared to the individual contingencies condition. However, overall benefits at the class level were mitigated by differential results for individual students. When group results do not represent individual performances, researchers should supplement group data with individual results, ideally in the form of graphic displays (e.g., Lloyd et al., 1996; Ryan & Hemmes, 2005).

In some instances, however, the behavior analyst may not be able to control the access of subjects to the experimental setting and contingencies or even be able to identify who the subjects are (e.g., Van Houten & Malenfant, 2004; Watson, 1996). The dependent variable must then consist of all of the responses made by individuals who enter the experimental setting. This approach is used frequently in community-based behavior analysis research. For example, group data have been collected and analyzed on such dependent variables as litter control on a university campus (Bacon-Prue, Blount, Pickering, & Drabman, 1980), car pooling by university students (Jacobs, Fairbanks, Poche, & Bailey, 1982), drivers’ compliance and caution at stop signs (Van Houten & Malenfant, 2004), the use of child safety belts in shopping carts (Barker, Bailey, & Lee, 2004), and reducing graffiti on restroom walls (Watson, 1996).

Importance of Flexibility in Experimental Design

On one level, an effective experimental design is any arrangement of type and sequence of independent variable manipulations that produces data that are interesting and convincing to the researcher and the audience. In this context the word design is particularly appropriate as a verb as well as a noun; the effective behavioral researcher must actively design each experiment so that each achieves its own unique design. There are no ready-made experimental designs awaiting selection. The prototype designs are examples of analytic tactics that afford a form of experimental reasoning and control that has proven effective in advancing our understanding of a wide range of phenomena of interest to applied behavior analysts. Johnston and Pennypacker (1980, 1993a) have been clear and consistent in stating that the “suspicion some may hold that generic categories of design types exist and should be botanized” (1980, p. 293) is counterproductive to the practice of the science of behavior.

In order to explain how to design and interpret within-subject comparisons, it is tempting to develop categories of similar arrangements or designs. This almost requires giving each category a label, and the labeled categories then imply that there is something importantly different that distinguishes each from the others. (1993a, p. 267)

The requirements for creating useful comparisons cannot be reduced to a cookbook of simple rules or formulas. . . . It misleads students by suggesting that particular types of arrangements have specific functions and by failing to encourage them to understand the underlying considerations that open up unlimited experimental options. (1993a, p. 285)

5The analytic tactics presented in Chapters entitled “Reversal and Alternating Treatments Designs” and “Multiple Baseline and Changing Criterion Designs” should not be considered solely as experimental designs for another reason: All experiments incorporate design elements in addition to the type and sequence of independent variable manipulations (e.g., subjects, setting, dependent variable, measurement system).

Sidman (1960) was even more adamant in his warning regarding the undesirable effects of researchers’ believing in the existence of a given set of rules for experimental design.

The examples may be accepted as constituting a set of rules that must be followed in the design of experiments. I cannot emphasize too strongly that this would be disastrous. I could make the trite statement that every rule has its exception, but this is not strong enough. Nor is the more relaxed statement that the rules of experimental design are flexible, to be employed only where appropriate. The fact is that there are no rules of experimental design. (p. 214)

We agree with Sidman. The student of applied behavior analysis should not be led to believe that any of the analytic tactics constitute experimental designs per se.5

Still, we believe that it is useful to present the most commonly used analytic tactics in design form for two reasons. First, the vast majority of studies that have advanced the field of applied behavior analysis have used experimental designs that incorporated one or more of the analytic tactics. Second, we believe that the beginning student of behavior analysis benefits from an examination of specific examples of isolated experimental tactics and their application; it is one step in learning the assumptions and strategic principles that guide the selection and arrangement of analytic tactics into an experimental design that effectively and convincingly addresses the research question(s) at hand.

Experimental Designs That Combine Analytic Tactics

Combining multiple baseline and reversal tactics may allow a more convincing demonstration of experimental control than either tactic alone. For example, by withdrawing the treatment variable (a return to baseline) and then reapplying it within one or more tiers in a multiple baseline design, researchers are able to determine the existence of a functional relation between the independent variable and each behavior, setting, or subject of the multiple baseline element and also to analyze the effectiveness of the independent variable across the tiers (e.g., Alexander, 1985; Ahearn, 2003; Barker, Bailey, & Lee, 2004; Blew, Schwartz, & Luce, 1985; Bowers, Woods, Carlyon, & Friman, 2000; Heward & Eachus, 1979; Miller & Kelley, 1994; Zhou, Goff, & Iwata, 2000).

To investigate the research questions of interest, investigators often build experimental designs that entail a combination of analytic tactics. For example, it is not uncommon for experimenters to evaluate multiple treatments by sequentially applying each in a multiple baseline fashion (e.g., Bay-Hinitz, Peterson, & Quilitch, 1994; Iwata, Pace, Cowdery, & Miltenberger, 1994; Van Houten, Malenfant, & Rolider, 1985; Wahler & Fox, 1980; Yeaton & Bailey, 1983). Experimental designs that combine multiple baseline, reversal, and/or alternating treatments tactics can also provide the basis for comparing the effects of two or more independent variables or conducting a component analysis of elements of a treatment package. For example, the experimental design used by L. J. Cooper and colleagues (1995) used alternating treatments comparisons within a sequence of multiple treatment reversals to identify the active variables in treatment packages for children with feeding disorders.

Haring and Kennedy (1990) used multiple baseline across settings and reversal tactics in their experimental design that compared the effectiveness of time-out and differential reinforcement of other behavior (DRO) on the frequency of problem behaviors by two secondary students with severe disabilities (see Figure 2).6 Sandra and Raff each frequently engaged in repetitive, stereotypic problem behaviors (e.g., body rocking, loud vocalizations, hand flapping, spitting) that interfered with classroom and community activities. In addition to assessing the effects of the time-out and DRO interventions against a no-treatment baseline condition, the design also enabled the researchers to conduct two comparisons of the relative effects of each treatment during an instructional task and leisure context. The design enabled Haring and Kennedy to discover that the time-out and DRO interventions produced different outcomes depending on the activity context in which they were applied. For both students, DRO was more effective than time-out in suppressing problem behavior in the task context; the opposite results were obtained in the leisure context, where time-out suppressed problem behavior and DRO proved ineffective.

Experimenters have also incorporated alternating treatments into experimental designs containing multiple baseline elements. For example, Ahearn, Kerwin, Eicher, Shantz, and Swearingin (1996) evaluated the relative effects of two treatments for food refusal in an alternating treatments design implemented in a multiple baseline across subjects format. Likewise, McGee, Krantz, and McClannahan (1985) evaluated the effects of several procedures for teaching language to autistic children with an experimental design that incorporated alternating treatments within a multiple baseline across behaviors component that was, in turn, nested within an overall multiple baseline across subjects format. Zanolli and Daggett (1998) investigated the effects of reinforcement rate on the spontaneous social initiations of socially withdrawn preschoolers with an experimental design consisting of multiple baseline, alternating treatments, and reversal tactics.

6Time-out and differential reinforcement of other behavior (DRO) are explained in Chapters entitled “Punishment by Removal of a Stimulus” and “Differential Reinforcement,” respectively.

Figure 3 shows how Sisson and Barrett (1984) incorporated a multiple probe across behaviors component, an alternating treatments analysis, and a multiple baseline across behaviors element in a design comparing the effects of two language-training procedures. The design enabled the investigators to discover the superiority of the total communication method for these two children, as well as the fact that direct application of the treatment was required for learning to occur on specific sentences. Results for a third subject revealed a functional relation of the same form and direction as that found for the two children whose results are shown in Figure 3, but one not so strongly in favor of the total communication procedure.

Our intent in describing several experiments that combined analytic tactics is not to offer any of these examples as model designs. They are presented instead as illustrations of the infinite number of experimental designs that are possible by arranging different combinations and sequences of independent variable manipulations. In every instance the most effective (i.e., convincing) experimental designs are those that use an ongoing evaluation of data from individual subjects as the basis for employing the three elements of baseline logic—prediction, verification, and replication.

Internal Validity: Controlling Potential Sources of Confounding in Experimental Design

An experiment is interesting and convincing, and yields the most useful information for application, when it provides an unambiguous demonstration that the independent variable was solely responsible for the observed behavior change. Experiments that demonstrate a clear functional relation are said to have a high degree of internal validity. The strength of an experimental design is




Figure 2 Experimental design employing multiple baselines across settings and reversal tactics counterbalanced across two subjects to analyze the effects of time-out (TO) and differential reinforcement of other behavior (DRO) treatment conditions.
From “Contextual Control of Problem Behavior” by T. G. Haring and C. H. Kennedy, 1990, Journal of Applied Behavior Analysis, 23, pp. 239–240. Copyright 1990 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

determined by the extent to which it (a) demonstrates a reliable effect (i.e., repeated manipulation of the independent variable produces a consistent pattern of behavior change) and (b) eliminates or reduces the possibility that factors other than the independent variable produced the behavior change (i.e., controls for confounding variables).

Implicit in the term experimental control, which is often used to signify a researcher’s ability to reliably produce a specified behavior change by manipulating an independent variable, is the idea that the researcher controls the subject’s behavior. However, “control of behavior” is inaccurate because the experimenter can control only some aspect of the subject’s environment. Therefore, the level of experimental control obtained by a researcher refers to the extent to which she controls all relevant variables in a given experiment. The researcher exerts this control within the context of an experimental design that, even though carefully planned at the outset, takes its ultimate form from the researcher’s ongoing examination and response to the data.

An effective experimental design simultaneously reveals a reliable functional relation between independent and dependent variables (if one exists) and minimizes the likelihood that the observed behavior changes are the result of unknown or uncontrolled variables. An experiment has high internal validity when changes in the dependent variable are demonstrated to be a function only of the independent variable.



Figure 3 Experimental design employing an alternating treatments tactic, a multiple probe, and a multiple baseline across behaviors analysis.
From “Alternating Treatments Comparison of Oral and Total Communication Training with Minimally Verbal Retarded Children” by L. A. Sisson and R. P. Barrett, 1984, Journal of Applied Behavior Analysis, 17, p. 562. Copyright 1984 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

When planning an experiment and later when examining the actual data from an ongoing study, the investigator must always be on the lookout for threats to internal validity. Uncontrolled factors known or suspected to have exerted influence on the dependent variable are called confounding variables. Much of a researcher’s effort during the course of a study is aimed at eliminating or controlling confounding variables.

The attainment of steady state responding is the primary means by which applied behavior analysts assess the degree of experimental control. Separating the effects of the independent variable from the effects of a potentially confounding variable requires clear, empirical evidence that the potentially confounding variable is no longer present, has been held constant across experimental conditions, or has been isolated for manipulation as an independent variable. Any experiment can be affected by a virtually unlimited number of potential confounds; and as with every aspect of experimental design, there are no set rules for identifying and controlling confounding variables to which researchers can turn. However, some common and likely sources of confounding can be identified as well as tactics that can be considered to control them. Confounding variables can be viewed as related primarily to one of four elements of an experiment: subject(s), setting, measurement of the dependent variable, and independent variable.

Subject Confounds

A variety of subject variables can confound the results of a study. Maturation, which refers to changes that take place in a subject over the course of an experiment, is a potential confounding variable. For example, a subject’s improved performance during the later phases of a study may be the result of physical growth or the acquisition of academic, social, or other behaviors and be unrelated to manipulations of the independent variable. Experimental designs that incorporate rapidly changing conditions or multiple introductions and withdrawals of the independent variable over time usually control for maturation effectively.

In most applied behavior analysis research, a subject is in the experimental setting and contacting the contingencies implemented by the investigator for only a portion of the day. As it is in any study, the assumption is made that each subject’s behavior during each session will be primarily a function of the experimental conditions in effect. In reality, however, each subject’s behavior may also be influenced by events that have occurred outside of the experiment. For example, suppose that the frequency of contributions to a class discussion is the dependent variable in a study. Now suppose that just prior to a session a student who has been contributing to discussions at a high rate was involved in a fight in the lunchroom and emits substantially fewer contributions compared to his level of responding in previous sessions. This change in the student’s behavior may, or may not, be a result of the lunchroom fight. If the lunchroom fight coincided with a change in the independent variable, it would be especially difficult to detect or separate any effects of the experimental conditions from those of the extra-experimental event.

Although the researcher may be aware of some events that are likely causes of variability during a study, many other potential confounds go undetected. Repeated measurement is both the control for and the means to detect the presence and effects of such variables. The uncontrolled variables responsible for a subject’s having a “bad day” or an unusually “good day” are particularly troublesome in research designs with few and/or widely spaced measurements of the dependent variable. This is one of the major weaknesses of using pretest-posttest comparisons to evaluate the effects of a treatment program.

Because groups-comparison experiments are predicated on subjects’ similarity in relevant characteristics (e.g., gender, age, ethnicity, cultural and linguistic background, current skills), they are vulnerable to confounding by differences among subjects. Concern that the characteristics of one or more subjects may confound an experiment’s results is generally not an issue in the single-subject experiments of applied behavior analysis. First, a person should participate in a study because she will benefit if the target behavior is changed successfully. Second, a subject’s idiosyncratic characteristics cannot confound a study using a true within-subject experimental design. With the exception of a multiple baseline across subjects analysis, each participant in a behavioral study serves as her own control, which guarantees identically matched subjects in all experimental conditions because those subjects are the same person. Third, the external validity of results from a single-subject analysis is not dependent on the extent to which the subject(s) shares certain characteristics with others. The extent to which a functional relation applies to other subjects is established by replicating the experiment with different subjects.

Setting Confounds

Most applied behavior analysts conduct studies in natural settings where a host of variables are beyond their control. Studies in natural settings are more prone to confounding by uncontrolled events than are studies conducted in laboratories where extraneous variables can be more tightly managed. Even so, the applied experimenter is not without resources to mitigate the detrimental effects of setting confounds. For instance, when the applied researcher observes that an uncontrolled event has coincided with changes in the data, he should hold all possible aspects of the experiment constant until repeated measurement again reveals stable responding. If the unplanned event appears to have a robust effect on the target behavior, or is otherwise of interest to the investigator, and is amenable to experimental manipulation, the investigator should treat it as an independent variable and explore its possible effects experimentally.

Applied researchers concerned about setting confounds must also be on the lookout for the availability of “bootleg” reinforcement within and outside the experimental situation. A good example of how a setting confound operates occurs when, unbeknownst to the experimenter, subjects have ready access to potential reinforcers. In such a case, the effectiveness of those consequences as reinforcers diminishes.

Measurement Confounds

Many factors should be considered in designing an accurate and nonreactive measurement system. Still, numerous sources of confounding may exist within a well-planned measurement system. For instance, data might be confounded by observer drift, the influence of the experimenter’s behavior on observers, and/or observer bias. Although admittedly difficult to accomplish in applied settings where observers often see the independent variable being implemented, keeping observers naive to the conditions and expected outcomes of an experiment reduces the potential of confounding by observer bias. On a related note, when observers score permanent products, the products should not contain identifying marks that indicate who produced each product and under what experimental conditions it was produced. Having observers score papers from baseline and treatment conditions in randomized order reduces the likelihood that observer drift or bias will confound the data within one treatment condition phase. (This procedure is more suitable to controlling for drift or bias by observers conducting postexperiment accuracy or IOA assessments.)
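The randomized, condition-blind scoring procedure for permanent products can be sketched as follows; the worksheet names and condition labels are hypothetical, invented only to illustrate the logic:

```python
import random

# Hypothetical permanent products from baseline (BL) and treatment (TX)
# sessions. In practice each entry would be a student worksheet or similar.
products = {
    "worksheet-1": "BL",
    "worksheet-2": "BL",
    "worksheet-3": "TX",
    "worksheet-4": "TX",
}

# Replace identifying marks with neutral codes so the scorer cannot infer
# which condition produced each product.
codes = {f"item-{i}": name for i, name in enumerate(sorted(products))}

# Shuffle the scoring order so any drift in scoring criteria is spread
# across both conditions rather than concentrated in one phase.
scoring_order = list(codes)
random.shuffle(scoring_order)

for code in scoring_order:
    pass  # the observer scores the product identified only by its code

# After scoring, the researcher (not the observer) decodes each item
# back to its experimental condition for analysis.
decoded = {code: products[codes[code]] for code in scoring_order}
```

The key design choice is that the code-to-condition key stays with the researcher; the observer never handles it, which is what keeps the scoring blind.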

Unless a completely unobtrusive measurement system is devised (e.g., a covert system using one-way mirrors, or observations conducted at some distance from the subject), reactivity to the measurement procedure must always be considered as a possible confound. To offset this possible confound, the experimenter must maintain baseline conditions long enough for any reactive effects to run their course and for stable responding to be obtained. If reactivity to measurement produces undesirable effects (e.g., aggressive behavior, cessation of productivity) and a more unobtrusive measurement procedure cannot be devised, intermittent probes should be considered. Measures can also be confounded by practice, adaptation, and warm-up effects, especially during the initial stages of baseline. Again, the proper procedure is to continue baseline conditions until stable responding is obtained or variability is reduced to minimal levels. Intermittent probes should not be used for baseline measurement of behaviors for which practice effects would be expected. This is because, if the target behavior is susceptible to practice effects, those effects will occur during the intervention condition when more frequent measures are conducted, thereby confounding any effects of the independent variable.

Independent Variable Confounds

Most independent variables are multifaceted; that is, there is usually more to a treatment condition than the specific variable of interest to the investigator. For example, the effects of a token economy on students’ academic productivity may be confounded by variables such as the personal relationship between the students and the teacher who delivers the tokens, social interactions associated with delivering and exchanging tokens, the expectation of teacher and students that performance will improve when the token system is implemented, and so on. If the intent is to analyze the effects of token reinforcement per se, these potentially confounding variables must be controlled.

Schwarz and Hawkins (1970) provided a good example of a control procedure for ruling out an aspect associated with a treatment as responsible for behavior change. The researchers evaluated the effects of token reinforcement on three maladaptive behaviors of an elementary student who was described as severely withdrawn. During treatment, the therapist and the girl met each day after school and viewed a videotape that had been made earlier that day of the student’s classroom behavior. The therapist also administered tokens contingent on the girl’s videotaped behavior displaying progressively fewer occurrences of maladaptive behaviors.

Schwarz and Hawkins recognized an independent variable confound potentially lurking in the design of their study. They reasoned that if improvement occurred as a function of the treatment, the question would remain whether the student’s behavior had improved because the therapist-delivered positive attention and rewards improved her self-concept, which in turn changed her maladaptive behaviors in the classroom, which were symptomatic of her poor self-concept. In that case, Schwarz and Hawkins could not be certain that the contingent tokens played an important role in changing the behavior. Schwarz and Hawkins, anticipating this possible confound, controlled for it in a simple and direct way. Following baseline, they implemented a condition in which the therapist met with the girl each day after school and provided her with social attention and token reinforcement contingent on improvements in handwriting. During this control phase, the three target behaviors—face touching, slouching, and low voice volume—showed no change, thereby increasing their confidence in the conclusion that the girl’s ultimate behavioral improvements during the subsequent intervention phases were due to the token reinforcement.

When medical researchers design experiments to test the effects of a drug, they use a technique called a placebo control to separate effects that may be produced by a subject’s expectation of improvement from taking the drug from the effects actually produced by the drug. In the typical groups-comparison design, the subjects in the experimental group receive the real drug, and subjects in the control group receive placebo pills. Placebo pills contain an inert substance, but they look, feel, and taste exactly like the pills containing the real drug being tested.

Applied behavior analysts have also employed placebo controls in single-subject experiments. For example, in their study evaluating a pharmacological treatment of impulsivity in students with attention-deficit/hyperactivity disorder (ADHD), Neef, Bicard, Endo, Coury, and Aman (2005) had a pharmacist prepare placebos and medications in identical gelatin capsules in 1-week supplies for each child. Neither the students nor the observers knew whether a child had taken the medication or the placebos. When neither the subjects nor the observers know whether the independent variable is present or absent from session to session, this type of control procedure is called a double-blind control. A double-blind control procedure eliminates confounding by subject expectations, parent and teacher expectations, differential treatment by others, and observer bias.
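The blinding logic described above can be sketched in code. In this hedged illustration (all names and values are invented, not from Neef and colleagues' study), a third party playing the pharmacist's role holds the only key linking coded weekly capsule supplies to "medication" or "placebo," so neither subjects nor observers can tell which condition is in effect:

```python
# Hypothetical sketch of a double-blind schedule: the "pharmacist" key maps
# coded week labels to conditions; observers and subjects see only the codes.
import random

def blind_schedule(weeks: int, seed=None):
    """Return coded week labels for observers and the pharmacist's private key."""
    rng = random.Random(seed)  # seeded for a reproducible example
    conditions = ["medication", "placebo"]
    key = {f"week-{i + 1}": rng.choice(conditions) for i in range(weeks)}
    observer_view = list(key)  # observers see only the codes, not the conditions
    return observer_view, key

codes, key = blind_schedule(4, seed=1)
print(codes)  # ['week-1', 'week-2', 'week-3', 'week-4']
```

Only after all sessions are scored would the key be opened to align the data with the medication and placebo weeks.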

Treatment Integrity

The results of many experiments have been confounded by inconsistent application of the independent variable. The researcher must make a concerted effort to ensure that the independent variable is applied exactly as planned and that no other unplanned variables are administered inadvertently along with the planned treatment. The terms treatment integrity and procedural fidelity refer to the extent to which the independent variable is implemented or carried out as planned.

Low treatment integrity invites a major source of confounding into an experiment, making it difficult, if not impossible, to interpret the results with confidence. Data from an experiment in which the independent variable was administered improperly, applied inconsistently, conducted piecemeal, and/or delivered in overdose or underdose form often lead to conclusions that, depending on the results obtained, represent either a false positive (claiming a functional relation when no such relation exists) or a false negative (failing to detect a functional relation when one actually does exist). If a functional relation is apparent from the analysis of the data, one cannot be sure whether the treatment variable as described by the experimenter was responsible or whether the effects were a function of extraneous, uncontrolled elements of the intervention as it was actually applied. On the other hand, it may be equally erroneous to interpret the failure to produce significant behavior change as evidence that the independent variable is ineffective. In other words, had the independent variable been implemented as planned, it might have been effective.

Numerous threats to treatment integrity exist in applied settings (Billingsley, White, & Munson, 1980; Gresham, Gansle, & Noell, 1993; Peterson, Homer, & Wonderlich, 1982). Experimenter bias can cause the researcher to administer the independent variable in such a way that it enjoys an unfair advantage over the baseline or comparative conditions. Treatment drift occurs when the application of the independent variable during later phases of an experiment differs from the way it was applied at the outset of the study. Treatment drift can result from the complexity of the independent variable, which can make it difficult for practitioners to implement all of the elements consistently over the course of an experiment. Contingencies influencing the behavior of those responsible for implementing the independent variable can also result in treatment drift. For example, a teacher

might implement only those aspects of a procedure that she favors and might implement the full intervention only when the experimenter is present.

Precise Operational Definition. Achieving a high level of treatment integrity begins with developing a complete and precise operational definition of the treatment procedures. Besides providing the basis for training the persons who will implement an intervention and for judging the level of treatment integrity attained, operational definitions of treatment conditions are a requisite for meeting the technological dimension of applied behavior analysis (Baer et al., 1968). An investigator's failure to provide explicit operational definitions of the treatment variable hampers the dissemination and proper use of the intervention by practitioners, and makes it difficult for other researchers to replicate and ultimately validate the findings.

Gresham and colleagues (1993) recommended that descriptions of the independent variable be judged by the same standards of explicitness that are used in assessing the quality of definitions of the dependent variable. That is, they should be clear, concise, unambiguous, and objective. More specifically, Gresham and colleagues suggested that treatments be operationally defined in each of four dimensions: verbal, physical, spatial, and temporal. They used Mace, Page, Ivancic, and O'Brien's (1986) definition of a time-out procedure as an example of an operational definition of an independent variable.

(a) Immediately following the occurrence of a target behavior (temporal dimension), (b) the therapist said "No, go to time-out" (verbal dimension), (c) led the child by the arm to a prepositioned time-out chair (physical dimension), (d) seated the child facing the corner (spatial dimension). (e) If the child's buttocks were raised from the time-out chair or if the child's head was turned more than 45° (spatial dimension), the therapist used the least amount of force necessary to guide compliance with the time-out procedure (physical dimension). (f) At the end of 2 min (temporal dimension), the therapist turned the time-out chair 45° from the corner (physical and spatial dimensions) and walked away (physical dimension). (pp. 261–262)

Simplify, Standardize, and Automate. When planning an experiment, placing a high priority on simplifying and standardizing the independent variable, and providing criterion-based training and practice for the people who will be responsible for implementing it, enhances treatment integrity. Treatments that are simple, precise, and brief, and that require relatively little effort, are more likely to be delivered with consistency than those that are not. Simple, easy-to-implement techniques also have a higher probability of being accepted and used by practitioners than those that are not, and thus possess a certain degree of self-evident social validity. Simplicity is, of course, a relative concern, not a mandate; effecting change in some socially important behaviors may require the application of intense, complex interventions over a long period of time and may involve many people. Baer (1987) made this point succinctly when he stated:

Long problems are simply those in which the task analysis requires a series of many behavior changes, perhaps in many people, and although each of them is relatively easy and quick, the series of them requires not so much effort as time, and so it is not arduous but merely tedious. (pp. 336–337)

Practitioners need not be thwarted or dismayed by complex interventions; they simply need to recognize the implications for treatment integrity. All things being equal, however, a simple and brief treatment will probably be applied more accurately and consistently than will a complex and extended one.

To ensure the consistent implementation of the independent variable, experimenters should standardize as many of its aspects as cost and practicality allow. Standardization of treatment can be accomplished in a variety of ways. When a treatment requires a complex and/or extended sequence of behaviors, a script for the person administering it may improve the accuracy and consistency with which the independent variable is applied. For example, Heron, Heward, Cooke, and Hill (1983) used a scripted lesson plan with overhead transparencies to ensure that a classwide peer tutor training program was implemented consistently across groups of children.

If automating the intervention will not compromise it in any way, researchers might consider "canning" the independent variable so that an automated device can be used for its delivery. Although a videotaped tutor training presentation in Heron and colleagues' (1983) study would have eliminated any potential confounding caused by the teacher's slightly different presentations of the lesson from group to group and across sets of tutoring skills, using a canned presentation would also have eliminated the desired interactive and personal aspects of the training program. Some treatment variables are well suited to automated presentation in that automation neither limits the desirability of the treatment nor seriously reduces its social validity in terms of acceptability or practicability (e.g., the use of videotaped programs to model residential energy conservation).

Training and Practice. Training and practice in properly implementing the independent variable provide the person(s) who will be responsible for conducting the treatment or experimental sessions with the necessary skills and knowledge to carry out the treatment. It would be a mistake for the researcher to assume that a person's general competence and experience in the experimental setting (e.g., a classroom) guarantees correct and consistent application of an independent variable in that setting (e.g., implementing a peer-mediated tutoring program).

7 Different levels or degrees of treatment integrity can be manipulated as an independent variable to analyze the effects of full versus partial implementations of an intervention, or various kinds of treatment "mistakes" (e.g., Holcombe, Wolery, & Snyder, 1994; Vollmer, Roane, Ringdahl, & Marcus, 1999).

As stated earlier, scripts detailing treatment procedures, and cue cards or other devices that remind and prompt people through the steps of an intervention, can be helpful. Researchers should not, however, assume that merely providing the intervention agent with a detailed script will ensure a high degree of treatment integrity. Mueller and colleagues (2003) found that a combination of verbal instructions, modeling, and/or rehearsal was required for parents to implement pediatric feeding protocols with a high level of treatment integrity. Performance feedback has also been shown to improve the integrity with which parents and practitioners implement behavior support plans and explicit teaching techniques (e.g., Codding, Feinberg, Dunn, & Pace, 2005; Sarokoff & Sturmey, 2004; Witt, Noell, LaFleur, & Mortenson, 1997).

Assessing Treatment Integrity. Although simplification, standardization, and training help increase the degree of treatment integrity, they do not guarantee it. If there is any doubt about the correct and consistent application of the independent variable, investigators should provide data on the accuracy and reliability of the independent variable (Peterson et al., 1982; Wolery, 1994). Treatment integrity (or procedural fidelity) data reveal the extent to which the actual implementation of all of the experimental conditions over the course of a study matches their descriptions in the method section of a research report.7

Even though the effective control of the presence and absence of the independent variable is a requisite to an internally valid experiment, applied behavior analysts have not always made sufficient efforts to assure the integrity of the independent variable. Two reviews of articles published in the Journal of Applied Behavior Analysis from 1968 to 1990 found that the majority of authors did not report data assessing the degree to which the independent variable was properly and consistently applied (Gresham et al., 1993; Peterson et al., 1982). Peterson and colleagues noted that a "curious double standard" had developed in applied behavior analysis in which data on the interobserver agreement of dependent variable measures were required for publication, but such data were seldom provided or required for the independent variable.

Peterson and colleagues (1982) suggested that the technology developed for assessing and increasing the accuracy and believability of measures of the dependent variable is fully applicable to the collection of procedural fidelity data. Importantly, observation and recording of the independent variable provide the experimenter with data indicating whether calibration of the treatment agent is necessary (i.e., bringing the intervention agent's behavior into agreement with the true value of the independent variable). Observation and calibration give the researcher an ongoing ability to use retraining and practice to ensure a high level of treatment integrity over the course of an experiment.

Figure 4 shows the data collection form used by trained observers to collect treatment integrity data in a study evaluating the effects of different qualities and durations of reinforcement for problem behavior, compliance, and communication within a treatment package for escape-maintained problem behavior (Van Norman, 2005). The observers viewed videotapes of randomly selected sessions representing approximately one third to one half of all sessions in each condition and phase of the study. The percentage of treatment integrity for each condition was calculated by dividing the number of steps the experimenter completed correctly during a session by the total number of steps completed.
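The percentage calculation just described reduces to a one-line formula. This is an illustrative sketch, not code from the study; the function name and the example numbers are invented:

```python
# Illustrative sketch of the treatment integrity calculation described above:
# (steps completed correctly / total steps completed) x 100 for one session.

def treatment_integrity(correct_steps: int, total_steps: int) -> float:
    """Percentage of procedural steps implemented correctly in a session."""
    if total_steps == 0:
        raise ValueError("a session must contain at least one observed step")
    return 100.0 * correct_steps / total_steps

# Example: an observer scores 15 of 16 procedural steps in a session as correct.
print(treatment_integrity(15, 16))  # 93.75
```

In practice, one such score would be computed for every scored session and summarized (e.g., mean and range) for each condition and phase.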

This overview of sources of potential confounding variables is, of necessity, incomplete. A complete inventory of all possible threats to the internal validity of experimental research would be well beyond the scope of this text. And presenting such a list might suggest that a researcher need only control for the variables listed and not worry about anything else. In truth, the list of potential confounds is unique to every experiment. The effective researcher is one who questions and probes the influence of as many relevant variables as possible. No experimental design can control all potential confounds; the challenge is to reduce, eliminate, or identify the influence of as many potentially confounding variables as possible.

Social Validity: Assessing the Applied Value of Behavior Changes and the Treatments That Accomplish Them

In his landmark article, "Social Validity: The Case for Subjective Measurement or How Applied Behavior Analysis Is Finding Its Heart," Montrose Wolf (1978) proposed the then "radical concept that clients (including parents and guardians of dependent people, and even those whose taxes support social programs) must understand and admire the goals, outcomes, and methods of an intervention" (Risley, 2005, p. 284). Wolf recommended that the social validity of a study in applied behavior analysis

Procedural Steps: Phase 1/A (SD)

1. The instructor delivers a task prompt at the beginning of the session (e.g., "It's time to work" or similar).

2. If the participant does not respond, the instructor re-presents the choice by replacing materials or restating the contingencies.

3. The break card (or similar) is presented simultaneously (within 3 seconds) with work task materials.

4. Following a work choice (touching materials associated with work), the instructor:
a. Removes the task materials
b. Presents a timer with a green cue card
c. Provides access to high-preference items
d. Engages in play with the participant for 1 minute

5. Following break requests, the instructor:
a. Removes the task materials
b. Presents a timer with a yellow cue card
c. Provides access to moderately preferred tangible items and neutral commenting for 30 seconds

6. Following problem behavior, within 10 seconds the instructor:
a. Removes the task/play materials
b. Presents a timer with a red cue card
c. Provides no attention or tangible items for 10 seconds

7. On which side (participant's R = right or L = left) was the break card (or similar) presented for each choice presentation?

[For each step, the form provided columns for recording the number of opportunities, each opportunity scored Yes, No, or N/A, and the resulting percentage correct, along with spaces for the video clip number, rater initials, and date.]

Figure 4 Example of a form used to record treatment integrity data. Adapted from "The Effects of Functional Communication Training, Choice Making, and an Adjusting Work Schedule on Problem Behavior Maintained by Negative Reinforcement" by R. K. Van Norman, 2005, p. 204. Unpublished doctoral dissertation, Columbus, OH: The Ohio State University. Used by permission.


should be assessed in three ways: the social significance of the target behavior, the appropriateness of the procedures, and the social importance of the results.

Although social validity assessments may increase the likelihood that a study will be published and can be helpful in the marketing and public relations of behavioral programs (Hawkins, 1991; Winett, Moore, & Anderson, 1991), the ultimate purpose of social validity assessments is "to help choose and guide [behavior change] program developments and applications" (Baer & Schwartz, 1991, p. 231). Social validity assessments are most often accomplished by asking the direct consumers of a behavior change program (the learners, clients, or research subjects) and/or a group of indirect consumers (e.g., family members, teachers, therapists, community members) questions about how satisfied they are with the relevance and importance of the goals of the program, the acceptability of the procedures, and the value of the behavior change outcomes achieved.8

Verbal statements by practitioners and consumers that they find a treatment or program acceptable and effective should not be viewed as evidence that the program was effective, or, if it was, that consumers will continue to use the methods. Noting Baer, Wolf, and Risley's (1968) admonition that "a subject's verbal description of his own nonverbal behavior usually would not be accepted as a measure of his actual behavior" (p. 93), Hawkins (1991) recommended that the term consumer satisfaction be used instead of social validity because it acknowledges that what is typically obtained in social validity assessments "is essentially a collection of consumer opinions" (p. 205), the validity of which has not been determined.

In measuring consumers' verbal judgments, we are only hoping that these verbal behaviors are substantially controlled by variables directly relevant to the habilitation task at hand, and thus that they predict habilitative outcomes to some degree. The validity of such consumer judgments has yet to be established; they should not be viewed as a validity criterion but rather as a second opinion from a lay person who may or may not be better informed and less biased than the professional is. (p. 212)

Validating the Social Importance of Behavior Change Goals

The social validity of behavior change goals begins with a clear description of those goals.

8 Detailed discussions of social validity and procedures for assessing it can be found in Fuqua and Schwade (1986), Van Houten (1979), Wolf (1978), and the special section on social validity in the summer 1991 issue of the Journal of Applied Behavior Analysis.

To assess the social importance of goals, the researcher must be precise about the goals of the behavior change effort at the levels of (a) the broad social goal (e.g., improved parenting, enhanced social skills, improved cardiovascular health, increased independence), (b) the categories of behavior hypothesized to be related to the broad goal (e.g., parenting: providing instructional feedback, using time-out, etc.), and/or (c) the responses that comprise the behavioral category of interest (e.g., using time-out: directing the child to a location away from other people, instructing the child to "sit out" for a specified duration, etc.). Social validation may be conducted for any of these levels of goals. (Fawcett, 1991, pp. 235–236)

Van Houten (1979) suggested two basic approaches to determining socially valid goals: (a) assess the performance of persons considered competent, and (b) experimentally manipulate different levels of performance to determine empirically which produces optimal results. Observations of the performance of typical performers can be used to identify and validate behavior change goals and target levels of performance. To arrive at a socially valid performance criterion for a social skills training program for two adults with disabilities who worked in a restaurant, Grossi, Kimball, and Heward (1994) observed four restaurant employees without disabilities over a period of 2 weeks to determine the frequency with which they acknowledged verbal initiations from coworkers. Results from these observations revealed that the employees without disabilities acknowledged an average of 90% of the initiations directed toward them. This level of performance was selected as the goal for the two target employees in the study.
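The normative-sampling arithmetic in the Grossi and colleagues example reduces to averaging the competent performers' levels and adopting the mean as the participants' goal. The sketch below illustrates that logic; the function name and observation values are invented, not data from the study:

```python
# Hedged sketch of Van Houten's first approach: average the performance of a
# normative sample of competent performers and use the mean as the target goal.

def normative_criterion(performance_levels):
    """Mean performance level of a normative sample, used as a target goal."""
    return sum(performance_levels) / len(performance_levels)

# e.g., percentage of coworker initiations acknowledged by each of four employees
observed = [88.0, 92.0, 91.0, 89.0]
print(normative_criterion(observed))  # 90.0
```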

A study by Warren, Rogers-Warren, and Baer (1976) provided a good example of testing the effects of different levels of performance to determine socially valid outcomes. The researchers assessed the effect of different frequencies of children's offers to share play materials with their peers on the peers' reactions to those offers. They found that peers accepted offers to share most consistently when those offers were made at a middle frequency; that is, not too frequently, not too seldom.

Validating the Social Acceptance of Interventions

Several scales and questionnaires for obtaining consumers' opinions of the acceptability of behavioral interventions have been developed. For example, the Intervention Rating Profile is a 15-item Likert-type scale for assessing the acceptability of classroom interventions (Martens, Witt, Elliott, & Darveaux, 1985). The Treatment Acceptability Rating Form (TARF) consists of 20 questions with which parents rate the acceptability of behavioral treatments used in outpatient clinics (Reimers & Wacker, 1988).

Figure 5 shows the experimenter-modified version of the TARF used by Van Norman (2005) to obtain treatment acceptability information from each participant's parents, teachers, therapists, and behavior support staff. Although some of the people whose opinions were being sought had witnessed, or had watched a video of, the intervention being used with the student, the following description of the intervention was read to each consumer before he or she was asked to answer the questions:

First we conducted an assessment to find out what motivated Zachary to engage in challenging behavior(s) such as throwing materials, hitting people, and dropping to the floor. We found that Zachary engaged in challenging behavior(s), at least in part, in order to escape or avoid task demands.

Next, we taught Zachary to ask for a break as a replacement behavior for challenging behavior by using physical prompting and attaching the response of asking for a break to access to a highly preferred item, lots of attention, and a long duration break (3 min).

Then we gave Zachary the choice to ask for work by simply touching the work materials (essentially engaging in the first step of the task) and getting access to highly preferred items, attention, and a long duration break (1 min), or asking for a break and getting access to moderately preferred items for a shorter duration break (30 sec). At any time during this procedure, if Zachary engaged in problem behavior he was given a 10 s break with no attention and no activities/items.

Finally, we continued to give Zachary the choice to ask for work, ask for a break, or engage in problem behavior; however, now we required Zachary to comply with a greater number of task-related instructions before he was given access to the highly preferred activities, attention, and a 1 min break. Each session we increased the number of task-related instructions that had to be complied with before access to the highly preferred break.

Physical prompting was used only during the initial phase to teach Zachary new responses, specifically how to ask for a break and how to ask for work. Otherwise Zachary was making all independent choices as they were presented. (p. 247)

Validating the Social Importance of Behavior Changes

Methods for assessing the social validity of outcomes include (a) comparing participants' performance to the performance of a normative sample, (b) asking consumers to rate the social validity of participants' performance, (c) asking experts to evaluate participants' performance, (d) using a standardized assessment instrument, and (e) testing participants' newly learned level of performance in the natural environment.

Normative Sample

Van den Pol and colleagues (1981) used the performance of a normative sample of typical fast-food restaurant customers to assess the social validity of the posttraining performance of the young adults with disabilities whom they had taught to order or pay for a meal without assistance. The researchers simply observed 10 randomly selected, typical customers who ordered and ate a meal in fast-food restaurants. They recorded the accuracy with which these customers performed each step of a 22-step task analysis. The students' performance at the follow-up probe equaled or exceeded that of the customers in the normative sample on all but 4 of the 22 specific skills.
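A per-step comparison of this kind is straightforward to compute. The sketch below is illustrative only (invented function name and data, not Van den Pol and colleagues' actual task analysis): it counts the steps of a task analysis on which a participant falls below the normative sample's mean accuracy.

```python
# Illustrative sketch: compare a participant's per-step accuracy on a task
# analysis against a normative sample's mean accuracy for each step, and
# count the steps on which the participant scored below the norm.

def steps_below_norm(participant, norm):
    """Number of task-analysis steps on which the participant scored below
    the normative sample's mean accuracy for that step."""
    return sum(1 for p, n in zip(participant, norm) if p < n)

# Four-step example: the participant falls below the norm only on step 3.
participant_acc = [100.0, 90.0, 80.0, 100.0]
norm_acc = [95.0, 85.0, 90.0, 100.0]
print(steps_below_norm(participant_acc, norm_acc))  # 1
```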

Using normative samples to assess the social validity of behavior change is not limited to posttreatment comparisons. Comparing subjects' behavior to ongoing probes of the behavior of a normative sample provides a formative assessment of how much improvement has been made and how much is still needed. An excellent example of ongoing social validity assessment is a study by Rhode, Morgan, and Young (1983), in which token reinforcement and self-evaluation procedures were used to improve the classroom behavior of six students with behavior disorders. The overall goal of the study was to help the six students improve their appropriate classroom behavior (e.g., following classroom rules, completing teacher-assigned tasks, volunteering relevant responses) and decrease inappropriate behavior (e.g., talking out, noncompliance, aggression) so that they would be accepted and successful in regular (general education) classrooms. At least once per day throughout the course of the 17-week study, the researchers randomly selected classmates in the regular classrooms for observation. The same observation codes and procedures that were used to measure the six target students' behavior were used to obtain the normative sample data.

Figure 6 shows the mean and range of the six students' appropriate behavior during each condition and phase of the study compared to the normative sample. (Individual graphs showing the percentage of appropriate behavior in the resource and regular classrooms of all six subjects in each of nearly 90 sessions were also included in Rhode and colleagues' article.) During baseline, the six boys' levels of appropriate behavior were well below those of their nondisabled peers. During Phase I of the study, in which the subjects learned to self-evaluate, their behavior in the resource room improved to a level matching that of their regular classroom peers. However, when the subjects were in the regular classroom during Phase I, their behavior compared poorly with that of the other


259

Planning and Evaluating Applied Behavior Analysis Research

Figure 5 Examples of questions adapted from the Treatment Acceptability Rating Form—Revised (Reimers & Wacker, 1988) to obtain consumers' opinions of the acceptability of intervention procedures used to treat challenging behaviors of secondary students with severe disabilities.

Treatment Acceptability Rating Form—Revised (TARF-R)

Each item is rated on a 7-point scale; only the endpoint and midpoint anchors are labeled.

1. How clear is your understanding of the suggested procedures? (Not at all clear / Neutral / Very clear)

2. How acceptable do you find the strategies to be regarding your concerns about the identified learner? (Not at all acceptable / Neutral / Very acceptable)

3. How willing are you to implement the suggested procedures as you heard them described? (Not at all willing / Neutral / Very willing)

4. Given the learner's behavior issues, how reasonable do you find the suggested procedures? (Not at all reasonable / Neutral / Very reasonable)

5. How costly will it be to implement these strategies? (Not at all costly / Neutral / Very costly)

11. How disruptive will it be to your classroom to implement the suggested procedures? (Not at all disruptive / Neutral / Very disruptive)

13. How affordable are these procedures? (Not at all affordable / Neutral / Very affordable)

14. How much do you like the proposed procedures? (Do not like them at all / Neutral / Like them very much)

17. How much discomfort is your learner likely to experience as a result of these procedures? (No discomfort at all / Neutral / Very much discomfort)

19. How willing would you be to change your classroom routine to implement these procedures? (Not at all willing / Neutral / Very willing)

20. How well will carrying out these procedures fit into your classroom routine? (Not at all well / Neutral / Very well)

From "The Effects of Functional Communication Training, Choice Making, and an Adjusting Work Schedule on Problem Behavior Maintained by Negative Reinforcement" by R. K. Van Norman, 2005, pp. 248–256. Unpublished doctoral dissertation, Columbus, OH: The Ohio State University. Used by permission.


[Figure 6 is a line graph. The y-axis shows mean percentage of appropriate behavior (0 to 100); the x-axis lists the experimental conditions in order: Baseline; Match/100; Match/50; Match/33 1/3; Match/16 2/3; S-Eval./30; S-Eval./30; S-Eval./60; S-Eval./VR 2 Days; S-Eval./No Points; S-Eval./Verb. 60 min.; S-Eval./Verb. VR2 Days; No S-Eval., grouped into Baseline, Phase I, and Phase II. The legend distinguishes the subjects in the resource room, the subjects in the regular classroom (with their range), and the peers in the regular classroom.]

Figure 6 Example of using measures of the behavior of a normative sample as a standard for assessing the social validity of the outcomes of a behavior change program. From "Generalization and Maintenance of Treatment Gains of Behaviorally Handicapped Students from Resource Rooms to Regular Classrooms Using Self-Evaluation Procedures" by G. Rhode, D. P. Morgan, and K. R. Young, 1983, Journal of Applied Behavior Analysis, 16, p. 184. Copyright 1984 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

students in the normative sample. As Phase II progressed, which involved various strategies for generalization and maintenance of the treatment gains, the mean level of appropriate behavior by the six students matched that of their nondisabled peers, and variability among the six students decreased (except for one subject who exhibited no appropriate behavior during one session in the next-to-last condition).

Consumer Opinion

The most frequently used method for assessing social validity is to ask consumers, including subjects or clients whenever possible, whether they thought behavior changes occurred during the study or program, and if so, whether they thought those behavior changes were important and valuable. Figure 7 shows the questionnaire Van Norman (2005) used to obtain the opinions of consumers (i.e., the subjects' parents, teachers, and instructional aides; school administrators; behavior support staff; an occupational therapist; a school psychologist; and a psychology aide) on the social validity of the results of an intervention designed to reduce challenging behavior maintained by escape. Van Norman created a series of 5-minute video clips

from randomly selected before-intervention and after-intervention sessions and placed the clips in random order on a CD. The social validity evaluators did not know whether each clip represented a before- or after-intervention session. After viewing each clip, the consumer completed the questionnaire shown in Figure 7.

Figure 7 Form for obtaining consumer opinions of social validity of results of an intervention used to treat challenging behavior of secondary students with severe disabilities.

Social Validity of Results Questionnaire

Video Clip # _____

Directions: Please view the video and then circle one of the five choices that best describes the extent to which you agree or disagree with each of the three statements below.

1. The student is engaged in academic or vocational work tasks, sitting appropriately (bottom in seat), and attending to the teacher or materials.

1 2 3 4 5
Strongly Disagree   Disagree   Undecided   Agree   Strongly Agree

2. The student is engaged in challenging behavior and not attending to the teacher or materials.

1 2 3 4 5
Strongly Disagree   Disagree   Undecided   Agree   Strongly Agree

3. The student appears to have a positive affect (e.g., smiling, laughing).

1 2 3 4 5
Strongly Disagree   Disagree   Undecided   Agree   Strongly Agree

Comments about the student's behavior in the video clip: ___________________________

General comments about this student's behavior: __________________________________

Name (optional): _____________________________________________________________

Relation to the learner in video (optional): ________________________________________

Adapted from "The Effects of Functional Communication Training, Choice Making, and an Adjusting Work Schedule on Problem Behavior Maintained by Negative Reinforcement" by R. K. Van Norman, 2005, p. 252. Unpublished doctoral dissertation. Columbus, OH: The Ohio State University. Used by permission.

Expert Evaluation

Experts can be called on to judge the social validity of some behavior changes. For example, as one measure of the social validity of changes in the unaided notetaking skills of high school students with learning disabilities in social studies lectures as a result of having been exposed to teacher-prepared guided notes, White (1991) asked 16 secondary social studies teachers to rate the students' baseline and post-intervention lecture notes on three dimensions: (1) accuracy and completeness compared to lecture content; (2) usefulness for studying for tests over the lecture content; and (3) how the notes compared to those taken by typical general education students. (The teachers did not know whether each set of notes they were rating was from the baseline or post-intervention condition.)

Fawcett (1991) observed that, "If expert ratings are not sufficiently high, [the researcher should] consider what else might be done to program for social validity. Despite the best efforts, assessments may show that research goals are regarded by consumer judges as insignificant, intervention procedures as unacceptable, or results as unimportant" (p. 238).

Standardized Tests

Standardized tests can be used to assess the social validity of some behavior change program outcomes. Iwata, Pace, Kissel, Nau, and Farber (1990) developed the Self-Injury Trauma Scale (SITS) to enable researchers and therapists to measure the number, type, severity, and location of injuries produced by self-injurious behavior. The SITS yields a Number Index and a Severity Index with scores of 0 to 5, and an estimate of current risk. Although the data collected in a treatment program may show significant decreases in the behaviors that produce self-injury (e.g., eye poking, face slapping, head banging), the social significance of the treatment must be validated by evidence of reduced injury. Iwata and colleagues wrote:

. . . the social relevance of the behavior lies in its traumatic outcome. The measurement of physical injuries prior to treatment can establish the fact that a client or subject actually displays behavior warranting serious attention. . . . Conversely, injury measurement following treatment can corroborate observed changes in behavior because reduction of an injury-producing response below a certain level should be reflected in the eventual disappearance of observable trauma. In both of these instances, data on injuries provide a means of assessing social validity. (Iwata et al., 1990, pp. 99–100)

Twohig and Woods (2001) used the SITS to validate the outcomes of a habit-reversal treatment for the chronic skin picking of two typically developing adult males. Both men reported that they had engaged in skin picking since childhood, digging their fingernails into the ends of a finger and pulling or scraping the skin, which sometimes caused bleeding, scarring, and infections. Two observers independently rated pretreatment, posttreatment, and follow-up photographs of the two men's hands with the SITS. The Number Index (NI) and Severity Index (SI) SITS scores on the pretreatment photographs for both men were 1 and 2, respectively, indicating one to four injuries on either hand and distinct but superficial breaks in the skin. NI and SI scores of 0 on the posttreatment photos for both men indicated no apparent injuries. On the follow-up photos taken 4 months after treatment had ended, both men had SITS NI and SI scores of 1, indicating red or irritated skin.
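The SITS values just reported are ordinal scores whose meaning comes from a published rubric. As a rough reading aid only, the interpretations mentioned in this passage can be tabulated; the lookup below is a hypothetical sketch covering only the values discussed here, not the full 0-to-5 scale defined by Iwata and colleagues (1990):

```python
# Interpretations of low SITS values as described in this passage only;
# the published instrument defines the complete 0-5 range for each index.
SEVERITY_INDEX = {
    0: "no apparent injuries",
    1: "red or irritated skin",
    2: "distinct but superficial breaks in the skin",
}
NUMBER_INDEX = {
    0: "no apparent injuries",
    1: "one to four injuries",
}

def describe_sits(number, severity):
    """Return readable descriptions for a (Number Index, Severity Index) pair."""
    return (NUMBER_INDEX.get(number, "see full SITS manual"),
            SEVERITY_INDEX.get(severity, "see full SITS manual"))

# Pretreatment scores for both men in Twohig and Woods (2001): NI = 1, SI = 2.
print(describe_sits(1, 2))
# → ('one to four injuries', 'distinct but superficial breaks in the skin')
```

The function name and the fallback string are illustrative assumptions, not part of the SITS instrument.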

Real-World Test

Perhaps the most socially valid way to assess the social validity of a learner's newly acquired behavior is to put it to an authentic test in the natural environment. For instance, the validity of what three adolescents with learning disabilities had learned about road signs and traffic laws was validated when they passed the Ohio Department of Motor Vehicles test and earned their temporary driver's permits (Test & Heward, 1983).

In a similar way, the social validity of the cooking skills being learned by three secondary students with developmental disabilities and visual impairments was tested frequently when their friends arrived at the end of probe sessions to share the food they had just prepared (Trask-Tyler, Grossi, & Heward, 1994). In addition to providing a direct and authentic assessment of social validity, real-world tests put the learner's repertoire in contact with naturally occurring contingencies of reinforcement, which may promote maintenance and generalization of the newly acquired behaviors.

External Validity: Replicating Experiments to Determine the Generality of Research Findings

External validity refers to the degree to which a functional relation found reliable and socially valid in a given experiment holds under different conditions. An intervention that works only within a circumscribed set of

conditions and proves ineffective when any aspect of the original experiment is altered makes a limited contribution to the development of a reliable and useful technology of behavior change. When a carefully controlled experiment has shown that a particular treatment produces consistent and socially significant improvements in the target behavior of a given subject, a series of important questions should then be asked: Will this same treatment be as effective if it is applied to other behaviors? Will the procedure continue to work if it is changed in some way (e.g., if implemented at a different time of the day, by another person, or on a different schedule)? Will it work in a setting different from the original experiment? Will it work with participants of different ages, backgrounds, and repertoires? Questions about external validity are not abstract or rhetorical; they are empirical questions that can be addressed only by empirical methods.

A functional relation with external validity, or generality, will continue to operate under a variety of conditions. External validity is a matter of degree, not an all-or-nothing property. A functional relation that cannot be reproduced under any conditions other than the exact set of original variables (including the original subject) possesses no external validity. At the other end of the continuum, a procedure that is effective at any time, under any conditions, in any setting, with any behavior, and for any subject has complete generality (an improbable situation). Most functional relations fall somewhere between the two ends of this continuum, and those found to have higher degrees of generality make the greater contribution to applied behavior analysis. Investigators who use groups-comparison research methods approach the issue of external validity quite differently than do investigators who use within-subjects research methods.

External Validity and Groups Design Research

As stated previously, practitioners of group-comparison experimental designs claim two advantages for the use of large groups of subjects. In addition to the assumption that aggregating the data of a group of subjects will control for intersubject variability, researchers who employ group designs assume that including many subjects in an experiment increases the external validity of a study's results. On the surface this assumption is perfectly logical, and when viewed at the proper level of extrapolation, it is also true. The more subjects with which a functional relation has been demonstrated, the more likely it is that that functional relation will also be effective with other subjects who share similar characteristics. And in fact, demonstrating a functional relation with various subjects in different settings is exactly how applied behavior analysts document external validity.


Planning and Evaluating Applied Behavior Analysis Research

However, the investigator who claims that the findings of a groups-comparison study possess generality to other individuals in the population from which the experimental subjects were chosen violates a fundamental premise of the groups-comparison method and ignores a defining characteristic of behavior. The proper inferences about the results of a groups-design study are from the sample to the population, not from the sample to the individual (Fisher, 1956). The careful methods of random sampling used in groups-design research are followed to ensure that the participants in the study represent a heterogeneous sample of all the relevant characteristics found in the population from which they were selected. Indeed, the better the sample represents the population from which it is drawn, the less meaningful are the results for any individual subject. "The only statement that can be made concerns the average response of a group with that particular makeup which, unfortunately, is unlikely to be duplicated" (Hersen & Barlow, 1976, p. 56).

A second problem inherent in attempting to extend the results of a groups-comparison study to other people (and unless care is taken, sometimes even to a subject who participated in the study, as was illustrated in Figure 1) is that the groups-design experiment does not demonstrate a functional relation between the behavior of any subject and some aspect of his or her environment. In other words, from the perspective of behavior analysis, there is nothing in the results of a groups-design experiment that can have external validity; there is nothing to generalize. Johnston and Pennypacker (1993a) made this point repeatedly and well.

Between groups designs tend to put the cart before the horse. The tactic of exposing different levels of the independent variable to different groups composed of a large number of subjects and treating their responses collectively by asking if they represent the untested portion of the population provides comparisons that describe no member of any group. By failing to focus on individuals with careful attention to experimental control, these traditional methods greatly decrease the chances of discovering orderly relationships in the first place, thereby making the questions of subject generality moot. (1993a, p. 352)

The researcher's first objective is to obtain data that truly represent the relationship between the experimental conditions and the dependent variable for each subject. If this is not accomplished, nothing else matters. Only when the findings are "true" does the question of the meaningfulness or universality of these results become relevant. (1993a, p. 250)

Groups-comparison designs and statistical inference have long dominated research in psychology, education, and the other social sciences. Despite its long-standing dominance, the extent to which this research tradition has

contributed to an effective technology of behavior change is highly questionable (Baer, 1977b; Birnbrauer, 1981; Michael, 1974). The field of education is perhaps the most telling example of the inability of groups-design research to provide data that lead to improved practice (Greer, 1983; Heward & Cooper, 1992). Instructional methods in the classroom are often influenced more by fad, the personal style of individual teachers, and ideology than by the cumulative knowledge and understanding provided by rigorous and sustained experimental analysis of the variables of which learning is a function (Heron, Tincani, Peterson, & Miller, 2005; Kozloff, 2005; Zane, 2005).

The methods of groups-comparison experimentation are simply inappropriate for answering the questions of primary interest to the applied behavior analyst: empirical questions that can be pursued only by analysis of repeated measures of individual behavior under all relevant conditions. We agree with Johnston and Pennypacker (1993b):

We find the reasoning underlying all such procedures alien to both the subject matter and the goals of a natural science of behavior and regard the utility of group comparisons as extremely limited, no matter how elegant the mathematical treatment of data they afford. . . . [Group comparison experimentation constitutes] a process of scientific inquiry that is almost totally inverted; instead of using questions about natural phenomena to guide decisions about experimental design, models of design are allowed to dictate both the form and content of the questions asked. Not only is this antithetical to the established role of experimentation in science, the types of questions allowed by groups comparison designs are largely inappropriate or irrelevant to gaining an understanding of the determinants of behavior. (pp. 94–95)

Our discussion of the inherent limitations of groups-comparison designs for behavior analysis should not be confused with a position that group designs and statistical inference have no value as research methods for seeking empirical knowledge about the world. On the contrary, group-comparison designs and statistical inference are highly effective tools for seeking answers to the kinds of questions for which they were devised. Properly designed and well-executed groups-comparison experiments can provide answers with a specific degree of confidence (i.e., probability) to questions involved in many large-scale evaluations. For example, a government body is less interested in the effects of a new regulation on any individual person (and even less interested in whether a functional relation exists between the regulation and that person's behavior) than it is in the probability that the behavior of a predictable percentage of the population will be affected by the regulation. The former concern is



a behavioral one, and the experimental methods of behavior analysis provide the means to address it. The latter concern is an actuarial one, and it is best pursued with the methods of random sampling, groups comparison, and statistical inference.

External Validity and Applied Behavior Analysis

The external validity (generality) of research findings in applied behavior analysis is assessed, established, and specified through the replication of experiments.

In order to know whether a particular result will be obtained for a subject in another study or under applied circumstances, what we really need to know is what variables are necessary to make the effect occur, what variables will prevent it from occurring, and what variables will modulate it. . . . This information cannot be learned by increasing the size of the control and experimental groups. It requires conducting a series of experiments that identify and study variables that might fall into one of these three categories. (Johnston & Pennypacker, 1993a, p. 251)

Replication in this context means repeating a previous experiment.9 Sidman (1960) described two major types of scientific replication: direct and systematic.

Direct Replication

In a direct replication, the researcher makes every effort to duplicate exactly the conditions of an earlier experiment. If the same subject is used in a direct replication, the study is an intrasubject direct replication. Intrasubject replication within experiments is a defining characteristic of applied behavior analysis research and the primary tactic for establishing the existence and reliability of a functional relation. An intersubject direct replication maintains every aspect of the earlier experiment except that different, although similar, subjects are involved (i.e., same age, similar repertoires). Intersubject replication is the primary method for determining the extent to which research findings have generality across subjects.

The many uncontrolled variables in natural settings make the direct replication of experiments outside of the laboratory extremely difficult. Nevertheless, intersubject replication is the rule rather than the exception in applied behavior analysis. Although numerous single-subject studies involve just one subject (e.g., Ahearn, 2003; Dixon & Falcomata, 2004; Kodak, Grow, & Northup, 2004; Tarbox, Williams, & Friman, 2004), the vast majority of published studies in applied behavior analysis include direct intersubject replications. This is because each subject is usually considered an intact experiment. For example, a behavior analysis study in which the independent variable is manipulated in exactly the same way for six subjects in the same setting yields five intersubject replications.

9Johnston and Pennypacker (1980) pointed out a distinction between replicating an experiment and reproducing its results. They stated that the quality of the replication should be judged only by "the extent to which equivalent environmental manipulations associated with [the original experiment] are duplicated. . . . Thus, one replicates procedures in an effort to reproduce effects" (pp. 303–304). However, when most researchers report a "failure to replicate," they mean that the results of the replication did not match those obtained in the earlier research (e.g., Ecott, Foate, Taylor, & Critchfield, 1999; Friedling & O'Leary, 1979).
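The counting convention used in the example above (each subject is an intact experiment, so a study run identically with n subjects yields n minus 1 intersubject replications) can be stated explicitly. A trivial sketch with a hypothetical function name:

```python
def intersubject_replications(n_subjects):
    """Each subject is treated as an intact experiment, so every subject
    after the first constitutes one direct intersubject replication."""
    return max(n_subjects - 1, 0)

# Six subjects run under identical procedures, as in the example above.
print(intersubject_replications(6))  # → 5
```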

Systematic Replication

The direct replication of experiments demonstrates the reliability of a functional relation, but the generality of that finding to other conditions can be established only through repeated experimentation in which the conditions of interest are purposefully and systematically varied. In a systematic replication the researcher purposefully varies one or more aspects of an earlier experiment. When a systematic replication successfully reproduces the results of previous research, it not only demonstrates the reliability of the earlier findings but also adds to the external validity of those findings by showing that the same effect can be obtained under different conditions. In a systematic replication, any aspect of a previous experiment can be altered: subjects, setting, administration of the independent variable, target behaviors.

Although systematic replication offers greater potential rewards than direct replication does because it can provide new knowledge about the variables under investigation, it entails some risk. Sidman (1960) described systematic replication as a gamble, but one well worth taking.

If systematic replication fails, the original experiment will still have to be redone, else there is no way of determining whether the failure to replicate stemmed from the introduction of new variables in the second experiment, or whether the control of relevant factors was inadequate in the first experiment.

On the other hand, if systematic replication succeeds, the pay-off is handsome. Not only is the reliability of the original finding increased, but also its generality with respect to other organisms and to other experimental procedures is greatly enhanced. Furthermore, additional data are now available which could not have been obtained by a simple repetition of the first experiment. (pp. 111–112)

Sidman went on to explain that economic husbandry of limited resources must also play an important role in



the scientist's determination of how a research program should proceed. Direct replication of a long and costly experiment can provide data on only the reliability of a functional relation, whereas systematic replication can provide information on the reliability and generality of the phenomena under investigation, as well as new information for additional experimentation.

The external validity of results of groups-comparison research is viewed as an inherent characteristic of a given experiment, as something that can be directly assessed by examining the methods used to conduct the study (e.g., sampling procedures). If that logic is extended to single-subject experiments, then the findings of single-subject experiments cannot be said to have any external validity. But as Birnbrauer (1981) pointed out, external validity is not something a single study has, but rather the product of many studies. External validity can be pursued only through the active process of systematic replication.

Generality is established, or more likely limited, by accumulating studies which are internally valid and by placing the results into a systematic context, i.e., seeking out the principles and parameters that particular procedures appear to be enunciating. The most informative studies ask how can an earlier positive result be repeated in the present circumstances, with the present problem? (p. 122)

Much of the applied behavior analysis literature consists of systematic replications. Indeed, one could argue quite persuasively that almost any applied behavior analysis study is a systematic replication of at least some aspect of an earlier experiment. Even when the authors have not pointed it out, virtually every published experiment reveals significant procedural similarity with previous experiments. However, as we are using the term here, systematic replication refers to concerted and directed efforts to establish and specify the generality of a functional relation. For example, Hamlet, Axelrod, and Kuerschner (1984) found a functional relation between demanded eye contact (e.g., "[Name], turn around") and compliance with adult instructions in two 11-year-old school children. Included in the same published report were the results of six replications conducted by the same researchers over a period of one year with nine students aged 2 to 21 years. Similar results were reproduced in eight of the nine replication subjects. Although some might consider this an example of direct intersubject replication, Hamlet and colleagues' replications were conducted in various settings (i.e., classrooms, homes, institutions) and therefore should be viewed as a series of systematic replications that demonstrated not only the reliability of the results but also considerable generality across subjects of different ages in different settings.

Systematic replications across subjects sometimes reveal different patterns of effects, which the researcher might then study as a function of specific subject characteristics or contextual variables. For example, Hagopian, Fisher, Sullivan, Acquisto, and LeBlanc (1998) reported the results of a series of systematic replications with 21 inpatient cases of functional communication training with and without extinction and punishment.10

10Functional communication training (FCT) is described in the chapter entitled "Functional Behavior Assessment."

Lerman, Iwata, Shore, and DeLeon (1997) found that thinning FR 1 schedules of punishment to intermittent punishment produced different effects on the self-injurious behavior of five adults with profound mental retardation.

Some systematic replications are attempts to reproduce the results reported by another researcher in a slightly different situation or context. For example, Saigh and Umar (1983) successfully reproduced in a Sudanese classroom the positive results originally reported with the Good Behavior Game in an American classroom (Barrish, Saunders, & Wolf, 1969). Saigh and Umar reported that a "considerable degree of support for the cross-cultural utility of the game was established" (p. 343).

Researchers sometimes report multiple experiments, with each experiment serving as a systematic replication investigating the variables influencing a given functional relation. For example, Fisher and colleagues (1993) conducted four studies designed to explore the effectiveness of functional communication training (FCT) with and without extinction and punishment.

Systematic replication is evident when a research team pursues a consistent line of related studies over time. Examples of this approach to replication can be found in Van Houten and colleagues' studies investigating variables affecting driver behavior and pedestrian safety (e.g., Huybers, Van Houten, & Malenfant, 2004; Van Houten & Nau, 1981, 1983; Van Houten, Nau, & Marini, 1980; Van Houten & Malenfant, 2004; Van Houten, Malenfant, & Rolider, 1985; Van Houten & Retting, 2001); Neef, Markel, and colleagues' experiments on the impulsivity of students with attention-deficit/hyperactivity disorder (ADHD) (e.g., Bicard & Neef, 2002; Ferreri, Neef, & Wait, 2006; Neef, Bicard, & Endo, 2001; Neef, Bicard, Endo, Coury, & Aman, 2005; Neef, Markel et al., 2005); and Miltenberger and colleagues' line of research on teaching safety skills to children (e.g., Himle, Miltenberger, Flessner, & Gatheridge, 2004; Himle, Miltenberger, Gatheridge, & Flessner, 2004; Johnson, Miltenberger et al., 2005; Johnson, Miltenberger et al., 2006; Miltenberger et al., 2004; Miltenberger et al., 2005).



In many instances, the systematic replications necessary to explore and extend a significant line of research require the independent efforts of investigators at different sites who are aware of, and build on, one another's work. When independent teams of researchers at different geographical locations report similar findings, the net result is a body of knowledge with significant scientific integrity and technological value. This collective effort speeds and enhances the refinement and rigorous testing of interventions that is necessary to the development and refinement of evidence-based practices (Horner et al., 2005; Peters & Heron, 1993). One such example of independent research teams at various sites reporting systematic replications is a growing body of studies exploring the effects of response cards on students' academic engagement, learning, and deportment during group instruction. Investigators have reported a similar pattern of results (increased participation during instruction, improved retention of lesson content, and/or reductions in off-task and disruptive behaviors) with response cards across a wide range of students (general education students, special education students, and ESL learners), curriculum content (e.g., math, science, social studies, spelling), and instructional settings (e.g., elementary, middle, secondary, and college classrooms) (e.g., Armendariz & Umbreit, 1999; Cavanaugh, Heward, & Donelson, 1996; Christle & Schuster, 2003; Davis & O'Neill, 2004; Gardner, Heward, & Grossi, 1994; Kellum, Carr, & Dozier, 2001; Lambert, Cartledge, Lo, & Heward, 2006; Marmolejo, Wilder, & Bradley, 2004).

Evaluating Applied Behavior Analysis Research

A list of all the expectations and characteristics of exemplary research in applied behavior analysis would be very long. Thus far we have identified a considerable number of requirements for good applied behavior analysis. Our purpose now is to summarize those requirements in a sequence of questions one might ask in evaluating the quality of research in applied behavior analysis. Those questions can be organized under four major headings: internal validity, social validity, external validity, and scientific and theoretical significance.

Internal Validity

To determine whether an analysis of behavior has been made, the reader of an applied behavior analysis study must decide whether a functional relation has been demonstrated. This decision requires a close examination of the measurement system, the experimental design,

and the degree to which the researcher controlled potential confounds, as well as a careful visual analysis and interpretation of the data.

Definition and Measurement of the Dependent Variable

The initial step in evaluating internal validity is to decide whether to accept the data as valid and accurate measures of the target behavior over the course of the experiment. Some of the important issues to be considered in this decision are captured by the questions shown in Figure 8.

Graphic Display

If the data are accepted as a valid and accurate representation of the dependent variable over the course of the experiment, the reader should next assess the extent of stability of the target behavior during each phase of the study. Before evaluating the stability of the data paths, however, the reader should examine the graphic display for any sources of distortion (e.g., scaling of axes, distortions of time on the horizontal axis). The researcher or consumer who suspects that any element of the graph may encourage interpretations unwarranted by the data should replot the data using a new set of appropriately scaled axes. In an assessment of the stability of the dependent variable within the different phases of an experiment, the length of the phase or condition must be considered as well as the presence of trends in the data path. The reader should ask whether the conditions in effect during each phase were conducive to practice effects. If so, were these effects allowed to play themselves out before experimental variables were manipulated?

Meaningfulness of Baseline Conditions

The representativeness or fairness of the baseline conditions as the basis for evaluating subsequent performance in the presence of the independent variable should be assessed. In other words, were the baseline conditions meaningful in relation to the target behavior, setting, and research questions addressed by the experiment? For example, consider two experiments by Miller, Hall, and Heward (1995) that evaluated the effects of two procedures for conducting 1-minute time trials during a daily 10-minute practice session on the rate and accuracy with which students answered math problems. Throughout all conditions and phases of both experiments, the students were instructed to answer as many problems as they could



Figure 8 Questions that should be asked when evaluating the definition and measurement of the dependent variable in an applied behavior analysis study.

• Was the dependent variable precisely, completely, and unambiguously defined?• Were examples and nonexamples of the target behavior provided, if doing so would

enhance clarity?• Were the most relevant, measurable dimensions of the target behavior specified (e.g.,

rate, duration)?• Were important concomitant behaviors also measured?• Were the observation and recording procedures appropriate for the target behavior?• Did the measurement provide valid (i.e., meaningful) data for the problem or research

question addressed?• Was the measurement scale broad and sensitive enough to capture the socially

significant changes in the behavior?• Have the authors provided sufficient information on the training and calibration of

observers?• What procedures were used to assess and ensure the accuracy of measurement?• Were interobserver agreement (IOA) assessments reported at the levels at which the

study’s results are presented (e.g., by subject and experimental condition)?• Were observation sessions scheduled during the times, activities, and places most

relevant to the problem or research question?• Did observations occur often enough and closely enough in time to provide a

convincing estimate of behavior change over time?• Were any contingencies operating in the study that may have influenced the observers’

behavior?• Was there any expectation or indication that the dependent variable may have been

reactive to the measurement system? If so, were procedures taken to assess and/orcontrol reactivity?

• Were appropriate accuracy and/or reliability assessments of the data reported?
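As one concrete example of the IOA question above, a common way to quantify agreement between two independent observers is total count IOA: the smaller of the two session totals divided by the larger, expressed as a percentage. The sketch below (Python, for illustration only; the counts are hypothetical) shows the arithmetic.

```python
# Illustrative sketch: total count interobserver agreement (IOA).
# IOA = (smaller count / larger count) * 100
def total_count_ioa(count_a, count_b):
    if count_a == count_b:
        return 100.0
    return min(count_a, count_b) / max(count_a, count_b) * 100

# Two observers recorded 45 and 50 occurrences in the same session.
print(round(total_count_ioa(45, 50), 1))  # 90.0
```

Note that, per the figure, such percentages are most informative when reported at the levels at which results are presented, e.g., separately by subject and experimental condition.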

and they received feedback on their performance. Throughout both experiments, students’ worksheets were marked and scored as follows:

Experimenters marked each student’s worksheets by putting an ‘X’ next to incorrect answers. The number of correct answers over the total number of problems attempted was marked at the top of the first worksheet along with a positive comment to encourage the students to keep trying. If a student’s score was lower than his or her highest previous score, comments such as “Keep trying, Sally!” “Work faster!” or “Keep working on it!” were written. Whenever a student achieved his or her highest score to date, comments such as “Great job, Jimmy! This is your best ever!” were written on the packet. In the event a student equaled her highest previous score, “You tied your best score!” was written on the packet.

At the beginning of each session, scored and marked worksheets from the prior day’s session were returned to the students. Each session during the 10-minute continuous work period condition that functioned as the baseline condition began with the classroom teachers saying to the students: “I want you to work hard and try to do your best. Answer as many problems as you can. Don’t worry if you do not answer all of the problems. There are more problems in the packet than anyone can do. Just try your best” (p. 326).

The initial baseline (A) phase was followed by the two time-trial conditions (B and C) in an A-B-A-B-C-B-C design. Results for students in both classrooms showed a clear functional relation between both of the time-trial conditions and increased correct rate and accuracy over the baseline condition. However, if the classroom teachers had not instructed and reminded the students to do their best and to answer as many problems as they could prior to each baseline session, and if the students had not received feedback on their worksheets, the improved performance during the time-trial conditions would have been suspect. Even if a clear functional relation had been demonstrated against such baseline conditions, applied researchers and consumers could, and should, question the importance of such results. Maybe the children simply did not know they were expected to work fast. Perhaps the students would have solved problems in the baseline condition at the same high rates that they did in the time-trial conditions if they had been told to “go fast” and received feedback on their performance, praise for their improvements, and encouragement to answer more


                                            Functional Relations Exist in Nature
                                            Yes                               No
Researcher Concludes a         Yes          Correct conclusion                Type I error (false positive)
Functional Relation Exists     No           Type II error (false negative)    Correct conclusion

Figure 9 Ideally, an experimental design and methods of data analysis help a researcher conclude correctly that a functional relation between the independent and dependent variables exists (or does not exist) when in fact such a relation does (or does not) exist in nature. Concluding that the results of an experiment reveal a functional relation when no such relation exists in nature is a Type I error. Conversely, concluding that the independent variable did not have an effect on the dependent variable when such a relation did occur is a Type II error.

problems. By including the daily instruction to work hard and answer as many problems as they could and returning the worksheets to the students as components of the baseline condition, Miller and colleagues obtained meaningful data paths during baseline against which to test and compare the effects of the two time-trial conditions.

Experimental Design

The experimental design should be examined to determine the type of experimental reasoning it affords. What elements of the design enable prediction, verification, and replication? Is the design appropriate for the research questions addressed by the study? Does the design effectively control for confounding variables? Does the design provide the basis for component and/or parametric analyses if such questions are warranted?

Visual Analysis and Interpretation

Although various statistical methods for evaluating behavioral data and determining the existence of functional relations in single-subject designs have been recommended (e.g., Gentile, Rhoden, & Klein, 1972; Hartmann, 1974; Hartmann et al., 1980; Jones, Vaught, & Weinrott, 1977; Pfadt & Wheeler, 1995; Sideridis & Greenwood, 1996), visual inspection remains the most commonly used, and we believe the most appropriate, method for interpreting data in applied behavior analysis. We will briefly present four factors that favor visual analysis over tests of statistical significance in applied behavior analysis.

First, applied behavior analysts have little interest in knowing that a behavior change is a statistically significant outcome of an intervention. Applied behavior analysts are concerned with producing socially significant behavior changes: “If a problem has been solved, you can see that; if you must test for statistical significance, you do not have a solution” (Baer, 1977a, p. 171).

Second, visual analysis is well suited for identifying variables that produce strong, large, and reliable effects, which contribute to an effective, robust technology of behavior change. On the other hand, powerful tests of statistical analysis can detect the slightest possible correlation between the independent and dependent variables, which may lead to the inclusion of weak, unreliable variables in the technology.

Two types of errors are possible when determining experimental effect (see Figure 9). A Type I error (also called a false positive) is made when the researcher concludes that the independent variable had an effect on the dependent variable, when in truth no such relation exists in nature. A Type II error (also called a false negative) is the opposite of a Type I error. In this case, the researcher concludes that an independent variable did not have an effect on the dependent variable, when in truth it did. Ideally, a researcher using well-reasoned experimental tactics coupled with a sound experimental design and buttressed by appropriate methods of data analysis will conclude correctly that a functional relation between the independent and dependent variables exists (or does not exist).

Baer (1977b) pointed out that the behavior analyst’s reliance on visual inspection to determine experimental effects results in a low incidence of Type I errors but increases the commission of Type II errors. The researcher who relies on tests of statistical significance to determine experimental effects makes many more Type I errors than the behavior analyst, but misses few, if any, variables that might produce some effect.
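The trade-off Baer describes can be made concrete with a toy simulation, which is not any procedure from the text: a stricter criterion for concluding that an effect exists lowers the Type I (false-positive) rate while raising the Type II (false-negative) rate. All function names, effect sizes, and thresholds below are hypothetical illustrations.

```python
import random

random.seed(1)

def detects_effect(effect_size, criterion, n=10, noise=1.0):
    """Conclude an 'effect' if the mean shift across n simulated sessions
    exceeds a decision criterion (a crude stand-in for either a strict
    visual-analysis standard or a lenient significance test)."""
    shift = sum(effect_size + random.gauss(0, noise) for _ in range(n)) / n
    return shift > criterion

def error_rates(criterion, trials=2000):
    # Type I rate: effect concluded when the true effect is zero.
    type1 = sum(detects_effect(0.0, criterion) for _ in range(trials)) / trials
    # Type II rate: a real but weak effect (0.5) is missed.
    type2 = sum(not detects_effect(0.5, criterion) for _ in range(trials)) / trials
    return type1, type2

for criterion in (0.2, 0.5, 0.8):
    t1, t2 = error_rates(criterion)
    print(f"criterion={criterion}: Type I ~{t1:.2f}, Type II ~{t2:.2f}")
```

Raising the criterion plays the role of the visual analyst's conservative standard: few false positives, but many weak variables go undetected, exactly the pattern the quotation below elaborates.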

Scientists who commit relatively many Type 1 errors are bound to memorize very long lists of variables that are supposed to affect diverse behaviors, some predictable portion of which are not variables at all. By contrast, scientists who commit very few Type 1 errors have relatively short lists of variables to remember. Furthermore, and much more important, it is usually only the very robust, uniformly effective variables that will make their list. Those who will risk Type 1 errors more often will uncover a host of weak variables. Unquestionably, they will know more, although some of that more is wrong, and much of it is tricky. . . . Those who keep their probability of Type 2 errors low do not often reject an actually functional variable, relative to those whose Type 2 error probability is higher. Again, unquestionably, the practitioner with the lower probability of Type 2 errors will know more; but again, the nature of that more is seen often in its weakness, inconsistency of function, or its tight specialization. . . . Individual-subject-design practitioners . . . necessarily fall into very low probabilities of Type 1 errors and very high probabilities of Type 2 errors, relative to their group-paradigm colleagues. As a result, they learn about fewer variables, but these variables are typically more powerful, general, dependable, and—very important—sometimes actionable. These are exactly the variables on which a technology of behavior might be built. (Baer, 1977b, pp. 170–171)

A third problem with using statistical methods to determine the existence of functional relations in behavioral data occurs with borderline data sets containing significant amounts of variability. Such data sets should motivate a researcher to engage in additional experimentation in an effort to achieve more consistent experimental control and to discover the factors causing the variability. The researcher who forgoes the additional experimentation in favor of accepting the results of a test of statistical significance as evidence of a functional relation risks leaving important findings in the realm of the unknown.

The situation where a significance test might seem helpful is typically one involving sufficient uncontrolled variability in the dependent variable that neither the experimenter nor his readers can be sure that there is an interpretable relationship. This is evidence that the relevant behavior is not under good experimental control, a situation calling for more effective experimentation, not a more complex judgmental aid. (Michael, 1974, p. 650)

Fourth, statistical tests of significance can be applied only to data sets that conform to predetermined criteria. If statistical methods for determining experimental effects were to become highly valued in applied behavior analysis, researchers might begin to design experiments so that such tests could be computed. The resultant loss of flexibility in experimental design would be counterproductive to the continued development of behavior analysis (Johnston & Pennypacker, 1993b; Michael, 1974).

Social Validity

The reader of a published study in applied behavior analysis should judge the social significance of the target behavior, the appropriateness of the procedures, and the social importance of the outcomes (Wolf, 1978).

Many considerations should guide the applied behavior analyst’s selection of target behaviors. The social validity of the dependent variable should be assessed in light of those factors. Ultimately, all of the issues and considerations relative to target behavior selection point to one question: Will an increase (or decrease) in the measured dimension of this behavior improve the person’s life directly or indirectly?

The independent variable should be evaluated not only in terms of its effects on the dependent variable, but also in terms of its social acceptability, complexity, practicality, and cost. Regardless of their effectiveness, treatments that are perceived by practitioners, parents, and/or clients as unacceptable or undesirable for whatever reason are unlikely to be used. Consequently, such treatments will never have the chance to contribute to a technology of behavior change. The same can be said of independent variables that are extremely complex and thus difficult to learn, teach, and apply. Similarly, treatment procedures that require large amounts of time and/or money to implement have less social validity than do procedures that can be applied quickly and/or inexpensively.

Even though behavior change is clearly visible on a graphic display, it may not represent a socially valid improvement for the participant and/or significant others in his environment. In evaluating the results of an applied behavior analysis study, the reader should ask questions such as these: Is the participant (or significant others in the participant’s life) better off now that the behavior has changed? Will this new level of performance result in increased reinforcement (or decreased punishment) for the subject now or in the future? (Hawkins, 1984). In some instances it is relevant to ask whether the subject (or significant others) believes that her behavior has improved (Wolf, 1978).

Maintenance and Generalization of Behavior Change

Improvements in behavior are most beneficial when they are long-lasting, appear in other appropriate environments, and spill over to other related behaviors. Producing these kinds of effects is a major goal of applied behavior analysis. When evaluating applied behavior analysis research, consumers should consider the maintenance and generalization of behavior change in their evaluation of a study. An impressive behavior change that does not last or is limited to a specialized training setting may not be socially significant. Did the researchers report the results of assessment of maintenance and generalization through follow-up observations and measurement in nontraining environments? Better yet, if maintenance and/or generalization were not evident in such follow-up observations, did the experimenters modify their design and implement procedures in an attempt to produce and analyze the occurrence of maintenance and/or generalization? Additionally, the reader should ask whether response generalization—changes in functionally similar but untreated behaviors concomitant with changes in the target behavior(s)—is an appropriate concern in a given study. If so, did the experimenters attempt to assess, analyze, or discuss this phenomenon?

External Validity

As discussed earlier in this chapter, the generality of the findings of a given experiment to other subjects, settings, and behaviors cannot be assessed solely on inherent aspects of the study itself. The generality of a behavior–environment relation can be established only through the active process of systematic replication. Therefore, the reader of an applied behavior analysis study should compare the study’s results with those of other published research with which it shares relevant features. The authors of a published report identify in the paper’s introduction the experiments that they believe are most relevant. To make an effective judgment of the external validity of the data from a given study, the reader must often locate previous studies in the literature and compare the results of those studies with those of the current experiment.

Even though external validity should not be considered a characteristic of a study per se (Birnbrauer, 1981), various features of an experiment suggest to the reader an expected, or likely, level of generality for the results. For example, an experiment that demonstrated a functional relation of similar form and degree in six subjects of different ages, backgrounds, and current repertoires would indicate a higher probability of generality to other subjects than would an identical study demonstrating the same results across six subjects of the same age, background, and current repertoire. Similarly, if the experiment was conducted in various settings and a number of different people administered the independent variable, additional confidence in the external validity of the results may be warranted.

11 It is important to remember that although some research in applied behavior analysis can rightfully be criticized as superficial because it adds little to our conceptual understanding of behavior, studies in which meaningful target behaviors are improved to a socially valid level by the application of a socially valid treatment variable (whether a package or not) are never superficial to the participants and the significant others who share their environment.

Theoretical Significance and Conceptual Sense

A published experiment should also be evaluated in terms of its scientific merit. It is possible for a study to clearly demonstrate a functional relation between the independent variable and a socially important target behavior—and thus be judged significant from an applied perspective—yet contribute little to the advancement of the field.11 It is possible to reliably reproduce an important behavior change while at the same time not fully understand which variables are responsible for the observed functional relation. Sidman (1960) differentiated this kind of simple reliability from “knowledgeable reproducibility,” a more complete level of analysis in which all of the important factors have been identified and are controlled.

The Need for More Thorough Analyses of Socially Important Behavior

Even though no behavior analyst would argue the necessity of systematic replication and the central role it plays in the development of an effective technology of behavior change, and even though the literature provides evidence that at least a loose form of systematic replication is commonly practiced, a more critical examination of the literature suggests the need for more thorough analyses of the functional relations under study. Numerous authors have discussed the importance of focusing on the analytic side of applied behavior analysis as much as the applied side (e.g., Baer, 1991; Birnbrauer, 1979, 1981; Deitz, 1982; Hayes, 1991; Iwata, 1991; Michael, 1980; Morris, 1991; Johnston, 1991; Pennypacker, 1981). After examining the majority of the experimental articles published in the first 10 volumes of the Journal of Applied Behavior Analysis (1968 to 1977), Hayes, Rincover, and Solnick (1980) concluded that a technical drift had occurred in the field away from conceptual analyses and toward an emphasis on client cure. They warned of a likely loss of scientific understanding as a result of focusing purely on the technical aspects of improving behavior in applied settings, and they recommended an increased effort to perform more thorough analyses of behavior.


The importance of component analyses, parametric analyses, and other more sophisticated analytic attempts are often to be found less in “control” (in an immediately applied sense) and more in “understanding” (in a scientific sense). One may easily control, say, aggressive behavior through the use of punishment without having contributed significantly to an understanding of aggression. . . . For example, if one has a package program that is effective, there may be little obvious value in doing a component analysis. But these more complicated analyses may increase our knowledge of the actual functional variables and subsequently increase our ability to generate more efficient and general behavioral programs. Perhaps, we have gone too far in our attempt to be immediately applied at the expense of being ultimately more effective, in failing to encourage more analogue and analytical studies that have treatment implications. (Hayes, Rincover, & Solnick, 1980, pp. 282–283)

Baer, Wolf, and Risley (1987), writing in the 20th anniversary issue of the Journal of Applied Behavior Analysis, emphasized the need to shift from demonstrations of behavior changes—as convincing as they might be—to a more complete analysis and conceptual understanding of the principles that underlie the successful demonstrations.

Twenty years ago, analytic meant a convincing experimental design, and conceptual meant relevance to a comprehensive theory about behavior. Now, applied behavior analysis is considered an analytic discipline only when it demonstrates convincingly how to make specified behavior changes and when its behavior-change methods make systematic, conceptual sense. In the past 20 years, we have sometimes demonstrated convincingly that we had changed behavior as specified, but by methods that did not make systematic, conceptual sense—it was not clear why those methods had worked. Such cases let us see that we were sometimes convincingly applied and behavioral, yet even so, not sufficiently analytic. (p. 318)

We agree with the need for more sophisticated, thorough analyses of the variables controlling socially important behavior. Fortunately, examination of the recent literature reveals numerous examples of the component and parametric analyses that are necessary steps to a more complete understanding of behavior—an understanding that is prerequisite to the development of a thoroughly effective technology of behavior change. Several of the studies cited earlier in this chapter as examples of systematic replication incorporated component and parametric analyses.

The extent of a phenomenon’s generality is known only when all of the necessary and sufficient conditions for its reproducibility have been specified. Only when all of the variables influencing a functional relation have been identified and accounted for can an analysis be considered complete. Even then, the notion of a complete analysis is misleading: “Further dissection or elaboration of either variable in a functional relation inevitably reveals fresh variability, and analysis proceeds anew. . . . the analysis of behavior can never be complete” (Pennypacker, 1981, p. 159).

Evaluation of scientific significance takes into consideration such things as the authors’ technological description of the experiment as well as their interpretation and discussion of the results. Are the procedures described in sufficient detail so that at least the unique aspects of the study can be replicated?12

Readers should consider the level of conceptual integrity displayed in an experimental report. Does the literature review reveal a careful integration of the study with previous research? Does the literature review provide sufficient justification for the study’s research questions? Are the authors’ conclusions based on the data obtained in the study? Have the authors respected the difference between basic principles of behavior and behavior change tactics? Do the authors speculate beyond the data without making it clear that they are doing so? Do the authors suggest directions for additional research to further analyze the problem studied? Is the study important for reasons other than the results actually obtained? For example, an experiment that demonstrates a new measurement technique, investigates a new dependent or independent variable, or incorporates a novel tactic for controlling a confounding variable can contribute to the scientific advancement of behavior analysis, even though the study failed to achieve experimental control or produce socially significant behavior change.

Numerous criteria and considerations are involved in evaluating the “goodness” of a published study in applied behavior analysis. Although each criterion is important on one level or another, it is unlikely that any experiment will meet all of the criteria. And, in fact, it is unnecessary for an experiment to do so to be considered good. Nevertheless, incorporating as many of these considerations as possible into a study enhances its social significance and scientific value as an applied behavior analysis.


12 Ideally, published procedural descriptions should include sufficient detail to allow an experienced investigator to replicate the experiment. However, space limitations of most journals often prohibit such detail. The common and recommended practice in replicating published studies is to request complete experimental protocols from the original investigator(s).


Summary

Importance of the Individual Subject in Behavioral Research

1. The focus on the behavior of individual subjects has enabled applied behavior analysts to discover and refine effective interventions for a wide range of socially significant behavior.

2. Knowing that the average performance of a group of subjects changed may not reveal anything about the performance of individual subjects.

3. To be most useful, a treatment must be understood at the level at which people come into contact with it and are affected by it: the individual level.

4. When repeated measurement reveals significant variability, the researcher should seek to identify and control the factors responsible for it.

5. Attempting to cancel out variability through statistical manipulation neither eliminates its presence in the data nor controls the variables responsible for it.

6. The researcher who attributes the effects of unknown or uncontrolled variables to chance is unlikely to identify and analyze important variables.

7. To control the effects of any variable, a researcher must either hold it constant throughout the experiment or manipulate it as an independent variable.

8. A great strength of within-subject experimental designs is the convincing demonstration of a functional relation made possible by replication within the design itself.

9. The overall performance of a group is socially significant in many situations.

10. When group results do not represent individual performances, researchers should supplement group data with individual results.

11. When the behavior analyst cannot control access to the experimental setting or identify individual subjects, the dependent variable must consist of the responses made by individuals who enter the experimental setting.

Importance of Flexibility in Experimental Design

12. A good experimental design is any sequence and type of independent variable manipulations that produces data that effectively and convincingly address the research question(s).

13. To investigate the research question(s) of interest, an experimenter must often build an experimental design that employs a combination of analytic tactics.

14. The most effective experimental designs use ongoing evaluation of data from individual subjects as the basis for employing the three elements of baseline logic—prediction, verification, and replication.

Internal Validity: Controlling Potential Sources of Confounding in Experimental Design

15. Experiments that demonstrate a clear functional relation between the independent variable and the target behavior are said to have a high degree of internal validity.

16. The strength of an experimental design is determined by the extent to which it (a) demonstrates a reliable effect and (b) eliminates or reduces the possibility that factors other than the independent variable produced the behavior change.

17. The phrase control of behavior is technically inaccurate because the experimenter controls only some aspect of the subject’s environment.

18. A confounding variable is an uncontrolled factor known or suspected to have exerted influence on the dependent variable.

19. Steady state responding is the primary means by which applied behavior analysts assess the degree of experimental control.

20. Confounding variables can be viewed as related primarily to one of four elements of an experiment: subject, setting, measurement of the dependent variable, and independent variable.

21. A placebo control is designed to separate any effects that may be produced by a subject’s expectations of improvement as a result of receiving treatment from the effects actually produced by the treatment.

22. With a double-blind control procedure neither the subject(s) nor the observers know when the independent variable is present or absent.

23. Treatment integrity and procedural fidelity refer to the extent to which the independent variable is implemented or carried out as planned.

24. Low treatment integrity invites a major source of confounding into an experiment, making it difficult, if not impossible, to interpret the results with confidence.

25. One threat to treatment integrity, treatment drift, occurs when application of the independent variable during later phases of an experiment differs from the way the treatment was applied at the outset of the study.

26. Achieving a high level of treatment integrity begins with an operational definition of treatment procedures.

27. Treatments that are simple, precise, and brief, and that require relatively little effort, are more likely to be delivered with consistency than those that are not.


28. Researchers should not assume that a person’s general competence or experience in the experimental setting, or providing the intervention agent with detailed written instructions or a script, will ensure a high degree of treatment integrity.

29. Treatment integrity (or procedural fidelity) data measure the extent to which the actual implementation of experimental procedures matches their descriptions in the method section of a research report.

Social Validity: Assessing the Applied Value of Behavior Changes and the Treatments That Accomplish Them

30. The social validity of an applied behavior analysis can be assessed in three ways: the social significance of the target behavior, the appropriateness of the procedures, and the social importance of the results.

31. Social validity assessments are most often accomplished by seeking consumer opinions.

32. Socially valid goals can be determined empirically by assessing the performance of individuals judged to be highly competent, and experimentally manipulating different levels of performance to determine socially valid outcomes.

33. Several scales and questionnaires for obtaining consumers’ opinions of the acceptability of behavioral interventions have been developed.

34. Methods for assessing the social validity of outcomes include (a) comparing participants’ performance to the performance of a normative sample, (b) using a standardized assessment instrument, (c) asking consumers to rate the social validity of participants’ performance, (d) asking experts to evaluate participants’ performance, and (e) testing participants’ newly learned level of performance in the natural environment.

External Validity: Replicating Experiments to Determine the Generality of Research Findings

35. External validity refers to the degree to which a functional relation found reliable and socially valid in a given experiment will hold under different conditions.

36. The proper inferences about the results of a groups-design study are from the sample to the population, not from the sample to the individual.

37. Because a groups-design experiment does not demonstrate a functional relation between the behavior of any subject and some aspect of his or her environment, the external validity of the results is moot.

38. Although groups-comparison designs and tests of statistical significance are necessary and effective tools for certain types of research questions, they have contributed little to an effective technology of behavior change.

39. The generality of research findings in applied behavioranalysis is assessed, established, and specified by the repli-cation of experiments.

40. In a direct replication the researcher makes every effort to duplicate exactly the conditions of an earlier experiment.

41. In a systematic replication the researcher purposefully varies one or more aspects of an earlier experiment.

42. When a systematic replication successfully reproduces the results of previous research, it not only demonstrates the reliability of the earlier findings but also adds to the external validity of the earlier findings by showing that the same effect can be obtained under different conditions.

43. Systematic replications occur in both planned and unplanned ways through the work of many experimenters in a given area, and they result in a body of knowledge with significant scientific integrity and technological value.

Evaluating Applied Behavior Analysis Research

44. The quality and value of an applied behavior analysis study may be evaluated by seeking answers to a sequence of questions related to the internal validity, social validity, external validity, and the scientific and theoretical significance of the study.

45. A Type I error occurs when a researcher concludes that the independent variable had an effect on the dependent variable when it did not. A Type II error occurs when a researcher concludes that the independent variable did not have an effect on the dependent variable when it did.

46. Visual analysis effectively identifies variables that produce strong, large, and reliable effects, which contribute to an effective and robust technology of behavior change. Statistical analysis detects the slightest possible correlations between the independent and dependent variables, which may lead to the identification and inclusion of weak and unreliable variables in the technology.

47. A study can demonstrate a functional relation between the independent variable and a socially important target behavior, and thus be significant from an applied perspective, yet contribute little to the advancement of the field.

48. Only when all of the variables influencing a functional relation have been identified and accounted for can an analysis be considered complete.

49. When evaluating the scientific significance of a research report, readers should consider the technological description of the experiment, the interpretation and discussion of the results, and the level of conceptual sense and integrity.


Positive Reinforcement

Key Terms

automatic reinforcement
conditioned reinforcer
generalized conditioned reinforcer
positive reinforcement
positive reinforcer
Premack principle
reinforcer assessment
response-deprivation hypothesis
stimulus preference assessment
unconditioned reinforcer

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List©, Third Edition

Content Area 3: Principles, Processes, and Concepts

3-3 Define and provide examples of positive (and negative) reinforcement.

3-4 Define and provide examples of conditioned and unconditioned reinforcement.

3-14 Describe and provide examples of the operant conditioning paradigm.

3-19 Define and provide examples of contingency-shaped and rule-governed behavior and distinguish between them by providing examples.

Content Area 9: Behavior Change Procedures

9-2 Use positive (and negative) reinforcement:

(a) Identify and use reinforcers.

(b) Use appropriate parameters and schedules of reinforcement.

(c) Use response-deprivation procedures (e.g., the Premack principle).

© 2006 The Behavior Analyst Certification Board, Inc.® (BACB®), all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

From Chapter 11 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


In looking back, it seems to me that the most important thing I learned in graduate school was from another student, Burrhus Frederic Skinner (I called him Burrhus, others called him Fred). This man had a box, within which was a smaller box, within which he would place a hungry laboratory rat. When the animal, in its explorations, would depress a lever that projected from one wall, a pellet of food would be discharged into a tray beneath the lever. Under such conditions, the rat would learn, in a matter of minutes, sometimes seconds, how to get its meal by depression of the lever. It would even keep on pressing, sometimes at a rapid rate, when pellets were delivered only now and then; and if the food supply was cut off entirely, the animal would still keep working for awhile.

—Fred Keller (1982, p. 7)

Although some people still believe that findings from laboratory research on animal learning are not applicable to human behavior, by the mid-1960s applied researchers had established the significance of positive reinforcement in education and treatment. "It is safe to say that without Skinner's detailed laboratory analyses of reinforcement (Skinner, 1938), there would be no field of 'applied behavior analysis' today, at least not as we know it" (Vollmer & Hackenberg, 2001, p. 241). Positive reinforcement is the most important and most widely applied principle of behavior analysis.

Fittingly, the lead article in the first issue of the Journal of Applied Behavior Analysis reported several experiments showing the effects of positive reinforcement on student behavior (Hall, Lund, & Jackson, 1968). Six elementary students who were disruptive or dawdled frequently participated in this classic study. The dependent variable, study behavior, was defined individually for each student depending on the subject matter being taught, but it generally consisted of the student being seated and oriented toward the appropriate object or person (e.g., looking at course materials or the lecturing teacher) and class participation (e.g., writing the assignment, answering the teacher's question). The independent variable was teacher attention, cued by an observer who held up a small square of colored paper not likely to be noticed by the target student. On this signal, the teacher attended to the child by moving to his desk, making a verbal comment, giving him a pat on the shoulder, or the like.

The effects of contingent teacher attention on the behavior of all six students were striking. Figure 1 shows the results for Robbie, a third-grader chosen to participate because he was "a particularly disruptive student who studied very little" (p. 3). During baseline, Robbie engaged in study behavior for an average of 25% of the observed intervals. The remainder of the time he snapped rubber bands, played with objects in his pocket, talked and laughed with classmates, and played with an empty milk carton from his earlier-served drink. The majority of attention Robbie received during baseline followed these nonstudy behaviors. During baseline, Robbie's teacher often urged him to work, told him to put his milk carton away, and told him to stop bothering his classmates.

Following baseline, the experimenters showed the teacher a graph of Robbie's study behavior, presented the results of previous studies in which contingent adult attention had improved child behavior, and discussed the fundamentals of providing social reinforcement. Hall and colleagues (1968) described the procedure implemented during the two reinforcement phases as follows:

Whenever Robbie had engaged in 1 min of continuous study the observer signaled his teacher. On this cue, the teacher approached Robbie, saying, "Very good work Robbie," "I see you are studying," or some similar remark. She discontinued giving attention for nonstudy behaviors including those which were disruptive to the class. (p. 4)

[Figure 1: Percentage of intervals of study behavior by a third-grade student (Robbie) during baseline and reinforcement conditions (Baseline, Reinforcement 1, Reversal, Reinforcement 2, Postchecks; sessions 1–35 on the horizontal axis, 0–100% study behavior on the vertical axis). The arrow to the first postcheck data point shows when cueing the teacher to provide attention was discontinued. From "Effects of Teacher Attention on Study Behavior" by R. V. Hall, D. Lund, and D. Jackson, 1968, Journal of Applied Behavior Analysis, 1, p. 3. Copyright 1968 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.]

During Reinforcement 1, Robbie's study behavior increased to a mean of 71%. When a reversal to baseline conditions was introduced, his study behavior decreased to a mean of 50%; but when Robbie's teacher again provided attention for study behavior (Reinforcement 2), his study behavior recovered and stabilized at a level ranging between 70 and 80% of the observed intervals. Results of follow-up observations over a 14-week period after signaling the teacher had been discontinued showed that Robbie's study behavior had maintained at 79%. The teacher reported positive behavior changes associated with Robbie's increased study behavior. By the final week of Reinforcement 2, Robbie completed his spelling assignments more consistently, his disruptive behavior had diminished, and he continued to study while drinking his milk and did not play with the carton afterwards.

The intervention used by Robbie's teacher to help him be more successful in the classroom was based on the principle of positive reinforcement. In this chapter we examine the definition and nature of positive reinforcement, describe methods for identifying potential reinforcers and assessing their effects, outline experimental control techniques for verifying whether a positive reinforcement contingency is responsible for increased responding, and offer guidelines for using positive reinforcement effectively.

Definition and Nature of Positive Reinforcement

The principle of reinforcement is deceptively simple. "The basic operant functional relation for reinforcement is the following: When a type of behavior (R) is followed by reinforcement (SR) there will be an increased future frequency of that type of behavior" (Michael, 2004, p. 30).1 However, as Michael and other authors have pointed out, three qualifications must be considered regarding the conditions under which the effects of reinforcement will occur. These qualifications are (a) the delay between the response and onset of the consequence, (b) the stimulus conditions in effect when the response was emitted, and (c) the strength of the current motivation with respect to the consequence. In this section we examine these qualifications and several other concepts requisite to acquiring a full understanding of how reinforcement "works."

1Various terms, such as strengthening the behavior or increasing the likelihood of future responding, are sometimes used by behavior analysts to describe the basic effect of reinforcement. Although such terms appear occasionally in this book, recognizing Michael's (1995) concern that use of such terms "encourages a language of intervening variables, or an implied reference to something other than an observable aspect of behavior" (p. 274), we most often use increased future frequency to refer to the primary effect of reinforcement.

2When the consequence that produced the increase in responding is best described as the termination or withdrawal of an already present stimulus, negative reinforcement has occurred. The fundamental nature and qualifying conditions for positive reinforcement and negative reinforcement are the same. Negative reinforcement is examined in detail in the chapter entitled "Negative Reinforcement."

Operation and Defining Effect of Positive Reinforcement

Positive reinforcement has occurred when a response is followed immediately by the presentation of a stimulus and, as a result, similar responses occur more frequently in the future. Figure 2 illustrates the two-term contingency (a response followed closely in time by the presentation of a stimulus) and the effect on future responding that define positive reinforcement. This two-term contingency is the fundamental building block for the selection of all operant behavior (Glenn, Ellis, & Greenspoon, 1992).

The stimulus presented as a consequence and responsible for the subsequent increase in responding is called a positive reinforcer, or, more simply, a reinforcer. Teacher attention in the form of positive comments was the reinforcer that increased Robbie's study behavior. Cold water flowing into a cup and the sight of a colorful bird are the reinforcers for the two behaviors shown in Figure 2.

It is important to remember that a reinforcer does not (and cannot) affect the response that it follows. Reinforcement only increases the frequency with which similar responses are emitted in the future.

It is not correct to say that operant reinforcement "strengthens the response which precedes it." The response has already occurred and cannot be changed. What is changed is the future probability of responses in the same class. It is the operant as a class of behavior, rather than the response as a particular instance, which is conditioned. (Skinner, 1953, p. 87)
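Skinner's point can be pictured with a small simulation. The sketch below uses an assumed toy model, not anything from the text: response classes in a repertoire are represented as weights, responses are emitted in proportion to those weights, and reinforcement increments the weight of the class just emitted. All names and numbers are invented for illustration.

```python
import random

random.seed(0)

# Toy model (an assumption for illustration): each response class in a
# repertoire has a weight, and a response is emitted in proportion to its
# weight. Reinforcement increments the weight of the emitted class, raising
# the future frequency of similar responses -- it does not act on the past
# response itself.

weights = {"press lever": 1.0, "groom": 1.0, "explore": 1.0}

def emit(weights):
    """Sample one response class in proportion to its current weight."""
    classes = list(weights)
    return random.choices(classes, weights=[weights[c] for c in classes])[0]

def reinforce(weights, response_class, increment=1.0):
    """The defining effect: increase the future frequency of the class."""
    weights[response_class] += increment

for _ in range(200):
    response = emit(weights)
    if response == "press lever":   # the two-term contingency: R -> S(R+)
        reinforce(weights, response)

print(weights)
```

Because only "press lever" is followed by the modeled consequence, its weight grows while the other classes stay at their starting values: it is the class of behavior, not any particular past response, that is selected.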

Skinner (1966) used rate of responding as the fundamental datum for his research on reinforcement. To strengthen an operant is to make it occur more frequently.2

However, rate (or frequency) is not the only dimension of behavior selected, shaped, and maintained by reinforcement. Reinforcement can also strengthen the duration, latency, magnitude, and/or topography of behavior. For example, if reinforcement only follows those responses that fall within a range of magnitude (that is, above a minimum force but below a maximum force), and responses outside of that range of magnitudes are not followed by reinforcement, the effect will be a higher frequency of responses within that range. Reinforcement contingent on responses meeting multiple criteria will strengthen a subset of responses meeting those criteria (e.g., responses by a golfer practicing 10-foot putts must fall within a narrow range of force and form to be successful).

[Figure 2: Two-term contingency illustrating positive reinforcement: a response (R) is followed closely in time by a stimulus change (SR+) that results in an increased frequency of similar responses in the future. Examples: holding a cup under the tap and pushing the lever is followed by cold water flowing into the cup; turning the head and looking left is followed by seeing a colorful bird.]

Importance of Immediacy of Reinforcement

Emphasizing the importance of the immediacy of reinforcement is essential. The direct effects of reinforcement involve "temporal relations between behavior and its consequences that are on the order of a few seconds" (Michael, 2004, p. 161). Research with nonhumans suggests that at one end of the continuum as much as 30 seconds can elapse without critical loss of effect (e.g., Byrne, LeSage, & Poling, 1997; Critchfield & Lattal, 1993; Wilkenfeld, Nickel, Blakely, & Poling, 1992). However, a response-to-reinforcement delay of 1 second will be less effective than a 0-second delay. This is because behaviors other than the target behavior occur during the delay; the behavior temporally closest to the presentation of the reinforcer will be strengthened by its presentation. As Sidman (1960) described, "If the reinforcer does not immediately follow the response that was required for its production, then it will follow some other behavior. Its major effect will then be upon the behavior that bears, adventitiously to be sure, the closest prior temporal relationship to the reinforcement" (p. 371).

Malott and Trojan Suarez (2004) discussed the importance of immediacy as follows:

If the reinforcer is to reinforce a particular response, it must immediately follow that response. But how immediate is immediate? We don't have any experimental data on this one for human beings, but the research on nonverbal animals suggests that a minute or two pushes the limit (even 30 seconds is hard). And if you talk to most behavior analysts working with nonverbal children, they'd agree. They'd quit their jobs if they had to wait 60 seconds before delivering each reinforcer to their children. Such a delay is a good way to ensure that no learning would occur, even with people—at least no desirable learning.

So, if you're trying to reinforce a response, don't push that 60-second limit. Push the other end—the 0-second end. The direct effect of reinforcement drops off quickly as you increase the delay, even to 3 or 4 seconds. And even a 1-second delay may reinforce the wrong behavior. If you ask a young child to look at you and deliver the reinforcer 1 second after the response, you're liable to reinforce looking in the wrong direction. So one problem with delayed reinforcement is that it reinforces the wrong response—the one that occurred just before the delivery of the reinforcer. (p. 6)

A common misconception is that delayed consequences can reinforce behavior, even if the consequences occur days, weeks, or even years after the responses occurred. "When human behavior is apparently affected by long-delayed consequences, the change is accomplished by virtue of the human's complex social and verbal history, and should not be thought of as an instance of the simple strengthening of behavior by reinforcement" (Michael, 2004, p. 36).

For example, suppose that a piano student practiced dutifully every day for several months in preparation for a statewide competition, at which she received a first-place award for her solo piano performance. Although some might believe that the award reinforced her persistent daily practice, they would be mistaken. Delayed consequences do not reinforce behavior directly. Delayed consequences can, when combined with language, influence future behavior through instructional control and rule following. A rule is a verbal description of a behavioral contingency (e.g., "Turnip seeds planted by August 15 will yield a crop before a killing freeze"). Learning to follow rules is one way that a person's behavior can come under the control of consequences that are too delayed to influence behavior directly. A statement by the piano teacher such as "If you practice your assignments every day for one hour between now and the competition, you could win first place" could have functioned as a rule that influenced the piano student's daily practice. The student's daily practice was rule governed if daily practice occurred because of her teacher's rule.3

The following conditions provide strong indicators that behavior is the result of instructional control or rule following rather than a direct effect of reinforcement (Malott, 1988; Michael, 2004).

• No immediate consequence for the behavior is apparent.

• The response–consequence delay is greater than 30 seconds.

• Behavior changes without reinforcement.

• A large increase in the frequency of the behavior occurs following one instance of reinforcement.

• No consequence for the behavior exists, including no automatic reinforcement, but the rule exists.

Reinforcement Is Not a Circular Concept

A commonly held misconception is that reinforcement is the product of circular reasoning and therefore contributes nothing to our understanding of behavior. Circular reasoning is a form of faulty logic in which the name used to describe an observed effect is mistaken as the cause for the phenomenon. This confusion of cause and effect is circular because the observed effect is the sole basis for identifying the presumed cause. In circular reasoning, the suspected cause is not independent of its effect; they are one and the same.

Here is an example of circular reasoning that occurs often in education. A student's persistent difficulties in learning to read (effect) lead to a diagnosis of a learning disability, which is then offered as an explanation for the reading problem: "Paul's reading problem is due to his learning disability." How do you know Paul has a learning disability? Because he hasn't learned to read. Why hasn't Paul learned to read? Because his learning disability has prevented him from learning to read. And around and around it goes.

3Excellent discussions of rule-governed behavior can be found in Baum (1994); Chase and Danforth (1991); Hayes (1989); Hayes, Zettle, and Rosenfarb (1989); Malott and Garcia (1991); Malott and Trojan Suarez (2004); Reitman and Gross (1996); and Vaughan (1989).

Similarly, it would be circular reasoning if we said that teacher attention increased Robbie's study behavior because it is a reinforcer. However, the correct use is to say that because Robbie's study behavior increased when (and only when) it was followed immediately by teacher attention, teacher attention is a reinforcer. The difference is more than the direction of the relation, or a semantic sleight-of-hand. In circular reasoning the suspected cause is not manipulated as an independent variable to see whether it affects the behavior. In circular reasoning such experimental manipulation is impossible because the cause and effect are the same. Paul's learning disability cannot be manipulated as an independent variable because, as we used the concept in this example, it is nothing more than another name for the dependent variable (effect).

Reinforcement is not a circular concept because the two components of the response–consequence relation can be separated, allowing the delivery of a consequence to be manipulated to determine whether it increases the frequency of the behavior it follows. Epstein (1982) described it as follows:

If we can show that a response increases in frequency because (and only because) it is followed by a particular stimulus, we call that stimulus a reinforcer and its presentation, reinforcement. Note the lack of circularity. Reinforcement is a term we invoke when we observe certain relations between events in the world. . . . [However,] if we say, for example, that a particular stimulus strengthens a response because it is a reinforcer, we are using the term reinforcer in a circular fashion. It is because it strengthens behavior that we call the stimulus a reinforcer. (p. 4)

Epstein (1982) went on to explain the difference between using an empirically demonstrated principle such as reinforcement in a theoretical account of behavior and using a circular argument.

In some of his writings, Skinner speculates that certain behavior (for example, verbal behavior) has come about through reinforcement. He may suggest, for example, that certain behavior is strong because it was reinforced. This use of the concept is not circular, only speculative or interpretive. Using the language of reinforcement in this way is reasonable when you have accumulated a large data base. . . . When Skinner attributes some everyday behavior to past reinforcers, he is making a plausible guess based on a large data base and principles of behavior established under controlled conditions. (p. 4)

Used properly, reinforcement describes an empirically demonstrated (or speculative, in a theoretical or conceptual analysis) functional relation between a stimulus change (consequence) immediately following a response and an increase in the future frequency of similar responses. Table 1 shows restrictions and examples of appropriate use of the terms reinforcer, reinforcing, reinforcement, and to reinforce suggested by Catania (1998). Box 1 describes four mistakes commonly made when speaking and writing about reinforcement.

Table 1  The Vocabulary of Reinforcement*

reinforcer (noun)
  Restriction: A stimulus.
  Example: Food pellets were used as reinforcers for the rat's lever presses.

reinforcing (adjective)
  Restriction: A property of a stimulus.
  Example: The reinforcing stimulus was produced more often than the other, nonreinforcing stimuli.

reinforcement (noun)
  Restriction: As an operation, the delivery of consequences when a response occurs.
  Example: The fixed-ratio schedule of reinforcement delivered food after every tenth key peck.
  Restriction: As a process, the increase in responding that results from the reinforcement operation.
  Example: The experiment with monkeys demonstrated reinforcement produced by social consequences.

to reinforce (verb)
  Restriction: As an operation, to deliver consequences when a response occurs; responses are reinforced and not organisms.
  Example: When a period of free play was used to reinforce the child's completion of school work, the child's grades improved.
  Restriction: As a process, to increase responding through the reinforcement operation.
  Example: The experiment was designed to find out whether gold stars would reinforce cooperative play among first-graders.

*This vocabulary is appropriate if and only if three conditions exist: (1) a response produces consequences; (2) that type of response occurs more often when it produces those consequences than when it does not produce them; and (3) the increased responding occurs because the response has those consequences. A parallel vocabulary is appropriate to punishment (including punisher as a stimulus and punish as a verb), with the difference being that a punishing consequence makes responding occur less rather than more often.
From Learning, Interim (4th ed.) by A. C. Catania, 2007, p. 69. Cornwall-on-Hudson, NY: Sloan Publishing.

Reinforcement Makes Antecedent Stimulus Conditions Relevant

Reinforcement does more than increase the future frequency of the behavior it follows; it also changes the function of stimuli that immediately precede the reinforced behavior. By virtue of being temporally paired with the response–reinforcer contingency, certain antecedent events acquire the ability to evoke (make more likely) instances of the reinforced response class. A discriminative stimulus (SD, pronounced "ess-dee") is an antecedent stimulus correlated with the availability of reinforcement for a particular response class. Responding in the presence of the SD produces reinforcement, and responding in the SD's absence (a condition called stimulus delta [SΔ, pronounced "ess-delta"]) does not. As a result of this history of reinforcement, a person learns to make more responses in the presence of the SD than in its absence. The behavior is then considered to be under stimulus control.

With the addition of the SD, the two-term contingency for reinforcement is transformed to the three-term contingency of the discriminated operant. Figure 3 shows examples of three-term contingencies for positive reinforcement. Assuming that cold water is currently reinforcing and the person has a history of receiving cold water only under blue taps, he is more likely to hold his cup under the blue tap on the cooler (than, say, a red tap). Similarly, assuming that seeing a colorful bird is currently reinforcing and a person has a history of seeing birds more often when looking toward chirping sounds (than, say, other sounds or silence), turning one's head and looking to the left will occur at a higher frequency when the chirping sound is heard.
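The discriminated operant can be pictured with another toy simulation. Everything here (the probability-update rule, the step size, the "chirping"/"silence" labels) is an assumed model invented for illustration, not a claim from the text about how stimulus control works: responding is reinforced only in the presence of the SD, and separate response probabilities are tracked for each antecedent condition.

```python
import random

random.seed(2)

# Assumed toy model of a discriminated operant: a separate response
# probability is kept for each antecedent condition. Responding in the
# presence of the SD ("chirping") is reinforced; the same response in
# S-delta ("silence") is not, and unreinforced responding weakens.

p_respond = {"chirping": 0.5, "silence": 0.5}   # before any history

def adjust(p, reinforced, step=0.05):
    """Nudge response probability up after reinforcement, down otherwise."""
    return min(1.0, p + step) if reinforced else max(0.0, p - step)

for _ in range(500):
    antecedent = random.choice(["chirping", "silence"])
    if random.random() < p_respond[antecedent]:      # response emitted
        reinforced = (antecedent == "chirping")      # SD signals availability
        p_respond[antecedent] = adjust(p_respond[antecedent], reinforced)

print(p_respond)
```

After this history, responding is far more likely given the SD than in its absence, which is the stimulus-control outcome the paragraph above describes: the antecedent does not trigger the behavior, it makes reinforced responding more probable.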

Reinforcement Depends on Motivation

The phrase assuming that cold water is currently reinforcing in the previous paragraph holds another key to understanding reinforcement. Although reinforcement is commonly thought of as a way of motivating people (and it can be), the momentary effectiveness of any stimulus change as reinforcement depends on an existing level of motivation with respect to the stimulus change in question. Motivating operations alter the current value of stimulus changes as reinforcement (Michael, 2004).

Motivating operations (MOs) are environmental variables that have two effects on behavior: (1) They alter the operant reinforcing effectiveness of some specific stimuli, objects, or events (the value-altering effect);


Box 1: Common Mistakes in Talking and Writing about Reinforcement

A standard set of technical terms is prerequisite to the meaningful description of any scientific activity. Effectively communicating the design, implementation, and outcomes of an applied behavior analysis depends on the accurate use of the discipline's technical language. The language of reinforcement includes some of the most important elements of the behavior analyst's vocabulary.

In this box we identify four mistakes made frequently by students of applied behavior analysis when describing reinforcement-based interventions. Perhaps the most common mistake, confusing negative reinforcement with punishment, is not discussed here.

Reinforcing the Person

Although it is proper to speak of presenting a reinforcer to a learner (e.g., "The teacher gave a token to Bobby each time he asked a question"), statements such as "The teacher reinforced Bobby when he asked a question" and "Chloe was reinforced with praise each time she spelled a word correctly" are incorrect. Behaviors are reinforced, not people. Bobby's teacher reinforced question asking, not Bobby. Of course, reinforcement acts on and affects the overall person, in that it strengthens behaviors within the person's repertoire. However, the procedural focus and the primary effect of reinforcement are on the behaviors that it follows.

Practice As Reinforcement for a Skill

Educators will sometimes say that students should practice a skill because "practicing reinforces the skill." The phrase poses no problem if the speaker is describing a common outcome of practice with the everyday language connotation of reinforce, as in "to make something stronger" (e.g., to reinforce concrete by embedding steel rods in it). Well-designed drill and practice on a skill usually yields stronger performance in the form of better retention, reduced latency, higher response rates, and/or increased endurance (e.g., Johnson & Layng, 1994; Swanson & Sachse-Lee, 2000). Unfortunately, a phrase such as "practicing reinforces the skill" is often misused and misinterpreted as technical usage of the language of operant conditioning.

Although a skill that has been practiced is often stronger as a result of the practice, the practice itself could not be a reinforcer for the behavior practiced.

Practice refers to the form and manner in which the target skill is emitted (e.g., answering as many math problems as you can in 1 minute). Practicing is a behavior that could be reinforced with various consequences such as a preferred activity (e.g., "Practice solving these math problems; then you can have 10 minutes of free time"). Depending on a learner's history and preferences, the opportunity to practice a certain skill may function as a reinforcer for practicing another skill (e.g., "Finish your math problems; then you'll get to do 10 minutes of repeated reading practice").

Artificial Reinforcement

A distinction between natural and artificial reinforcers is sometimes made, as in this statement: "As the students' success rates improved, we gradually stopped using artificial reinforcers, such as stickers and trinkets, and increased the use of natural reinforcers." Some authors have suggested that applications of the principles of behavior result in "artificial control" (e.g., Smith, 1992). A behavior–consequence contingency may be effective or ineffective as reinforcement, but none of its elements (the behavior, the consequence, or the resultant behavior change) is, or can be, artificial.

The reinforcement contingencies and stimuli used as reinforcers in any behavior change program are always contrived (otherwise there would be no need for the program), but they are never artificial (Skinner, 1982). The meaningful distinction when talking about reinforcement contingencies is not between the natural and the artificial, but between contingencies that already exist in a given setting prior to a behavior change program and contingencies that are contrived as part of the program (Kimball & Heward, 1993). Although the ultimate effectiveness of a behavior change program may depend on shifting control from contrived to naturally occurring contingencies, there is no such thing as artificial reinforcement.

Reinforcement and Feedback As Synonyms

Some speakers and writers mistakenly use reinforcement and feedback interchangeably. The two terms refer to different operations and outcomes, though part of each term's meaning encompasses part of the other's. Feedback is information a person receives about a particular aspect of his or her behavior following its completion (e.g., "Very good, Kathy. Two quarters equal 50 cents."). Feedback is most often provided in the form of verbal descriptions of performance, but it can also be provided by other means such as vibration or lights (e.g., Greene, Bailey, & Barber, 1981). Because feedback is a consequence that often results in the increased future frequency of behavior, it sometimes leads to the faulty assumption that reinforcement must involve feedback or that reinforcement is just a behaviorist's term for feedback.

Reinforcement always increases the future frequency of responding. Feedback may result in (a) an increase in the future frequency of the student's performance as a reinforcement effect and/or as a prompt or instruction on how to respond next time (e.g., "Your handwriting is improving, Jason, but don't forget to cross your Ts"), and/or (b) a reduction in the frequency of some aspect of the learner's performance as a function of punishment or instruction (e.g., "You dropped your elbow on that pitch. Don't do that."). Feedback may have multiple effects, increasing one aspect of performance and decreasing another. Feedback may also have no effect on future responding whatsoever.

Reinforcement is defined functionally by its effect on future responding; feedback is defined by its formal characteristics (information about some aspect of performance). The operation of either concept is neither necessary nor sufficient for the other. That is, reinforcement may occur in the absence of feedback, and feedback may occur without a reinforcement effect.

Sometimes Commonsense Language Is Better

The technical language of behavior analysis is complex, and mastering it is no simple matter. Beginning students of behavior analysis are not the only ones who commit terminology errors. Well-trained practitioners, established researchers, and experienced authors also make mistakes now and then when speaking and writing about behavior analysis. Using behavioral concepts and principles—such as positive reinforcement—to confidently explain complex situations involving multiple processes and uncontrolled and unknown variables is a mistake that catches the most attentive and conscientious of behavior analysts at times.

Instead of invoking the terminology and concepts of reinforcement to explain the influence of temporally distant consequences on behavior, it is probably wiser to follow Jack Michael's (2004) advice and simply use everyday descriptive language and commonsense relations:

Incorrectly used technical language is worse than commonsense language because it suggests that the situation is well understood, and it may displace serious attempts at further analysis. Until we are able to provide an accurate analysis of the various processes relevant to indirect effects [of reinforcement], we are better off using ordinary descriptive language. Thus, say "the successful grant application is likely to encourage future efforts in the same direction," but don't say it as though you had the science of behavior behind you. Stop referring to successful settlements of a labor dispute as reinforcement for striking, and successful election of a political candidate as reinforcement for political activity. . . . Don't talk about good grades as reinforcement for effective study behavior, although they are no doubt responsible for maintaining it in some cases. Just say that they're responsible for maintaining it. Restraint of this sort will deprive some of us of an opportunity to (incorrectly) display our technical knowledge, but so much the better. (p. 165, emphasis in original)

and (2) They alter the momentary frequency of all behavior that has been reinforced by those stimuli, objects, or events (the behavior-altering effect). The value-altering effect, like response-reinforcement delay, is relevant to the effectiveness of the reinforcer at the time of conditioning, and stating that the consequence is a form of reinforcement implies that a relevant MO is in effect and at sufficient strength. (p. 31)

In other words, for a stimulus change to "work" as reinforcement at any given time, the learner must already want it. This is a critical qualification in terms of the environmental conditions under which the effects of reinforcement will be seen. Michael (2004) explained this qualification as follows:

The behavior-altering effect is relevant to the increased future frequency of the reinforced behavior, and must be added as a third qualification to the operant reinforcement relation: In a given stimulus situation (S), when a type of behavior (R) is followed immediately by reinforcement (SR), there will be an increase in the future frequency of that type of behavior in the same or similar stimulus conditions, but the increased frequency will only be seen when the MO relevant to the reinforcement that was used is again in effect. (p. 31, emphasis in original)

Motivating operations take two forms. An MO that increases the current effectiveness of a reinforcer is called an establishing operation (EO) (e.g., food deprivation makes food more effective as a reinforcer); an MO that decreases the current effectiveness of a reinforcer is an abolishing operation (AO) (e.g., food ingestion reduces the effectiveness of food as a reinforcer).4

4Motivating operations are described in detail in the chapter entitled "Motivating Operations."


Figure 3 Three-term contingency illustrating positive reinforcement of a discriminated operant:

SD → R → SR+
Blue tap on water cooler → Hold cup under tap and push lever → Cold water flows into cup
Chirping sound to left → Turn head and look left → See colorful bird
(Effect: increased future frequency of similar responses in the presence of the SD)

A response (R) emitted in the presence of a discriminative stimulus (SD) is followed closely in time by a stimulus change (SR+) and results in an increased frequency of similar responses in the future when the SD is present. A discriminated operant is the product of a conditioning history in which responses in the presence of the SD produced reinforcement while similar responses in the absence of the SD (a condition called stimulus delta [SΔ]) have not been reinforced (or have produced a reduced amount or quality of reinforcement relative to the SD condition).

Adding the establishing operation (EO) to a discriminated operant results in a four-term contingency, as shown in Figure 4. Spending several hours in a hot and stuffy room without water is an EO that (a) makes water more effective as a reinforcer and (b) increases the momentary frequency of all behaviors that have produced water in the past. Similarly, a park ranger stating prior to a hike that any hiker who describes the coloring of the bird that makes a certain chirping sound will receive a $5 token for the gift shop is an EO that will (a) make seeing a bird that makes the chirping sound effective as reinforcement and (b) increase the frequency of all behaviors (e.g., turning one's head and looking around) that have produced similar consequences (in this case, seeing the source of sounds) in the past.

In plain English, establishing operations (EOs) determine what an individual wants at any particular moment. EOs are dynamic, always changing. The reinforcer value (the want) goes up with increasing levels of deprivation and goes down with increasing levels of satiation.

Vollmer and Iwata (1991) demonstrated how the reinforcing effectiveness of three classes of stimuli—food, music, and social attention—varied under conditions of deprivation and satiation. Participants were five adults with developmental disabilities, and the dependent variable was the number of responses per minute on two motor tasks—pressing a switch or picking up small blocks from a container and putting them through the hole in the top of another container. All sessions lasted 10 minutes and began with the experimenter saying, "Do this, [participant's name]," and modeling the response. During baseline, participants' responses received no programmed consequences. During the deprivation and satiation conditions, responses were followed by presentation of either food, music, or social attention. Initially each response was followed by the programmed consequence; this gradually shifted to every third, fifth, or tenth response being followed by the consequence.
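The thinning just described (every response reinforced at first, then every third, fifth, and tenth response) amounts to a fixed-ratio arrangement whose requirement is stepped up over time. A minimal sketch of such a counter; the class name and usage are our own illustration, not anything from the study:

```python
class FixedRatioSchedule:
    """Delivers the programmed consequence after every `ratio`-th response."""

    def __init__(self, ratio=1):
        self.ratio = ratio
        self._count = 0        # responses since the last consequence

    def record_response(self):
        """Return True when this response earns the consequence."""
        self._count += 1
        if self._count >= self.ratio:
            self._count = 0
            return True
        return False

# Thinning from continuous reinforcement (ratio=1) toward leaner ratios:
schedule = FixedRatioSchedule(ratio=3)   # every third response
earned = [schedule.record_response() for _ in range(9)]
# earned -> [False, False, True] repeated three times
```

Stepping `ratio` from 1 to 3, 5, and then 10 across sessions reproduces the gradual shift the study describes.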

Different procedures were used to create depriva-tion and satiation conditions for each stimulus class.With food, for example, baseline and deprivation con-dition sessions were conducted 30 minutes prior to aparticipant’s scheduled lunchtime; sessions during the satiation condition were conducted within 15 min-utes after the participant had eaten lunch. For social attention, baseline and deprivation condition sessionswere conducted immediately following a 15-minute period in which the participant had either been alone or had been observed to have had no social inter-action with another person. Immediately prior to eachsession in the satiation condition, the experimenter pro-vided continuous social interaction (e.g., played a simple game, conversation) with the participant for 15 minutes.

All five participants responded at higher rates under the deprivation condition than during the satiation

Figure 4 Four-term contingency illustrating positive reinforcement of a discriminated operant made current by a motivating operation:

EO → SD → R → SR+
Water deprived for 2 hours in hot, stuffy room → Blue tap on water cooler → Hold cup under tap and push lever → Cold water flows into cup
Park ranger says, "After our hike, anyone who describes the bird that makes this chirping sound will receive a $5 token for the gift shop." → Chirping sound to left → Turn head and look left → See colorful bird
(Effect: increased future frequency of similar responses under similar conditions in the future)

An establishing operation (EO) increases the momentary effectiveness of a stimulus change as a reinforcer, which in turn makes the SD more likely to evoke behavior that has been reinforced by that stimulus change in the past.
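For readers who find a data structure clarifying, the four terms can be held in a simple record; the field names below are ours, and the example simply restates the water-cooler row of the figure:

```python
from dataclasses import dataclass

@dataclass
class FourTermContingency:
    """EO -> SD -> R -> SR+ for a discriminated operant."""
    eo: str          # establishing operation
    sd: str          # discriminative stimulus
    response: str    # behavior emitted in the presence of the SD
    sr_plus: str     # reinforcing stimulus change that follows the response

water = FourTermContingency(
    eo="Water deprived for 2 hours in hot, stuffy room",
    sd="Blue tap on water cooler",
    response="Hold cup under tap and push lever",
    sr_plus="Cold water flows into cup",
)
```

Reading the fields left to right preserves the temporal order of the contingency: the EO makes the SR+ momentarily valuable, the SD sets the occasion, the response produces the stimulus change.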

condition. Figure 5 shows the effects of deprivation and satiation of social attention on the effectiveness of social attention as a reinforcer for two of the study's participants, Donny and Sam. Other researchers have reported similar findings concerning the effects of deprivation and satiation of various stimuli and events as motivating operations that affect the relative effectiveness of reinforcement (e.g., Gewirtz & Baer, 1958; Klatt, Sherman, & Sheldon, 2000; North & Iwata, 2005; Zhou, Iwata, & Shore, 2002).

Figure 5 Responses per minute by two students during baseline and when social attention was used as reinforcement under deprivation and satiation conditions for social attention. From "Establishing Operations and Reinforcement Effects" by T. R. Vollmer and B. A. Iwata, 1991, Journal of Applied Behavior Analysis, 24, p. 288. Copyright 1991 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.


Automaticity of Reinforcement

A reinforcing connection need not be obvious to the individual reinforced.

—B. F. Skinner (1953, p. 75)

The fact that a person does not have to understand or verbalize the relation between his actions and a reinforcing consequence, or for that matter even be aware that a consequence has occurred, for reinforcement to occur is known as the automaticity of reinforcement. Skinner (1983) provided an interesting example of automaticity in the third and final volume of his autobiography, A Matter of Consequences. He described an incident that took place at a meeting of distinguished scholars who had been invited to discuss the role of intention in political activity. At one point during the meeting, the psychologist Erich Fromm began to argue that "people were not pigeons," perhaps implying that an operant analysis based on positive reinforcement could not explain human behavior, which is the product of thought and free will. Skinner recounted what happened next:

I decided that something had to be done. On a scrap of paper I wrote, "Watch Fromm's left hand. I am going to shape [reinforce by successive approximations] a chopping motion" and passed it down the table to Halleck [a member of the group]. Fromm was sitting directly across the table and speaking mainly to me. I turned my chair slightly so that I could see him out of the corner of my eye. He gesticulated a great deal as he talked, and whenever his left hand came up, I looked straight at him. If he brought the hand down, I nodded and smiled. Within five minutes he was chopping the air so vigorously that his wristwatch kept slipping out over his hand. (pp. 150–151, words in brackets added)

Arbitrariness of the Behavior Selected

So far as the organism is concerned, the only important property of the contingency is temporal.

—B. F. Skinner (1953, p. 85)

No logical or adaptive connection between behavior and a reinforcing consequence is necessary for reinforcement to occur. In other words, reinforcement will strengthen any behavior that immediately precedes it. This arbitrary nature of the behavior selected is critical to understanding reinforcement. All other relations (e.g., what's logical, desirable, useful, appropriate) must compete with the critical temporal relation between behavior and consequence. "To say that reinforcement is contingent upon a response may mean nothing more than that it followed the response . . . conditioning takes place presumably because of the temporal relation only, expressed in terms of the order and proximity of response and reinforcement" (Skinner, 1948, p. 168).

5It is a mistake to assume that all superstitious behavior is the direct result of adventitious reinforcement. Many superstitious behaviors are probably the result of following cultural practices. For example, high school baseball players may wear their caps inside out and backwards when a late-inning rally is needed because they have seen major leaguers don such "rally caps" in the same situation.

Skinner (1948) demonstrated the arbitrary nature of the behaviors selected by reinforcement in one of his most famous experimental papers, "'Superstition' in the Pigeon." He gave pigeons a small amount of food every 15 seconds, "with no reference whatsoever to the bird's behavior" (p. 168). The fact that reinforcement will strengthen whatever behavior it immediately follows was soon evident. Six of the eight birds developed idiosyncratic behaviors "so clearly defined, that two observers could agree perfectly in counting instances" (p. 168). One bird walked counterclockwise around the cage; another repeatedly thrust its head into one of the upper corners of the cage. Two birds acquired a "pendulum motion of the head and body, in which the head was extended forward and swung from right to left with a sharp movement followed by a somewhat slower return" (p. 169). The pigeons had exhibited none of those behaviors at "any noticeable strength" during adaptation to the cage or before the food was periodically presented.

Whatever behavior the pigeons happened to be executing when the food hopper appeared tended to be repeated, which made it more likely to be occurring when food appeared the next time. That is, reinforcement was not contingent (in the sense of dependent) on the behavior; it was only a coincidence that reinforcement sometimes followed the behavior. Such accidentally reinforced behavior is called "superstitious" because it has no influence on whether reinforcement follows. Humans engage in many superstitious behaviors. Sports provide countless examples: A basketball player tugs on his shorts before shooting a foul shot, a golfer carries his lucky ball marker, a batter goes through the same sequence of adjusting his wristbands before each pitch, a college football fan wears a goofy-looking necklace made of inedible nuts to bring good luck to his team.5
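Skinner's procedure, food every 15 seconds regardless of behavior, is easy to mimic in a toy simulation. In the sketch below the number of behaviors, the increment size, and the one-response-per-second assumption are all ours, not Skinner's; the point is only that a response-independent schedule can still strengthen whatever response it happens to follow:

```python
import random

def superstition_simulation(n_behaviors=4, session_seconds=600,
                            food_interval=15, increment=0.5, seed=1):
    """Toy model of a response-independent (fixed-time) food schedule:
    food arrives every `food_interval` seconds no matter what the bird
    is doing, yet whichever response it happens to follow is strengthened."""
    weights = [1.0] * n_behaviors      # initial response strengths
    rng = random.Random(seed)
    deliveries = 0
    for t in range(1, session_seconds + 1):
        # One response per second, emitted in proportion to current strength.
        current = rng.choices(range(n_behaviors), weights=weights)[0]
        if t % food_interval == 0:     # hopper appears: adventitious pairing
            weights[current] += increment
            deliveries += 1
    return weights, deliveries

weights, deliveries = superstition_simulation()
# 600-second session / 15-second interval = 40 food deliveries
```

Because a strengthened response is more likely to be under way at the next delivery, the weights tend to concentrate on one or two arbitrary behaviors, a rich-get-richer dynamic that mirrors the idiosyncratic head-thrusting and circling Skinner observed.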

The importance of understanding the arbitrariness of reinforcement goes far beyond providing a possible explanation for the development of harmless superstitious and idiosyncratic behaviors. The arbitrary nature of selection by reinforcement may explain the acquisition and maintenance of many maladaptive and challenging behaviors. For example, a caregiver's well-meaning social attention provided in an attempt to console or divert a person who is hurting himself may help shape and maintain the very behavior the caregiver is trying to prevent or eliminate. Kahng, Iwata, Thompson, and Hanley (2000) documented with a functional analysis that social reinforcement maintained the self-injurious behavior (SIB) and aggression of three adults with developmental disabilities. Kahng and colleagues' data support the hypothesis that aberrant behaviors may have been selected and maintained by social attention because of the arbitrariness of reinforcement.

Automatic Reinforcement

Some behaviors produce their own reinforcement independent of the mediation of others. For example, scratching an insect bite relieves the itch. Behavior analysts use the term automatic reinforcement to identify a behavior–reinforcement relation that occurs without social mediation, that is, without the presentation of consequences by other people (Vaughan & Michael, 1982; Vollmer, 1994, 2006). Response products that function as automatic reinforcement are often in the form of a naturally produced sensory consequence that "sounds good, looks good, tastes good, smells good, feels good to the touch, or the movement itself is good" (Rincover, 1981, p. 1).

Persistent, nonpurposeful, repetitive self-stimulatory behaviors (e.g., flipping fingers, head rolling, body rocking, toe walking, hair pulling, fondling body parts) may produce sensory stimuli that function as automatic reinforcement. Such "self-stimulation" is thought to be a factor in the maintenance of self-injurious behavior (Iwata, Dorsey, Slifer, Bauman, & Richman, 1994), stereotypic repetitive movements, and "nervous habits" such as hair pulling (Rapp, Miltenberger, Galensky, Ellingson, & Long, 1999), nail biting, chewing on the mouth or lips, and object manipulation such as continually twirling a pencil or fondling jewelry (Miltenberger, Fuqua, & Woods, 1998).

The response product that functions as automatic reinforcement may be an unconditioned reinforcer or a once neutral stimulus that, because it has been paired with other forms of reinforcement, has become a conditioned reinforcer. Sundberg, Michael, Partington, and Sundberg (1996) described a two-stage conditioning history that may account for this type of conditioned automatic reinforcement.

For example, a person may persist in singing or humming a song while coming home from a movie despite no obvious direct reinforcement for singing. In order for this behavior to occur as automatically reinforced behavior, a special two-stage conditioning history is necessary. In stage one, some stimulus (e.g., a song) must be paired with an existing form of conditioned or unconditioned reinforcement (e.g., an enjoyable movie, popcorn, relaxation). As a result, the new stimulus can become a form of conditioned reinforcement (e.g., hearing the song may now be a new form of conditioned reinforcement). In stage two, the emission of a response (for whatever reason) produces a response product (i.e., the auditory stimuli produced by singing the song) that has topographical similarity to that previously neutral stimulus (e.g., the song), and may now have self-strengthening properties. (pp. 22–23)

Several theorists have suggested that automatic reinforcement may help to explain the extensive babbling of infants and how babbling shifts naturally, without apparent intervention from others, from undifferentiated vocalizations to the speech sounds of their native language (e.g., Bijou & Baer, 1965; Mowrer, 1950; Skinner, 1957; Staats & Staats, 1963; Vaughan & Michael, 1982). Caregivers frequently talk and sing while holding, feeding, and bathing a baby. As a result of repeated pairing with various reinforcers (e.g., food, warmth), the sounds of a caregiver's voice may become conditioned reinforcers for the baby. The baby's babbling is automatically reinforced when it produces sounds that match or closely approximate the caregiver's. At that point, "The young child alone in the nursery may automatically reinforce his own exploratory vocal behavior when he produces sounds that he has heard in the speech of others" (Skinner, 1957, p. 58).

Although the idea that automatic reinforcement is a factor in early language acquisition has been proposed for many years, experimental analyses of the phenomenon have appeared in the literature only recently (e.g., Miguel, Carr, & Michael, 2002; Sundberg et al., 1996; Yoon & Bennett, 2000). Sundberg and colleagues (1996) reported the first study showing the effects of a stimulus–stimulus pairing procedure on the frequency with which children emitted new vocal sounds without direct reinforcement or prompts to respond. Five children, ages 2 to 4 and representing a broad range of language abilities, served as subjects. During the prepairing (baseline) condition, the parents and in-home trainers sat a few feet away from the child and recorded each word or vocal sound emitted by the child as he played with a train set and several toys. Data were collected in consecutive 1-minute intervals. The adults did not interact with the subject during the prepairing baseline. The stimulus–stimulus pairing procedure consisted of a familiar adult approaching the child, emitting a target vocal sound, word, or phrase, and then immediately delivering a stimulus that had been established previously as a form of reinforcement for the child (e.g., tickles, praise, bouncing in a parachute held by adults). This stimulus–stimulus pairing procedure was repeated 15 times per minute for 1 or 2 minutes. The adult used a variety of pitches and intonations when voicing the target sound, word, or phrase.


Figure 6 Cumulative number of times a 4-year-old child with autism vocalized "apple" before and after "apple" had been paired repeatedly with an established form of reinforcement. Automatic reinforcement may explain the increased frequency of the child's vocalizing "apple" after pairing. From "Automatic Reinforcement: Repertoire-Altering Effects of Remote Contingencies" by M. L. Sundberg, J. Michael, J. W. Partington, and C. A. Sundberg, 1996, The Analysis of Verbal Behavior, 13, p. 27. Copyright 1996 by the Association for Behavior Analysis, Inc. Used by permission.

6Mands, tacts, and intraverbals—three elementary verbal operants first described by Skinner (1957)—are explained in the chapter entitled "Verbal Behavior."

During the postpairing condition, which began immediately after the stimulus–stimulus pairings, the adult moved away from the child and conditions were the same as during the prepairing condition.

The stimulus–stimulus pairing of a vocal sound, word, or phrase with an established reinforcer was followed by an increased frequency of the targeted word during the postpairing condition for all five children. Figure 6 shows the results of a representative sample of one of three pairings conducted with Subject 2, a 4-year-old boy with autism. Subject 2 had a verbal repertoire of more than 200 mands, tacts, and intraverbals, but rarely emitted spontaneous vocalizations or engaged in vocal play.6 During the prepairing condition, the child did not say the target word and emitted four other vocalizations at a mean rate of 0.5 per minute. The stimulus–stimulus pairing procedure consisted of pairing the word apple with tickles approximately 15 times in 60 seconds. Immediately after the pairing, the subject said "apple" 17 times in 4 minutes, a rate of 4.25 responses per minute. In addition, the child said "tickle" four times within the first minute of the postpairing condition. Sundberg and colleagues' results provide evidence that the children's vocal response products may have functioned as automatic conditioned reinforcement after being paired with other forms of reinforcement.

Kennedy (1994) noted that applied behavior analysts use two meanings of the term automatic reinforcement. In the first instance, automatic reinforcement is determined by the absence of social mediation (Vollmer, 1994, 2006). In the second instance, when functional behavior assessments do not identify a reinforcer for a persistent behavior, some behavior analysts hypothesize that automatic reinforcement is the controlling variable. When SIB occurs in the absence of social attention or any other known form of reinforcement, automatic reinforcement is often assumed to be involved (e.g., Fisher, Lindauer, Alterson, & Thompson, 1998; Ringdahl, Vollmer, Marcus, & Roane, 1997; Roscoe, Iwata, & Goh, 1998). Determining that a behavior may be maintained by automatic reinforcement, and when possible isolating or substituting the source of that reinforcement (e.g., Kennedy & Souza, 1995; Shore, Iwata, DeLeon, Kahng, & Smith, 1997), has important implications for designing interventions to either capitalize on the automatic reinforcing nature of the behavior or counteract it.

In summing up the uses and limitations of automatic reinforcement as a concept, Vollmer (2006) suggested the following:

• Practitioners should recognize that not all reinforcement is planned or socially mediated.

• Some behaviors maintained by automatic reinforcement (e.g., self-stimulation, stereotypy) may not be reduced or eliminated with certain procedures (e.g., timeout, planned ignoring, or extinction).

• Affixing the label automatic reinforcement to an observed phenomenon too quickly may limit our analysis and effectiveness by precluding further efforts to identify the reinforcer actually maintaining the behavior.

• When socially mediated contingencies are difficult to arrange or simply not available, practitioners might consider automatic reinforcement as a potential aim.

Classifying Reinforcers

In this section we review the technical classification of reinforcers by their origin as well as several practical categories by which practitioners and researchers often describe and classify reinforcers by their formal characteristics. The reader should recognize, however, that all reinforcers, regardless of type or classification, are the same in their most important (i.e., defining) characteristic: All reinforcers increase the future frequency of the behavior that immediately precedes them.

Classification of Reinforcers by Origin

There are two basic types of reinforcers by origin—that is, whether a reinforcer is the product of the evolution of the species (an unconditioned reinforcer) or the result of the learning history of the individual (a conditioned reinforcer).

Unconditioned Reinforcers

A stimulus change that functions as reinforcement even though the learner has had no particular learning history with it is called an unconditioned reinforcer. (Some authors use the terms primary reinforcer and unlearned reinforcer as synonyms for unconditioned reinforcer.) Because unconditioned reinforcers are the product of the evolutionary history of a species (phylogeny), all biologically intact members of a species are more or less susceptible to reinforcement by the same unconditioned reinforcers. For example, food, water, oxygen, warmth, and sexual stimulation are stimuli that do not have to undergo a learning history to function as reinforcers. Food will function as an unconditioned reinforcer for a human deprived of sustenance; water will function as an unconditioned reinforcer for a person deprived of liquid; and so forth.

7Remember, it is the environment, not the learner, that does the pairing. The learner does not have to "associate" the two stimuli.

Human touch may also be an unconditioned reinforcer (Gewirtz & Pelaez-Nogueras, 2000). Pelaez-Nogueras and colleagues (1996) found that infants preferred face-to-face interactions that included touch stimulation. Two conditioning treatments were implemented in alternated, counterbalanced order. Under the touch condition, infants' eye-contact responses were followed immediately by adult attention (eye contact), smiling, cooing, and rubbing of the infants' legs and feet. Eye-contact responses during the no-touch condition were followed by eye contact, smiles, and coos from the adult, but no touching. All of the babies in the study emitted eye contact for longer durations, smiled and vocalized at higher rates, and spent less time crying and protesting in the contingent condition that included touch. From these results and several related studies, Pelaez-Nogueras and colleagues concluded that "these results suggest that . . . touch stimulation can function as a primary reinforcer for infant behavior" (p. 199).

Conditioned Reinforcers

A conditioned reinforcer (sometimes called a secondary reinforcer or learned reinforcer) is a previously neutral stimulus change that has acquired the capability to function as a reinforcer through stimulus–stimulus pairing with one or more unconditioned reinforcers or conditioned reinforcers. Through repeated pairings, the previously neutral stimulus acquires the reinforcement capability of the reinforcer(s) with which it has been paired.7 For example, after a tone has been paired repeatedly with food when food is delivered as a reinforcer, the tone will function as a reinforcer when an EO has made food a currently effective reinforcer.

Neutral stimuli can also become conditioned reinforcers for humans without direct physical pairing with another reinforcer, through a pairing process Alessi (1992) called verbal analog conditioning.

For example, a class of preschool children who have been receiving M&M candies for good school work might be shown pieces of cut up yellow construction paper and told, "These pieces of yellow paper are what big kids work for" (Engelmann, 1975, pp. 98–100). Many children in the group immediately refuse M&Ms, and work extra hard, but accept only pieces of yellow paper as their rewards.

We might say that the pieces of yellow paper act as "learned reinforcers." Laboratory research tells us that neutral stimuli become reinforcers only through direct pairing with primary reinforcers (or other "learned reinforcers"). Yellow paper was not paired with any reinforcer and certainly not with the primary (M&Ms) reinforcers. Yellow paper acquired reinforcing properties even more powerful than the primary M&Ms reinforcers, as demonstrated by the children's refusal to accept M&Ms, demanding instead pieces of yellow paper. (For the sake of this example, assume that the children had not been satiated with M&Ms just before the session.) (p. 1368)

It is sometimes thought that the "power" of a conditioned reinforcer is determined by the number of times it has been paired with other reinforcers. However, a statement such as, "The more often the tone is paired with food, the more reinforcing the tone will become" is not completely accurate. Although numerous pairings will increase the likelihood that the tone will function as a conditioned reinforcer in the first place (though a single pairing is sometimes sufficient), the momentary effectiveness of the tone as a reinforcer will be a function of the relevant EO for the reinforcer(s) with which the conditioned reinforcer has been paired. A tone that has been paired only with food will function as an effective reinforcer for a food-deprived learner, but the tone will have little effect as a reinforcer if the learner has just consumed a lot of food, regardless of the number of times it has been paired with food.

A generalized conditioned reinforcer is a conditioned reinforcer that, as a result of having been paired with many unconditioned and conditioned reinforcers, does not depend on a current EO for any particular form of reinforcement for its effectiveness. For example, social attention (proximity, eye contact, praise) is a generalized conditioned reinforcer for many people because it occurs simultaneously with many reinforcers. The more reinforcers with which a generalized conditioned reinforcer has been paired, the greater is the likelihood that it will be effective at any given time. Because it can be exchanged for a nearly limitless variety of backup reinforcers, money is a generalized conditioned reinforcer whose effectiveness is usually independent of current establishing operations.

It is sometimes thought that a conditioned reinforcer is called a generalized conditioned reinforcer because it can function as reinforcement across a wide range of behaviors. But this is not so—any reinforcer is capable of strengthening any behavior that immediately precedes its occurrence. A conditioned reinforcer is called a generalized conditioned reinforcer because it is effective as reinforcement across a wide range of EO conditions. Because of their versatility across EO conditions, generalized conditioned reinforcers offer great advantages for practitioners, who often have limited control of the EOs for particular reinforcers.

Generalized conditioned reinforcers provide the basis for implementing a token economy, a reinforcement-based system capable of improving multiple behaviors of multiple participants (e.g., Higgins, Williams, & McLaughlin, 2001; Phillips, Phillips, Fixsen, & Wolf, 1971). In a token economy, participants receive tokens (e.g., points, check marks, poker chips) contingent on a variety of target behaviors. Participants accumulate the tokens and exchange them at specific times for their choices from a menu of backup reinforcers (e.g., free time, computer time, snacks).
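The earn-accumulate-exchange cycle just described can be sketched as simple bookkeeping. This is only an illustration, not a procedure from the text; the class name, menu, and token prices are hypothetical:

```python
class TokenEconomy:
    """Minimal token-economy ledger (illustrative only; names are hypothetical)."""

    def __init__(self, backup_menu):
        # backup_menu maps each backup reinforcer to its token price.
        self.backup_menu = backup_menu
        self.balances = {}

    def award(self, participant, tokens=1):
        """Deliver tokens contingent on an occurrence of a target behavior."""
        self.balances[participant] = self.balances.get(participant, 0) + tokens

    def exchange(self, participant, reinforcer):
        """At exchange time, trade accumulated tokens for a backup reinforcer."""
        price = self.backup_menu[reinforcer]
        if self.balances.get(participant, 0) >= price:
            self.balances[participant] -= price
            return True
        return False


economy = TokenEconomy({"free time": 5, "computer time": 8, "snack": 3})
for _ in range(6):               # six scored occurrences of the target behavior
    economy.award("Robbie")
earned_break = economy.exchange("Robbie", "free time")  # 6 tokens >= 5, succeeds
```

Because exchange happens only at scheduled times and against a menu of varied backups, the tokens function much like money: their effectiveness does not hinge on any one EO.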

Classification of Reinforcers by Formal Properties

When applied behavior analysts describe reinforcers by their physical properties—a practice that can enhance communication among researchers, practitioners, and the agencies and people they serve—reinforcers are typically classified as edible, sensory, tangible, activity, or social.

Edible Reinforcers

Researchers and practitioners have used bites of preferred foods, snacks, or candy, and sips of drinks, as reinforcers. One interesting and important use of edibles as reinforcers is in the treatment of chronic food refusal in children. For example, Riordan, Iwata, Finney, Wohl, and Stanley (1984) used “highly preferred food items” as reinforcers to increase the food intake of four children at a hospital treatment facility. The treatment program consisted of dispensing the high-preference food items (e.g., cereal, yogurt, canned fruit, ice cream) contingent on the consumption of a target food item (e.g., vegetables, bread, eggs).

Edible reinforcers were also used by Kelley, Piazza, Fisher, and Oberdorff (2003) to increase cup drinking by Al, a 3-year-old boy who had been admitted to a day treatment program for food refusal and bottle dependency. The researchers measured the percentage of trials in which Al consumed 7.5 ml of three different liquids from the cup. During baseline, when Al was praised if he consumed the drink, his consumption averaged 0%, 44.6%, and 12.5% of trials for orange juice, water, and a chocolate drink, respectively. During the positive reinforcement component of the cup-drinking intervention, each time Al consumed the drink the therapist praised him (as was done in baseline) and delivered a level spoon of peaches (a preferred food) to his mouth. Al consumed all three beverages on 100% of the trials during the positive reinforcement condition.

Sensory Reinforcers

Various forms of sensory stimulation such as vibration (e.g., massager), tactile stimulation (e.g., tickles, strokes with a feather boa), flashing or sparkling lights, and music have been used effectively as reinforcers (e.g., Bailey & Meyerson, 1969; Ferrari & Harris, 1981; Gast et al., 2000; Hume & Crossman, 1992; Rincover & Newsom, 1985; Vollmer & Iwata, 1991).

Tangible Reinforcers

Items such as stickers, trinkets, school materials, trading cards, and small toys often serve as tangible reinforcers. An object’s intrinsic worth is irrelevant to its ultimate effectiveness as a positive reinforcer. Virtually any tangible item can serve as a reinforcer. Remember Engelmann’s (1975) kindergarten students who worked for yellow slips of paper!

Activity Reinforcers

When the opportunity to engage in a certain behavior serves as reinforcement, that behavior may be called an activity reinforcer. Activity reinforcers may be everyday activities (e.g., playing a board game, leisure reading, listening to music), privileges (e.g., lunch with the teacher, shooting baskets in the gym, being first in line), or special events (e.g., a trip to the zoo).

McEvoy and Brady (1988) evaluated the effects of contingent access to play materials on the completion of math worksheets by three students with autism and behavior disorders. During baseline, the teacher told the students to complete the problems as best they could, and that they should either complete other unfinished assignments or “find something else to do” if they finished the worksheets before a 6-minute timing elapsed. No other prompts or instructions were given for completing the worksheets. The teacher praised the completion of the worksheets.

On the first day of intervention for each student, he was taken to another room and shown a variety of toys and play materials. The teacher told the student he would have approximately 6 minutes to play with the materials if he met a daily criterion for completing math problems. Figure 7 shows the results. During baseline, the rate at which all three students correctly completed problems was either low (Dicky) or highly variable (Ken and Jimmy). When contingent access to the play activities was introduced, each student’s completion rate increased and eventually exceeded criterion levels.

Premack (1959) hypothesized that activity reinforcers can be identified by looking at the relative distribution of behaviors in a free operant situation. Premack believed that behaviors themselves could be used as reinforcers and that the relative frequency of behavior was an important factor in determining how effective a given behavior might be as a reinforcer if the opportunity to engage in the behavior is contingent on another behavior. The Premack principle states that making the opportunity to engage in a behavior that occurs at a relatively high free operant (or baseline) rate contingent on the occurrence of a low-frequency behavior will function as reinforcement for the low-frequency behavior. For a student who typically spends much more time watching TV than doing homework, a contingency based on the Premack principle (informally known as “Grandma’s Law”) might be, “When you have finished your homework, you can watch TV.”

Building on Premack’s concept, Timberlake and Allison (1974) proposed the response-deprivation hypothesis as a model for predicting whether access to one behavior (the contingent behavior) will function as reinforcement for another behavior (the instrumental response), based on the relative baseline rates at which each behavior occurs and whether access to the contingent behavior represents a restriction compared to the baseline level of engagement. Restricting access to a behavior presumably acts as a form of deprivation that serves as an EO, thus making the opportunity to engage in the restricted behavior an effective form of reinforcement (Allison, 1993; Iwata & Michael, 1994).

Figure 7 Number of math problems completed correctly and incorrectly per minute by three special education students (Dicky, Ken, and Jimmy) during baseline and contingent access to play materials. Dashed horizontal lines indicate criteria. From “Contingent Access to Play Materials as an Academic Motivator for Autistic and Behavior Disordered Children” by M. A. McEvoy and M. P. Brady, 1988, Education and Treatment of Children, 11, p. 15. Copyright 1998 by the Editorial Review Board of Education and Treatment of Children. Used by permission.

Iwata and Michael (1994) cited a series of three studies by Konarski and colleagues as demonstrating the veracity and applied implications of the response-deprivation hypothesis. In the first study, when students were given access to coloring (a high-probability behavior) contingent on completing math problems (a low-probability behavior), they spent more time doing math, but only if the reinforcement schedule represented a restriction of the amount of time spent coloring compared to baseline (Konarski, Johnson, Crowell, & Whitman, 1980). The researchers found that a contingency in which students could earn more time coloring than they did in baseline for completing math problems was ineffective. These basic findings were reproduced in a subsequent study in which access to reading (or math, depending on the subject) was contingent on math (or reading) (Konarski, Crowell, Johnson, & Whitman, 1982). In the third study, Konarski, Crowell, and Duggan (1985) took the response-deprivation hypothesis a step further by examining the “reversibility of reinforcement” within subjects; that is, engaging in either of two activities—reading or math—could serve as reinforcement for increased performance in the other activity, given a response-deprivation condition for the contingent activity. Response deprivation for reading as the contingent response resulted in increases in math (the instrumental response); conversely, response deprivation for math as the contingent response produced increases in reading. In all three studies, response restriction was the key factor in determining whether access to the contingent response would be reinforcing.

Iwata and Michael (1994) concluded that the collective results of Konarski and colleagues’ studies illustrate each of three predictions based on the response-deprivation hypothesis (assume the ratio of baseline rates of doing homework to watching TV is 1:2 in the following examples):

• Reinforcement of a low-rate target behavior when access to a high-rate contingent behavior is restricted below baseline levels (e.g., 30 minutes of homework gets access to 30 minutes of TV).

• Nonreinforcement of a low-rate behavior when access to a high-rate contingent behavior is not restricted below baseline levels (e.g., 30 minutes of homework gets access to 90 minutes of TV).

• Reinforcement of a high-rate target behavior when access to the low-rate behavior is restricted below baseline levels (e.g., 30 minutes of TV yields 5 minutes of homework).
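All three predictions reduce to one inequality: a schedule is response-depriving, and hence predicted to be reinforcing, when the scheduled ratio of contingent-behavior access to the instrumental requirement is smaller than the baseline ratio of the two behaviors. A minimal sketch, using the 1:2 homework-to-TV baseline from the text (the function name is ours, not the literature’s):

```python
def is_response_depriving(instr_req, contin_access, instr_base, contin_base):
    """True if the schedule restricts the contingent behavior below its
    baseline proportion: contin_access/instr_req < contin_base/instr_base."""
    return contin_access / instr_req < contin_base / instr_base


# Baseline: 30 min homework for every 60 min TV (the 1:2 ratio in the text).
# Prediction 1 -- 30 min homework buys 30 min TV: TV restricted, reinforcing.
p1 = is_response_depriving(instr_req=30, contin_access=30, instr_base=30, contin_base=60)
# Prediction 2 -- 30 min homework buys 90 min TV: no restriction, not reinforcing.
p2 = is_response_depriving(instr_req=30, contin_access=90, instr_base=30, contin_base=60)
# Prediction 3 -- 30 min TV buys 5 min homework: homework restricted, reinforcing.
p3 = is_response_depriving(instr_req=30, contin_access=5, instr_base=60, contin_base=30)
```

Note that in the third case the roles reverse: homework becomes the contingent behavior, so its baseline level supplies the right-hand side of the inequality.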

Although recognizing that practitioners seldom design reinforcement programs to increase the rate of behaviors such as TV watching that already occur at high rates, Iwata and Michael (1994) noted that:

There are a number of instances in which one may wish to produce highly accelerated performance (e.g., as in superlative academic or athletic performance that is good to begin with). In such cases, one need not find another activity that occurs at a higher rate to serve as reinforcement if one could arrange a suitable deprivation schedule with an activity that occurs at a relatively low rate. (p. 186)

As with all other descriptive categories of reinforcers, there is no a priori list that reveals what activities will or will not function as reinforcers. An activity that serves as effective reinforcement for one learner might have quite another effect on the behavior of another learner. For example, in Konarski, Crowell, and colleagues’ (1982) study, access to math functioned as reinforcement for doing more reading for three students, whereas getting to read was the reinforcer for completing math problems for a fourth student. Many years ago, a classic cartoon brought home this crucial point very well. The cartoon showed two students dutifully cleaning the chalkboard and erasers after school. One student said to the other, “You’re cleaning erasers for punishment!? I get to clean erasers as a reward for completing my homework.”

Social Reinforcers

Physical contact (e.g., hugs, pats on the back), proximity (e.g., approaching, standing, or sitting near a person), attention, and praise are examples of events that often serve as social reinforcers. Adult attention is one of the most powerful and generally effective forms of reinforcement for children. The nearly universal effects of contingent social attention as reinforcement have led some behavior analysts to speculate that some aspects of social attention may entail unconditioned reinforcement (e.g., Gewirtz & Pelaez-Nogueras, 2000; Vollmer & Hackenberg, 2001).

The original experimental demonstrations and discovery of the power of adults’ social attention as reinforcement for children’s behavior took place in a series of four studies designed by Montrose Wolf and carried out by the preschool teachers at the Institute of Child Development at the University of Washington in the early 1960s (Allen, Hart, Buell, Harris, & Wolf, 1964; Harris, Johnston, Kelly, & Wolf, 1964; Hart, Allen, Buell, Harris, & Wolf, 1964; Johnston, Kelly, Harris, & Wolf, 1966). Describing those early studies, Risley (2005) wrote:

We had never seen such power! The speed and magnitude of the effects on children’s behavior in the real world of simple adjustments of something so ubiquitous as adult attention was astounding. Forty years later, social reinforcement (positive attention, praise, “catching them being good”) has become the core of most American advice and training for parents and teachers—making this arguably the most influential discovery of modern psychology. (p. 280)

8 The first volume of the Journal of Applied Behavior Analysis (1968) is a treasure trove of classic studies in which simple and elegant experimental designs revealed the powerful effects of operant conditioning and contingency management. We strongly encourage any serious student of applied behavior analysis to read it from cover to cover.

Because of the profound importance of this long-known but underused phenomenon, we describe a second study showing the effects of contingent attention as reinforcement for children’s behavior. The first volume of the Journal of Applied Behavior Analysis included no fewer than seven studies building on and extending Wolf and colleagues’ pioneering research on social reinforcement.8 R. Vance Hall and colleagues conducted two of those studies. Like the Hall, Lund, and Jackson (1968) study, from which we selected the example of a teacher’s use of positive reinforcement with Robbie that introduced this chapter, the three experiments reported by Hall, Panyan, Rabon, and Broden (1968) continue to serve as powerful demonstrations of the effects of teacher attention as social reinforcement.

A first-year teacher whose class of 30 sixth-graders exhibited such high rates of disruptive and off-task behaviors that the school principal described the class as “completely out of control” participated in one of the experiments. Throughout the study, Hall, Panyan, and colleagues (1968) measured teacher attention and student behavior during a continuous 30-minute observation period in the first hour of the school day. The researchers used a 10-second partial-interval observation and recording procedure to measure study behavior (e.g., writing the assignment, looking in the book, answering the teacher’s question) and nonstudy behavior (e.g., talking out, being out of seat, looking out the window, fighting or poking a classmate). The observers also recorded the occurrence of teacher attention in each interval. Each instance of teacher verbal attention, defined as a comment directed to a student or group of students, was recorded with a “+” if it followed appropriate study behavior, and with a “−” if it followed nonstudy behavior.
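Under a partial-interval rule like this one, an interval is scored if the behavior occurs at any point within it, and the session measure is the percentage of scored intervals. A minimal sketch of that calculation (the data below are hypothetical, not the study’s):

```python
def percent_intervals(scored):
    """Percentage of observation intervals in which the behavior was scored.
    scored: one boolean per 10-s interval, True if the behavior occurred
    at any point during that interval (the partial-interval rule)."""
    return 100.0 * sum(scored) / len(scored)


# A 30-minute observation split into 10-s intervals yields 180 intervals.
session = [True] * 80 + [False] * 100      # hypothetical interval records
study_level = percent_intervals(session)   # roughly the 44% baseline range
```

Because any occurrence within an interval scores the whole interval, partial-interval recording tends to overestimate the true duration of behavior, which is why interval size is kept short (here, 10 seconds).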

During baseline the class had a mean percentage of intervals of study behavior of 44%, and the teacher made an average of 1.4 comments following study behavior per session (see Figure 8). “Almost without exception those [comments] that followed study behavior were approving and those that followed nonstudy behavior were in the form of a verbal reprimand” (Hall, Panyan et al., 1968, p. 316). The level of study behavior by the class was 90% on one day when the helping teacher presented a demonstration lesson (see data point marked by the solid arrow). On three occasions during baseline (data points marked by open arrows), the principal met with the teacher to discuss his organizational procedures in an effort to improve the students’ behavior. These counseling sessions resulted in the teacher writing all assignments on the board (after the first meeting) and changing the seating chart (after the third meeting). Those changes had no apparent effect on the students’ behavior.

Figure 8 A record of class study behavior and teacher attention for study behavior during reading period in a sixth-grade classroom. Baseline = before experimental procedures; Reinforcement 1 = increased teacher attention for study; Reversal = removal of teacher attention for study; Reinforcement 2 = return to increased teacher attention for study; Post = follow-up checks occurred up to 20 weeks after termination of the experimental procedures. From “Instructing Beginning Teachers in Reinforcement Procedures Which Improve Classroom Control” by R. V. Hall, M. Panyan, D. Rabon, and M. Broden, 1968, Journal of Applied Behavior Analysis, 1, p. 317. Copyright by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Prior to the first day of the reinforcement condition, the teacher was shown baseline data on the class study behavior and the frequency of teacher attention following study behavior. The teacher was instructed to increase the frequency of positive comments to students when they were engaged in study behavior. After each session during this condition the teacher was shown data on the level of class study behavior and the frequency of his comments that followed study behavior. During the first reinforcement phase, teacher comments following study behavior increased to a mean frequency of 14.6, and the mean level of study behavior was 72%. The teacher, principal, and data collectors reported that the class was under better control and that noise had decreased significantly.

During a brief return of baseline conditions, the teacher provided “almost no reinforcement for study behavior,” and a sharp downward trend in class study behavior was observed. The teacher, principal, and data collectors all reported that disruptive behavior and high noise levels had returned. The reinforcement conditions were then reinstated, which resulted in a mean frequency of 14 teacher comments following study behavior, and a mean level of 76% of intervals of study behavior.

Identifying Potential Reinforcers

In the laboratory, we had learned to use a simple test: Place a candy in the palm of our hand, show it to the child, close our fist fairly tightly around the candy, and see if the child will try to pull away our fingers to get at the candy. If he or she will do that, even against increasingly tightly held fingers, the candy is obviously a reinforcer.

—Murray Sidman (2000, p. 18)

The success of many behavior change programs requires an effective reinforcer that the practitioner or researcher can control. Fortunately, identifying effective and accessible reinforcers for most learners is relatively easy. Sidman (2000) described a quick and simple method for determining whether candy would likely function as a reinforcer. However, not every stimulus, event, or activity that might function as a reinforcer can be held in the palm of the hand.

Identifying robust and reliable reinforcers for many learners with severe and multiple disabilities poses a major challenge. Although many common events serve as effective reinforcers for most people (e.g., praise, music, free time, tokens), these stimuli may not serve as reinforcers for all learners. Time, energy, and resources would be lost if planned interventions were to fail because a practitioner used a presumed, instead of an actual, reinforcer.

Also, reinforcer preferences shift, and the transitory nature of preference has been reported repeatedly in the literature (Carr, Nicholson, & Higbee, 2000; DeLeon et al., 2001; Kennedy & Haring, 1993; Logan & Gast, 2001; Ortiz & Carr, 2000; Roane, Vollmer, Ringdahl, & Marcus, 1998). Preferences may change with the person’s age, interest level, time of day, social interactions with peers, and the presence of certain establishing operations (EOs) (Gottschalk, Libby, & Graff, 2000). An assessment a teacher conducts in September to determine preferences may have to be repeated a month later (or sooner). Likewise, a therapist who asks a client what is reinforcing during a morning session may find that the same stimulus is not stated as a preferred item in an afternoon session.

After reviewing 13 published studies that evaluated preferences and reinforcers for people with profound multiple disabilities, Logan and Gast (2001) concluded that preferred stimuli do not always function as reinforcers, and that stimuli preferred at one point in time may not be preferred later. Additionally, people with severe-to-profound developmental disabilities may engage in activities for such a limited time that it is difficult to clearly determine whether a stimulus change is a reinforcer.

To meet the challenge of identifying effective reinforcers, researchers and practitioners have developed a variety of procedures that fall under the twin headings of stimulus preference assessment and reinforcer assessment. Stimulus preference assessment and reinforcer assessment are often conducted in tandem, as described by Piazza, Fisher, Hagopian, Bowman, and Toole (1996):

During preference assessments, a relatively large number of stimuli are evaluated to identify preferred stimuli. The reinforcing effects of a small subset of stimuli (i.e., the highly preferred stimuli) are then evaluated during reinforcer assessment. Although the preference assessment is an efficient procedure for identifying potential reinforcers from a large number of stimuli, it does not evaluate the reinforcing effects of the stimuli. (pp. 1–2)

Stimulus preference assessment identifies stimuli that are likely to serve as reinforcers, and reinforcer assessment puts the potential reinforcers to a direct test by presenting them contingent on occurrences of a behavior and measuring any effects on response rate. In this section we describe a variety of techniques developed by researchers and practitioners for conducting stimulus preference assessments and reinforcer assessments (see Figure 9). Together these methods form a continuum of approaches ranging from simple and quick to more complex and time-consuming.

Stimulus Preference Assessment

Stimulus preference assessment refers to a variety of procedures used to determine (a) the stimuli that the person prefers, (b) the relative preference values of those stimuli (high preference versus low preference), and (c) the conditions under which those preference values change when task demands, deprivation states, or schedules of reinforcement are modified. Generally speaking, stimulus preference assessment is conducted using a two-step process: (1) a large pool of stimuli that might be used as reinforcers is gathered, and (2) those stimuli are presented to the target person systematically to identify preferences. It is essential for practitioners to narrow the field of possible stimuli to those that have good odds of functioning as reinforcers.

Figure 9 Stimulus preference assessment and reinforcer assessment methods for identifying potential reinforcers. Stimulus preference assessment methods: Ask (person, significant others, pre-task choice), Free-Operant (contrived observation, naturalistic observation), and Trial-Based (single stimulus, paired stimuli, multiple stimuli). Reinforcer assessment methods: concurrent schedules, multiple schedules, and progressive-ratio schedules.

In more specific terms, stimulus preference assessments can be conducted using three basic methods: asking the person (or his or her significant others) to identify preferred stimuli; observing the person interacting or engaging with various stimuli in a free operant situation; and measuring the person’s responses to trial-based tests of paired or multiply presented stimuli. In choosing which method to use, practitioners must balance two competing perspectives: (a) gaining the maximum amount of preference assessment data within the least amount of time, but without false positives (i.e., believing a stimulus is preferred when it is not), versus (b) conducting a more time- and labor-intensive assessment that will delay intervention, but may yield more conclusive results.
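For the trial-based method mentioned above, a common summary statistic is the percentage of presentations on which each stimulus was selected, which yields a preference ranking. A minimal sketch of that tally; the function name, stimuli, and choice data are invented for illustration:

```python
from collections import Counter


def preference_ranking(trials):
    """Rank stimuli by percentage of presentations on which each was chosen.
    trials: list of (offered_stimuli, chosen_stimulus) tuples, one per trial."""
    chosen = Counter(choice for _, choice in trials)
    offered = Counter()
    for stimuli, _ in trials:
        offered.update(stimuli)
    percentages = {s: 100.0 * chosen[s] / offered[s] for s in offered}
    return sorted(percentages.items(), key=lambda kv: kv[1], reverse=True)


# Three hypothetical paired presentations.
trials = [(("juice", "toy"), "juice"),
          (("juice", "music"), "juice"),
          (("toy", "music"), "toy")]
ranking = preference_ranking(trials)
```

Dividing by the number of times each stimulus was offered, rather than by the total number of trials, keeps the ranking fair when stimuli appear in different numbers of pairs.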

Asking about Stimulus Preferences

A person’s preference for various stimuli might be determined by merely asking what she likes. Asking can greatly reduce the time needed for stimulus preference assessment, and it often yields information that can be integrated into an intervention program. Several variations of asking exist: asking the target person, asking significant others in the person’s life, or offering a pretask choice assessment.

Asking the Target Person. A straightforward method for determining stimulus preference is to ask the target person what he likes. Typical variations include asking open-ended questions, providing the person with a list of choices, or asking him to rank-order a list of choices.

• Open-ended questions. Depending on the learner’s language abilities, an open-ended assessment of stimulus preference can be done orally or in writing. The person may be asked to name preferences among general categories of reinforcers—for example, What do you like to do in your free time? What are your favorite foods and drinks? Are there any types of music or performers whose music you like? An open-ended assessment can be accomplished simply by asking the learner to list as many favorite activities or items as possible. She should list not only everyday favorite things and activities, but also special items and activities. A written version can be as simple as a sheet with numbered lines on which family members identify potential rewards they would like to earn by completing tasks on contingency contracts.

• Choice format. This format could include asking questions such as the following: “Which would you do a lot of hard work to get? Would you rather get things to eat, like chips, cookies, popcorn, or get to do things, like art projects, play computer games, or go to the library?” (Northup, George, Jones, Broussard, & Vollmer, 1996, p. 204)

• Rank-ordering. The learner can be given a list of items or stimuli and instructed to rank-order them from most to least preferred.

For learners with limited language skills, pictures of items, icons, or, preferably, the actual stimuli can be presented. For example, a teacher, while pointing to an icon, might ask a student, “Do you like to drink juice, use the computer, ride the bus, or watch TV?” Students simply nod yes or no.

Surveys have been developed to assess students’ preferences. For example, elementary school teachers might use the Child Reinforcement Survey, which includes 36 rewards in four categories: edible items (e.g., fruit, popcorn), tangible items (e.g., stickers), activities (e.g., art projects, computer games), and social attention (e.g., a teacher or friend saying, “I like that”) (Fantuzzo, Rohrbeck, Hightower, & Work, 1991). Other surveys are the School Reinforcement Survey Schedule for students in grades 4 through 12 (Holmes, Cautela, Simpson, Motes, & Gold, 1998) and the Reinforcer Assessment for Individuals with Severe Disabilities (Fisher, Piazza, Bowman, & Amari, 1996).

Although asking for personal preferences is relatively uncomplicated, the procedure is not foolproof with respect to confirming that a preferred choice will later serve as a reinforcer. “Poor correspondence between verbal self-reports and subsequent behavior has been long noted and often demonstrated” (Northup, 2000, p. 335). Although a child might identify watching cartoons as a preferred event, watching cartoons may function as a reinforcer only when the child is at home on Saturday mornings, but not at Grandma’s house on Sunday night.

Further, surveys may not differentiate accurately between what children claim to be high-preference and low-preference items for reinforcers. Northup (2000) found that preferences of children with attention-deficit/hyperactivity disorder (ADHD) did not rise beyond chance levels when survey results were later compared to reinforcer functions. “The relatively high number of false positives and low number of false negatives again suggest that surveys may more accurately identify stimuli that are not reinforcers than those that are” (p. 337). Merely asking children their preferences once might lead to false positives (i.e., children may choose an event or stimulus as a reinforcer, but it may not be reinforcing).

Asking Significant Others. A pool of potential reinforcers can be obtained by asking parents, siblings, friends, or caregivers to identify the activities, items, foods, hobbies, or toys that they believe the learner prefers. For example, the Reinforcer Assessment for Individuals with Severe Disabilities (RAISD) is an interview protocol that asks caregivers to identify preferred stimuli across visual, auditory, olfactory, edible, tactile, and social domains (Fisher et al., 1996). Significant others then rank-order the selected preferences based on likely high- versus low-preference items. Finally, significant others are asked to identify the conditions under which they predict that specific items might function as reinforcers (e.g., cookies with milk versus just cookies alone). Again, although stimuli that are identified as highly preferred by significant others are not always effective as reinforcers, they often are.

Offering a Pretask Choice. In this method the practitioner asks the participant to choose what he wants to earn for doing a task. The participant then chooses one item from two or three options presented (Piazza et al., 1996). All of the stimuli presented as pretask choices will have been identified as preferred stimuli by other assessment procedures. For instance, a teacher might make the following statement: “Robyn, when you finish your math problems, you may have 10 minutes to play Battleship with Martin, read quietly, or help Ms. Obutu prepare the social studies poster. Which activity do you want to work for?” A learner’s choice of a consequence will not necessarily be a more effective reinforcer than one selected by the researcher or practitioner (Smith, Iwata, & Shore, 1995).

Free Operant Observation

The activities that a person engages in most often when able to choose freely from among behaviors will often serve as effective reinforcers when made contingent on engaging in low-probability behaviors. Observing and recording what activities the target person engages in when she can choose during a period of unrestricted access to numerous activities is called free operant observation. A total duration measure of the time the person engages with each stimulus item or activity is recorded. The longer the person engages with an item, the stronger the inference that the item is preferred.

Procedurally, the person has unconstrained and simultaneous access to a predetermined set of items or activities or to the materials and activities that are naturally available in the environment. There are no response requirements, and all stimulus items are available and within the person’s sight and reach. An item is never removed after engagement or selection. According to Ortiz and Carr (2000), free operant responding is less likely to produce aberrant behavior that might otherwise be observed if a stimulus is removed. Free operant observations can be contrived or conducted in naturalistic settings.

Contrived Free Operant Observation. Practitioners use contrived observation to determine whether, when, how, and the extent to which the person engages with each of a predetermined set of activities and materials. The observation is contrived because the researcher or practitioner “salts” the environment with a variety of items that may be of interest to the learner.

Free operant assessment presupposes that the person has had sufficient time to move about and explore the environment and has had the chance to experience each of the stimuli, materials, or activities. Just prior to the free operant observation period, the learner is provided brief noncontingent exposure to each item. All of the items are then placed within view and easy reach of the learner, who then has the opportunity to sample and choose among them freely. Observers record the total duration of time that the learner engages with each stimulus item or activity.

Naturalistic Free Operant Observation. Naturalistic observations of free operant responding are conducted in the learner’s everyday environment (e.g., playground, classroom, home). As unobtrusively as possible, the observer notes how the learner allocates his time and records the number of minutes the learner devotes to each activity. For instance, Figure 10 shows how a teenager, Mike, distributed his time during 2 hours of free time each day after school. Mike’s parents collected these data by keeping a chart of the total number of minutes their son was engaged in each activity. The summary chart for the week shows that Mike played computer video games, watched television, and talked on the phone to his friends every day. On two different days Mike spent 10 minutes reading a library book, and he played with a new construction toy for a brief time on Wednesday. Two activities, watching television and playing video games, occurred the most often and for the longest duration. If Mike’s parents wanted to apply the Premack principle introduced earlier in this chapter to increase the amount of time he spends reading for pleasure or playing with the construction toy (i.e., low-probability behaviors), they might make watching television or playing video games (i.e., high-probability behaviors) contingent on a certain amount of time spent leisure reading or playing with the construction toy.

Positive Reinforcement

297

Figure 10 Number of minutes Mike spent engaged in activities during 2 hours of free time after school.

Activity                      Mon   Tue   Wed   Thu   Fri   Total
Leisure reading                —     10    —     10    —      20
Watch TV                       35    50    60    30    30    205
Phone with friends             15    15    10    20    10     70
Play video games               70    45    40    60    80    295
Play with construction toy     —     —     10    —     —      10
Minutes observed              120   120   120   120   120    600
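The free operant inference behind Figure 10 (rank activities by total engagement duration) can be sketched in a few lines. This is an illustrative program, not part of the original text; the data follow Mike’s chart:

```python
# Minutes Mike engaged in each activity (Mon-Fri), from Figure 10.
minutes = {
    "Leisure reading": [0, 10, 0, 10, 0],
    "Watch TV": [35, 50, 60, 30, 30],
    "Phone with friends": [15, 15, 10, 20, 10],
    "Play video games": [70, 45, 40, 60, 80],
    "Play with construction toy": [0, 0, 10, 0, 0],
}

# Rank activities by total duration: the longer the engagement,
# the stronger the inference of preference (the free operant assumption).
totals = {activity: sum(days) for activity, days in minutes.items()}
ranking = sorted(totals, key=totals.get, reverse=True)

print(ranking[0], totals[ranking[0]])    # highest-probability activity
print(ranking[-1], totals[ranking[-1]])  # lowest-probability activity
```

A Premack-style intervention would make the top-ranked activities contingent on time spent in the bottom-ranked ones.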

Trial-Based Methods

In trial-based methods of stimulus preference assessment, stimuli are presented to the learner in a series of trials and the learner’s responses to the stimuli are measured as an index of preference. One or more of three measures of the learner’s behavior are recorded in trial-based stimulus preference assessment: approach, contact (DeLeon & Iwata, 1996), and engagement with the stimulus (DeLeon, Iwata, Conners, & Wallace, 1999; Hagopian, Rush, Lewin, & Long, 2001; Roane et al., 1998). Approach responses typically include any detectable movement by the person toward the stimulus (e.g., eye gaze, head turn, body lean, hand reach); a contact is tallied each time the person touches or holds the stimulus; and engagement is a measure of the total time or percentage of observed intervals in which the person interacts with the stimulus (e.g., in which the person held a massager against her leg). An assumption is made that the more frequently the person approaches, touches or holds, or engages with a stimulus, the more likely it is that the stimulus is preferred. As DeLeon and colleagues (1999) stated, “duration of item contact is a valid index of reinforcer value” (p. 114).

Preferred stimuli are sometimes labeled as high-preference (HP), medium-preference (MP), or low-preference (LP) stimuli based on predetermined criteria (e.g., stimuli chosen 75% or more of the time are HP) (Carr, Nicolson, & Higbee, 2000; Northup, 2000; Pace, Ivancic, Edwards, Iwata, & Page, 1985; Piazza et al., 1996). An implicit, but testable, assumption is that a highly preferred stimulus will serve as a reinforcer. Although this assumption does not always hold (Higbee, Carr, & Harrison, 2000), it has proven to be an efficient assumption with which to begin.

The many variations of trial-based stimulus preference assessment can be grouped by presentation method as single stimulus (successive choice), paired stimuli (forced choice), and multiple stimuli.

Single Stimulus. A single-stimulus presentation method, also called a “successive choice” method, represents the most basic assessment available for determining preference. Simply stated, a stimulus is presented and the person’s reaction to it is noted. Presenting one stimulus at a time “may be well suited for individuals who have difficulty selecting among two or more stimuli” (Hagopian et al., 2001, p. 477).

Target stimuli across all sensory systems (i.e., visual, auditory, vestibular, tactile, olfactory, gustatory, and multisensory) are presented one at a time in random order, and the person’s reaction to each stimulus is recorded (Logan, Jacobs, et al., 2001; Pace et al., 1985). Approach or rejection responses are recorded in terms of occurrence (yes or no), frequency (e.g., number of touches per minute), or duration (i.e., time spent engaged with an item). After recording, the next item in the sequence is presented. For example, a mirror might be presented to determine the duration of time the person gazes into it, touches it, or rejects it (i.e., pushes it away). Each item should be presented several times, and the order of presentation should be varied.

Paired Stimuli. Each trial in the paired-stimuli presentation method, also sometimes called the “forced choice” method, consists of the simultaneous presentation of two stimuli. The observer records which of the two stimuli the learner chooses. During the course of the assessment, each stimulus is matched randomly with all other stimuli in the set (Fisher et al., 1992). Data from a paired-stimuli assessment show how many times each stimulus is chosen. The stimuli are then rank-ordered in terms of high, medium, or low preference. Piazza and colleagues (1996) used 66 to 120 paired-stimuli trials to determine high, middle, and low preferences. Pace and colleagues (1985) found that paired-stimuli presentations yielded more accurate distinctions between high- and low-preference items than did single-stimulus presentations. Paired-stimuli presentations sometimes outperform single-stimulus formats with respect to ultimately identifying reinforcers (Paclawskyj & Vollmer, 1995).

Because every possible pair of stimuli must be presented, paired-stimuli assessment may take more time than the simultaneous presentation of an array of multiple stimuli (described in the next section). However, DeLeon and Iwata (1996) argued that ultimately the paired-stimuli method may be more time efficient because “the more consistent results produced by the PS method may indicate that stable preferences can be determined in fewer, or even single, sessions” (p. 520).
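The trial structure of a paired-stimuli assessment can be sketched as follows. This is a hypothetical illustration (the item names, round count, and helper names are made up, not from the original studies):

```python
import itertools
import random

def paired_stimuli_trials(items, rounds=3):
    """Yield every possible pair of items, in random order and with
    random left/right placement, for the requested number of rounds."""
    for _ in range(rounds):
        pairs = list(itertools.combinations(items, 2))
        random.shuffle(pairs)
        for left, right in pairs:
            yield (left, right) if random.random() < 0.5 else (right, left)

def rank_preferences(choices):
    """choices: the item the learner selected on each trial.
    Returns items ranked from most to least frequently chosen."""
    counts = {}
    for item in choices:
        counts[item] = counts.get(item, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)

items = ["mirror", "massager", "juice", "music"]  # hypothetical stimuli
trials = list(paired_stimuli_trials(items))
print(len(trials))  # 6 possible pairs x 3 rounds = 18 forced-choice trials
```

With n items the method requires n(n − 1)/2 pairs per round, which is why it can take longer than presenting one multiple-stimulus array.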

Multiple Stimuli. The multiple-stimuli presentation method is an extension of the paired-stimuli procedure developed by Fisher and colleagues (1992). The person chooses a preferred stimulus from an array of three or more stimuli (Windsor, Piche, & Locke, 1994). Presenting multiple stimuli together reduces assessment time. For example, instead of presenting a series of trials consisting of all possible pairs of stimuli from a group of six stimuli and continuing until all pairs have been presented, all six stimuli are presented simultaneously.

The two major variations of the multiple-stimuli preference assessment are multiple stimuli with replacement and multiple stimuli without replacement. The difference between the two is which stimuli are removed or replaced after the person indicates a preference among the displayed items in preparation for the next trial. In the multiple stimuli with replacement procedure, the item chosen by the learner remains in the array and items that were not selected are replaced with new items. In the multiple stimuli without replacement procedure, the chosen item is removed from the array, the order or placement of the remaining items is rearranged, and the next trial begins with a reduced number of items in the array.

In either case, each trial begins by asking the person, “Which one do you want the most?” (Higbee et al., 2000) or “Choose one” (Ciccone, Graff, & Ahearn, 2005), and trials continue until all items from the original array, or the gradually reducing array, have been selected. The entire sequence is usually repeated several times, although a single round of trials may identify stimuli that function as reinforcers (Carr et al., 2000).

The stimuli presented in each trial might be the tangible objects themselves, pictures of the items, or verbal descriptions. Higbee, Carr, and Harrison (1999) provided a variation of the multiple-stimuli procedure that included stimulus preference selection based on a tangible object versus a picture of the object. The tangible objects produced greater variation and distribution of preferences than the pictures did. Cohen-Almeida, Graff, and Ahearn (2000) found that the tangible object assessment was about as effective as a verbal preference assessment, but the clients completed the verbal preference assessment in less time.

DeLeon and Iwata (1996) used an adaptation of the multiple-stimuli and paired-stimuli presentations they described as a brief stimulus assessment to reduce the time needed to determine stimulus preference. Basically, in the brief stimulus assessment, once a particular stimulus item is chosen, that item is not returned to the array. Subsequent trials present a reduced number of items from which to choose (Carr et al., 2000; DeLeon et al., 2001; Roane et al., 1998). DeLeon and Iwata (1996) found that multiple stimuli without replacement identified preferred items in approximately half the time that the multiple stimuli with replacement procedure did. According to Higbee and colleagues (2000), “With a brief stimulus preference procedure, practitioners have a method for reinforcer identification that is both efficient and accurate” (pp. 72–73).
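The without-replacement variation reduces to a simple loop: present the remaining array, record the selection, remove it, and repeat. A minimal sketch, with hypothetical names (`choose` stands in for whatever observation procedure records the learner’s selection):

```python
def mswo_ranking(items, choose):
    """Multiple stimuli without replacement, sketched: the chosen item
    is NOT returned to the array, so each trial presents a reduced
    array. The order of selection is taken as the preference ranking."""
    remaining = list(items)
    ranking = []
    while remaining:
        chosen = choose(remaining)   # e.g., observe the learner's pick
        ranking.append(chosen)
        remaining.remove(chosen)     # item removed before next trial
    return ranking

# Hypothetical learner who always picks the alphabetically first item.
demo = mswo_ranking(["juice", "music", "blocks"], choose=min)
print(demo)  # ['blocks', 'juice', 'music']
```

Because the array shrinks by one item per trial, a full round over n items takes only n trials, which is the source of the method’s time savings.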

Guidelines for Selecting and Using Stimulus Preference Assessments

Practitioners can combine assessment procedures to compare single versus paired, paired versus multiple, or free operant versus trial-based methods (Ortiz & Carr, 2000). In day-to-day practice, brief stimulus presentations using comparative approaches might facilitate reinforcer identification, thereby speeding up possible interventions using those reinforcers. In summary, the goal of stimulus preference assessments is to identify stimuli that are most likely to function as reinforcers. Each method for assessing preference has advantages and limitations with respect to identifying preferences (Roane et al., 1998). Practitioners may find the following guidelines helpful when conducting stimulus preference assessments (DeLeon & Iwata, 1996; Gottschalk et al., 2000; Higbee et al., 2000; Ortiz & Carr, 2000; Roane et al., 1998; Roscoe, Iwata, & Kahng, 1999):

• Monitor the learner’s activities during the time period before the stimulus preference assessment session to be aware of EOs that may affect the results.

• Use stimulus preference assessment options that balance the cost-benefit of brief assessments (but possible false positives) with more prolonged assessments that may delay reinforcer identification.

• Balance using a stimulus preference method that may yield a ranking of preferred stimuli against an assessment method that occurs without rankings, but occurs more frequently, to counteract shifts in preference.

• When time is limited, conduct a brief stimulus preference assessment with fewer items in an array.

• When possible, combine data from multiple assessment methods and sources of stimulus preference (e.g., asking the learner and significant others, free operant observation, pretask choice, and trial-based methods).

Reinforcer Assessment

The only way to tell whether or not a given event is reinforcing to a given organism under given conditions is to make a direct test.

—B. F. Skinner (1953, pp. 72–73)

Highly preferred stimuli may not always function as reinforcers (Higbee et al., 2000); even the candy that a child pried out of Sidman’s hand may not have functioned as reinforcement under certain conditions. Conversely, least preferred stimuli might serve as reinforcers under some conditions (Gottschalk et al., 2000). The only way to know for sure whether a given stimulus serves as a reinforcer is to present it immediately following the occurrence of a behavior and note its effects on responding.

Reinforcer assessment refers to a variety of direct, data-based methods used to present one or more stimuli contingent on a target response and then measure the future effects on the rate of responding. Researchers and practitioners have developed reinforcer assessment methods to determine the relative effects of a given stimulus as reinforcement under different and changing conditions and to assess the comparative effectiveness of multiple stimuli as reinforcers for a given behavior under specific conditions. Reinforcer assessment is often accomplished by presenting stimuli suspected of being reinforcers contingent on responding within concurrent, multiple, or progressive-ratio reinforcement schedules.9

Concurrent Schedule Reinforcer Assessment

When two or more contingencies of reinforcement operate independently and simultaneously for two or more behaviors, a concurrent schedule of reinforcement is in effect. When used as a vehicle for reinforcer assessment, a concurrent schedule arrangement essentially pits two stimuli against each other to see which will produce the larger increase in responding when presented as a consequence for responding. If a learner allocates a greater proportion of responses to one component of the concurrent schedule over the other, the stimulus used as a contingent consequence for that component is the more effective reinforcer. For example, using a concurrent schedule in this way shows the relative effectiveness of high-preference (HP) and low-preference (LP) stimuli as reinforcers (Koehler, Iwata, Roscoe, Rolider, & O’Steen, 2005; Piazza et al., 1996).

9These and other types of schedules of reinforcement and their effects on behavior are described in the chapter entitled “Schedules of Reinforcement.”

Concurrent schedules may also be used to determine differences between relative and absolute reinforcement effects of stimuli. That is, will an LP stimulus now presented contingently in the absence of the HP stimulus serve as a reinforcer? Roscoe and colleagues (1999) used concurrent schedules to compare the effects of HP and LP stimuli as reinforcers for eight adults with developmental disabilities. Following the preference assessments, a concurrent schedule of reinforcement was established using the high-preference and low-preference stimuli. The target response was pressing either of two microswitch panels. Each panel was a different color. Pressing a panel would illuminate a small light in the center of the panel. A training condition took place prior to baseline to establish panel pressing in the subjects’ repertoires and to expose them to the consequences of responding. During baseline, pressing either panel resulted in no programmed consequences. During the reinforcement phase, an HP stimulus was placed on a plate behind one of the panels and an LP stimulus was placed on a plate behind the other panel. All responses to either panel resulted in the participant immediately receiving the item on the plate behind the respective panel (i.e., an FR 1 schedule of reinforcement).

Under the concurrent schedule of reinforcement that enabled participants to choose reinforcers on the same FR 1 schedule, the majority of participants allocated most of their responding to the panel that produced the HP stimulus as reinforcement (e.g., see results for Sean, Peter, and Matt in Figure 11). However, these same participants, when later presented with the opportunity to obtain LP stimuli as reinforcers on a single-schedule contingency (i.e., only one panel to push), showed increased levels of responding over baseline similar to those obtained with the HP stimuli in the concurrent schedule. The study by Roscoe and colleagues (1999) demonstrated how concurrent schedules may be used to identify the relative effects of stimuli as reinforcers. The study also showed that the potential effects of a stimulus as a reinforcer may be masked or overshadowed when that stimulus is pitted against another stimulus on a concurrent schedule. In such cases, a potentially reinforcing stimulus might be abandoned prematurely.
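The response-allocation comparison in a concurrent-schedule assessment reduces to a proportion. A minimal sketch with made-up response counts (the panel labels and numbers are hypothetical, not data from Roscoe et al.):

```python
def allocation(responses):
    """Proportion of total responding allocated to each alternative
    under a concurrent schedule. A greater share for one panel
    suggests the stimulus behind it is the more effective reinforcer."""
    total = sum(responses.values())
    return {panel: count / total for panel, count in responses.items()}

# Hypothetical session: panel A delivers the HP stimulus, panel B the LP.
shares = allocation({"panel_A (HP)": 84, "panel_B (LP)": 16})
print(shares)
```

Note that a small share for the LP panel under the concurrent arrangement does not rule out the LP stimulus as a reinforcer; as the study showed, it must also be tested alone on a single schedule.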


Figure 11 Responses per minute during concurrent-schedule and single-schedule baseline and reinforcement conditions for four adults with mental retardation (panels: Sean, Peter, Matt, and Mike; HP and LP data paths across BL and Sr+ phases).
From “Relative versus Absolute Reinforcement Effects: Implications for Preference Assessments” by E. M. Roscoe, B. A. Iwata, and S. Kahng, 1999, Journal of Applied Behavior Analysis, 32, p. 489. Copyright 1999 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.

Multiple Schedule Reinforcer Assessment

A multiple schedule of reinforcement consists of two or more component schedules of reinforcement for a single response, with only one component schedule in effect at any given time. A discriminative stimulus (SD) signals the presence of each component schedule, and that stimulus is present as long as the schedule is in effect. One way that a multiple schedule could be used for reinforcer assessment would be to present the same stimulus event contingent (i.e., response dependent) on each occurrence of the target behavior in one component of the multiple schedule and on a fixed-time schedule (i.e., response independent) in the other component. For example, if a practitioner wanted to use a multiple schedule to assess whether social attention functioned as a reinforcer, she would provide social attention contingent on occurrences of cooperative play when one component of the multiple schedule is in effect; during the other component she would present the same amount and kind of social attention on a fixed-time schedule, independent of cooperative play (i.e., noncontingent reinforcement). The teacher could apply the response-dependent schedule during the morning play period and the response-independent schedule during the afternoon play period. If social attention functioned as reinforcement, cooperative play would likely increase over its baseline rate in the morning periods; because attention bears no relationship to cooperative play in the afternoon period, it would likely have no effect there. This arrangement constitutes a multiple schedule because there is one class of behavior (i.e., cooperative play), a discriminative stimulus for each contingency in effect (i.e., morning and afternoon play periods), and different conditions for reinforcement (i.e., response dependent and response independent).

Progressive-Ratio Schedule Reinforcer Assessment

Stimulus preference assessments with low response requirements (e.g., FR 1) may not predict the effectiveness of the stimulus as a reinforcer when it is presented with higher response requirements (e.g., on an FR 10 schedule, a student must complete 10 problems to obtain reinforcement). As DeLeon, Iwata, Goh, and Worsdell (1997) stated:

Current assessment methods may make inaccurate predictions about reinforcer efficacy when the task used in training regimens requires either more responses or more effort before the delivery of reinforcement. . . . for some classes of reinforcers, simultaneous increases in schedule requirements may magnify small differences in preferences that are undetected when requirements are low. In such cases, a stimulus preference assessment involving low response requirements (FR 1) schedules does not accurately predict the relative potency of reinforcers under increased response requirements. (pp. 440, 446)

Progressive-ratio schedules provide a framework for assessing the relative effectiveness of a stimulus as reinforcement as response requirements increase. In a progressive-ratio schedule of reinforcement the response requirements for reinforcement are increased systematically over time independent of the participant’s behavior. The practitioner gradually requires more responses per presentation of the preferred stimulus until a breaking point is reached and the response rate declines (Roane, Lerman, & Vorndran, 2001). For example, initially each response produces reinforcement (FR 1), then reinforcement is delivered after every second response (FR 2), then perhaps after every fifth, tenth, and twentieth response (FR 5, FR 10, and FR 20). At some point, a preferred stimulus may no longer function as reinforcement (Tustin, 1994).

Figure 12 Number of reinforcers earned by Elaine and Rick across sessions as fixed-ratio schedule requirements increased, for dissimilar stimulus pairs (top panels: drink vs. balloon; cookie vs. massager) and similar stimulus pairs (bottom panels: chip vs. pretzel; cookie vs. cracker).
From “Emergence of Reinforcer Preference as a Function of Schedule Requirements and Stimulus Similarity” by I. G. DeLeon, B. A. Iwata, H. Goh, and A. S. Worsdell, 1997, Journal of Applied Behavior Analysis, 30, p. 444. Copyright 1997 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.
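The break-point logic can be sketched as follows. This is a hypothetical illustration; the escalation sequence mirrors the FR 1, 2, 5, 10, 20 example above, but the function name, data, and completion criterion are illustrative, not a standard protocol:

```python
def breaking_point(ratios, responses_emitted):
    """Return the last ratio requirement the learner completed.
    ratios: escalating response requirements (e.g., FR 1, 2, 5, 10, 20).
    responses_emitted: responses the learner actually made at each step.
    The first requirement the learner fails to complete marks the
    break point; responding is assumed to decline there."""
    completed = None
    for requirement, emitted in zip(ratios, responses_emitted):
        if emitted < requirement:   # responding broke down at this step
            break
        completed = requirement
    return completed

# Hypothetical data: the learner keeps meeting the requirement through
# FR 10 but quits partway through FR 20.
print(breaking_point([1, 2, 5, 10, 20], [1, 2, 5, 10, 3]))  # 10
```

Comparing break points for two stimuli gives an index of their relative effectiveness as reinforcers under increasing response requirements.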

DeLeon and colleagues (1997) used a progressive ratio within a concurrent schedule to test the relative effectiveness of two similarly preferred stimuli (e.g., cookie and cracker) and two dissimilar stimuli (e.g., drink and balloon) as reinforcers for microswitch panel pressing by Elaine and Rick, two adults with mental retardation. One panel was blue and one was yellow. The experimenters placed two reinforcers on separate plates and put one plate behind each of the panels. Each trial (24 per session for Rick; 14 per session for Elaine) consisted of the subject pushing either one of the panels and immediately receiving the item on the plate behind that panel. During the first phase, an FR 1 schedule was used (i.e., each response produced the item on the plate). Later, the response requirement for obtaining the items was gradually increased (FR 2, FR 5, FR 10, and FR 20).

Elaine and Rick made responses that produced the two dissimilar items at roughly the same rates during the FR 1 phase (see the top two graphs in Figure 12). As response requirements for receiving the dissimilar stimuli increased, Elaine and Rick continued to allocate responding evenly between the two panels. However, when initially equivalent and similar reinforcers (food in FR 1) were compared under increasing schedule requirements, the differences in response rates on the two panels revealed clear and consistent preferences (see the bottom two graphs in Figure 12). For example, when Elaine needed to work more to receive food, she allocated the majority of her responses to the panel that produced chips rather than the one that produced pretzels. In the same way, as the number of responses required to receive reinforcement increased, Rick showed a clear preference for cookies over crackers. These results suggest that “for some classes of reinforcers, simultaneous increases in schedule requirements may magnify small differences in preference that are undetected when requirements are low” (DeLeon et al., 1997, p. 446).

Increasing response requirements within a concurrent schedule may reflect the effects of increasing response requirements on the choice between reinforcers and also reveal whether and under what conditions two reinforcers are substitutable for each other. If two reinforcers serve the same function (i.e., are made effective by the same establishing operation), an increase in the price (i.e., response requirement) of one of the reinforcers will lead to decreased consumption of that item if a substitutable reinforcer is available (Green & Freed, 1993). DeLeon and colleagues (1997) used a hypothetical person with a slight preference for Coke over Pepsi as an analogy to explain the results shown in Figure 12:

Assuming that Coke and Pepsi are both available for $1.00 per serving and that a person has only a slight preference for Coke, the individual may allocate choices rather evenly, perhaps as a function of periodic satiation for the preferred item, but with slightly more overall selections of Coke. Now assume that the cost of each is increased to $5.00 per serving. At this price, the preference for Coke is likely to be expressed. By contrast, a similar arrangement involving Coke and bus tokens may produce different results. Again, at $1.00 per item, roughly equal selection between the two options would not be surprising, assuming that the establishing operation for each dictates that both are momentarily equally valuable. However, these items serve distinctly different functions and are not substitutable; that is, the person is not free to trade one for the other and to continue to receive functionally similar reinforcement at the same rate. The person is more likely to continue choosing equally, even when the price for both reinforcers increases substantially.

The same might be said for the results obtained in the present study. When choices involved two substitutable items, such as a cookie and a cracker, concurrent increases in the cost of each may have “forced” the expression of slight preference for one of the items. However, when reinforcers that were unlikely to be substitutes, such as a cookie and a massager, were concurrently available and equally preferred, increases in cost had little effect on preference. (pp. 446–447)

Although Stimulus X and Stimulus Y may each function as reinforcers when task demands are low, or when the reinforcement schedule is dense, participants may choose only Stimulus Y when task demands increase or when the schedule becomes leaner (i.e., more responses are required per reinforcement). DeLeon and colleagues (1997) pointed out that practitioners who are alert to these relationships might be more skeptical in believing that original preferences will be sustained under changing environmental conditions, and judicious in how they plan reinforcement delivery relative to task assignment once intervention is underway. That is, it might be better to save some types of preferred stimuli for when task demands are high rather than substituting them for other equally preferred stimuli when task demands are low.

Control Procedures for Positive Reinforcement

Positive reinforcement control procedures are used to manipulate the contingent presentation of a potential reinforcer and observe any effects on the future frequency of behavior. Control, as the term is used here, requires an experimental demonstration that the presentation of a stimulus contingent on the occurrence of a target response functions as positive reinforcement. Control is demonstrated by comparing response rates in the absence and presence of a contingency, and then showing that with the absence and presence of the contingency the behavior can be turned on and off, or up and down (Baer, Wolf, & Risley, 1968). Historically, researchers and practitioners have used the reversal technique as the major control technique for positive reinforcement. Briefly, the reversal technique includes two conditions and a minimum of four phases (i.e., ABAB). In the A condition, the behavior is measured over time until it achieves stability in the absence of the reinforcement contingency. The absence of the contingency is the control condition. In the B condition, the reinforcement contingency is presented; the same target behavior continues to be measured to assess the effects of the stimulus change. The presence of the reinforcement contingency is the experimental condition. If the rate of responding increases in the presence of the contingency, the analyst then withdraws the reinforcement contingency and returns to the A and B conditions to learn whether the absence and presence of the contingency will turn the target behavior down and up.

However, using extinction as the control condition during the reversal phase presents practical and conceptual problems. First, withdrawing reinforcement may result in extinction-produced side effects (e.g., an initial increase in response rate, emotional responses, aggression) that affect the demonstration of control. Second, in some situations it may be impossible to withdraw the reinforcement contingency completely (Thompson & Iwata, 2005). For example, it is unlikely that a teacher could completely remove teacher attention during the A condition. In addition to these problems, Thompson and Iwata (2005) noted that, although

extinction has often been successful in reversing the behavioral effects of positive reinforcement, its use as a control procedure presents interpretive difficulties. Essentially, extinction does not adequately isolate the reinforcement contingency as the variable controlling the target response, because mere stimulus presentation cannot be ruled out as an equally viable explanation. (p. 261, emphasis added)

According to Thompson and Iwata (2005), “the ideal control procedure for positive reinforcement eliminates the contingent relation between the occurrence of the target response and the presentation of the stimulus while controlling for the effects of stimulus presentation alone” (p. 259). They reviewed the effectiveness of three variations of the reversal technique as control procedures for determining reinforcement: noncontingent reinforcement (NCR), differential reinforcement of other behavior (DRO), and differential reinforcement of alternative behavior (DRA).10

Noncontingent Reinforcement

Noncontingent reinforcement (NCR) is the presentation of a potential reinforcer on a fixed-time (FT) or variable-time (VT) schedule independent of the occurrence of the target behavior. The response-independent presentation of the potential reinforcer eliminates the contingent relation between the target behavior and the stimulus presentation while allowing any effects of the stimulus presentation alone to be detected. Thus, NCR meets Thompson and Iwata’s (2005) criteria for an ideal control procedure for positive reinforcement.
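The defining property of FT and VT schedules is that delivery times are computed from the clock alone, never from the learner’s responses. A minimal sketch (function names and parameter values are hypothetical):

```python
import random

def ft_schedule(interval, session_length):
    """Fixed-time (FT) delivery times, in seconds: the stimulus is
    presented every `interval` s, independent of responding."""
    return list(range(interval, session_length + 1, interval))

def vt_schedule(mean_interval, session_length):
    """Variable-time (VT) delivery times: intervals vary around a mean
    (here uniformly within +/- 50%), still response independent."""
    times, t = [], 0.0
    while True:
        t += random.uniform(0.5 * mean_interval, 1.5 * mean_interval)
        if t > session_length:
            return times
        times.append(round(t, 1))

print(ft_schedule(30, 120))  # [30, 60, 90, 120]
```

Note that neither function takes the target behavior as an input; that omission is precisely what makes the schedule noncontingent.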

The NCR reversal technique should entail a minimum of five phases (ABCBC): A is a baseline condition; B is an NCR condition, in which the potential reinforcer is presented on a fixed- or variable-time schedule independent of the target behavior; and C is a condition in which the potential reinforcer is presented contingent on the occurrence of the target behavior. The B and C conditions are then repeated to learn whether the level of responding decreases and increases as a function of the absence and presence of the response-consequence contingency. The quality, amount, and rate of reinforcement should be approximately the same during the contingent and noncontingent B and C conditions of the analysis.
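The difference between the B (NCR) and C (contingent) conditions amounts to two delivery rules for the same stimulus. The sketch below is our own illustration, not from the text; the function name and the 3-second FT value are hypothetical. In the NCR condition, delivery depends only on elapsed time; in the contingent condition, only on occurrences of the target response.

```python
# Hypothetical sketch contrasting the B (NCR) and C (contingent) conditions
# of the ABCBC design. In NCR, stimulus delivery follows a fixed-time clock
# regardless of behavior; in the contingent condition, delivery follows
# occurrences of the target response.

def delivery_times(condition, responses, ft_interval=3):
    """responses: 0/1 per second; returns the seconds at which the stimulus is delivered."""
    times = []
    for t, responded in enumerate(responses, start=1):
        if condition == "NCR" and t % ft_interval == 0:
            times.append(t)   # time-based: response-independent
        elif condition == "contingent" and responded:
            times.append(t)   # response-dependent
    return times

responses = [0, 1, 0, 0, 1, 1]                  # target responses at seconds 2, 5, and 6
print(delivery_times("NCR", responses))         # FT 3 s: delivered at 3 and 6
print(delivery_times("contingent", responses))  # delivered at 2, 5, and 6
```

Note that the NCR rule can deliver the stimulus in close proximity to a response (second 6 above), which is the accidental-reinforcement risk discussed below.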

NCR often produces persistent responding, perhaps because of accidental reinforcement that sometimes occurs with a response-independent schedule, or because similar EOs and antecedent stimulus conditions evoke the persistent responding. Whatever the cause, persistent responding is a limitation of the NCR control procedure because it makes achieving a reversal effect (reduced responding) more time-consuming than the reversal technique with extinction. Achieving the effect may require lengthy contact with the NCR schedule.

10The chapter entitled "Reversal and Alternating Treatments Designs" presents the ABAB, NCR, DRO, and DRA control techniques in the context of single-case experimental designs.

11The chapter entitled "Differential Reinforcement" describes the use of DRO and DRA as behavior change tactics for decreasing the frequency of undesirable behavior.

Differential Reinforcement of Other Behavior

A practitioner using differential reinforcement of other behavior (DRO) delivers a potential reinforcer whenever the target behavior has not occurred during a set time interval. The DRO reversal technique includes a minimum of five phases (i.e., ABCBC): A is a baseline condition; B is a reinforcement condition, in which the potential reinforcer is presented contingent on the occurrence of the target behavior; and C is the DRO control condition, in which the potential reinforcer is presented contingent on the absence of the target behavior. The analyst then repeats the B and C conditions to determine whether the level of responding decreases and increases as a function of the absence and presence of the response-consequence contingency.

The DRO schedule allows for the continued presentation of the reinforcement contingency during the reversal phases of the control procedure. In one condition, the contingency is in effect for occurrences of the target behavior. In the other condition, the contingency is in effect for the omission of the target behavior. The DRO control procedure may produce the reversal effect in less time than the NCR schedule, perhaps because of the elimination of accidental reinforcement of the target behavior.

Differential Reinforcement of Alternative Behavior

When differential reinforcement of alternative behavior (DRA) is used as a control condition, the potential reinforcer is presented contingent on occurrences of a desirable alternative to the target behavior.11 The DRA reversal technique includes a minimum of five phases (i.e., ABCBC): A is a baseline condition; B is a reinforcement condition, in which the potential reinforcer is presented contingent on the occurrence of the target behavior; and C is a condition in which the potential reinforcer is presented contingent on the occurrence of an alternative behavior (i.e., DRA). The analyst then repeats phases B and C to ascertain whether the level of responding decreases and increases as a function of the absence and presence of the response-consequence contingency.

Thompson and Iwata (2005) summarized the limitations of using DRO and DRA as control procedures to test for positive reinforcement:


[DRO and DRA] introduce a new contingency that was not present in the original experimental arrangement. As a result, reductions in the target response under a contingency reversal might be attributed to either (a) termination of the contingency between the target response and the reinforcer or (b) introduction of reinforcement for the absence of the target response or for the occurrence of a competing response. In addition, given that reinforcement is provided contingent on some characteristic of responding during the contingency reversal, it may be difficult to control for the rate of stimulus presentation across experimental and control conditions. If responding is not quickly reduced (DRO) or reallocated toward responses that produce reinforcement (DRA), the rate of reinforcement in the control condition may be low relative to the rate of reinforcement in the experimental conditions. When this occurs, the contingency-reversal strategy is functionally similar to the conventional extinction procedure. (p. 267)

Given these considerations for the reversal technique with extinction and its three variations, Thompson and Iwata (2005) concluded that NCR offers the most thorough and unconfounded demonstration of the effects of positive reinforcement.

Using Reinforcement Effectively

We offer practitioners nine guidelines for applying positive reinforcement effectively. These guidelines come from three main sources: the research literatures of the experimental analysis of behavior and applied behavior analysis, and our personal experiences.

Set an Easily Achieved Initial Criterion for Reinforcement

A common mistake in applications of reinforcement is setting the initial criterion for reinforcement too high, which prohibits the learner's behavior from contacting the contingency. To use reinforcement effectively, practitioners should establish an initial criterion so that the participant's first responses produce reinforcement, and then increase the criterion for reinforcement gradually as performance improves. Heward (1980) suggested the following method for establishing initial criteria for reinforcement based on the learner's level of responding during baseline (see Figure 13).

Figure 13 Examples of using data from a learner’s baseline performance to set an initial criterion for reinforcement.

The criterion-setting formulas are:

For increasing behaviors:
baseline average < initial criterion ≤ highest performance during baseline

For decreasing behaviors:
baseline average > initial criterion ≥ lowest performance during baseline

Examples

Target Behavior                               Goal      Baseline    Baseline    Baseline    Range for
                                                        Lowest      Highest     Average     Initial Criterion
Playing alone                                 Increase  2 min.      14 min.     6 min.      7–14 min.
Identifying letters of the alphabet           Increase  4 letters   9 letters   5 letters   6–9 letters
Number of leg exercises completed             Increase  0           22          8           9–22
Percentage of math problems correctly solved  Increase  25%         60%         34%         40–60%
Number of typing errors in one letter         Decrease  16          28          22          16–21
Number of calories consumed per day           Decrease  2,260       3,980       2,950       2,260–2,900

From "A Formula for Individualizing Initial Criteria for Reinforcement" by W. L. Heward, 1980, Exceptional Teacher, 1(9), p. 8. Copyright 1980 by the Exceptional Teacher. Used by permission.


For a behavior you wish to increase, set the initial criterion higher than the child's average baseline performance and lower than or equal to his best performance during baseline. For a behavior you want to decrease in frequency, the initial criterion for reinforcement should be set below the child's average performance during baseline and greater than or equal to his lowest (or best) baseline performance. (p. 7)
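Heward's formulas reduce to simple arithmetic on the baseline observations. The sketch below is a hypothetical helper of our own (the function name and example data are illustrative); the bounds it returns follow the two formulas above.

```python
# Sketch of Heward's (1980) criterion-setting formulas.
# Increase goal: baseline average < criterion <= highest baseline value.
# Decrease goal: lowest baseline value <= criterion < baseline average.

def initial_criterion_range(baseline, goal):
    """Return (low, high) bounds between which an initial criterion may fall."""
    average = sum(baseline) / len(baseline)
    if goal == "increase":
        return (average, max(baseline))  # criterion must exceed the average, at most the max
    if goal == "decrease":
        return (min(baseline), average)  # criterion must be below the average, at least the min
    raise ValueError("goal must be 'increase' or 'decrease'")

# Playing alone (minutes): average 6, highest 14 -> criterion in (6, 14]
print(initial_criterion_range([2, 14, 6, 5, 3], "increase"))
# Typing errors: average 22, lowest 16 -> criterion in [16, 22)
print(initial_criterion_range([16, 28, 22], "decrease"))
```

For an increase goal the lower bound is exclusive (the criterion must exceed the average), while the upper bound is inclusive; the decrease goal mirrors this.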

Use High Quality Reinforcers of Sufficient Magnitude

Reinforcers that maintain responding on simple tasks may not have the potency to produce similar levels of responding on more difficult or longer tasks. Practitioners will likely need to use a reinforcer of higher quality for behaviors that require more effort or endurance. A highly preferred stimulus chosen during preference assessments sometimes functions as a high-quality reinforcer. Neef and colleagues (1992), for example, found that behaviors that received a lower reinforcer rate but a higher quality reinforcer increased in frequency, whereas behaviors that received a higher reinforcer rate with a lower quality reinforcer decreased in frequency. Reinforcer quality is relative also to other consequences for responding currently available to the learner.

Applied behavior analysts define the magnitude (or amount) of a reinforcer as (a) the duration of time for access to the reinforcer, (b) the number of reinforcers per unit of time (i.e., reinforcer rate), or (c) the intensity of the reinforcer. Increases in reinforcer magnitude may correlate with an increased effectiveness of the behavior–reinforcer relation. However, the effects of reinforcer magnitude are not well understood because "few applied studies have examined the effects of magnitude on responding in a single-operant arrangement" (Lerman, Kelly, Vorndran, Kuhn, & LaRue, 2002, p. 30). Consideration of how much reinforcement to use should follow the maxim, "Reinforce abundantly, but don't give away the store." We suggest that the amount of reinforcement be proportional to the quality of the reinforcer and the effort required to emit the target response.

Use Varied Reinforcers to Maintain Potent Establishing Operations

Reinforcers often decrease in potency with frequent use. Presenting an overabundance of a specific reinforcer is likely to diminish the momentary effectiveness of the reinforcer due to satiation. Practitioners can minimize satiation effects by using a variety of reinforcers. If reading a specific book on sports functions as a reinforcer and the teacher relies solely on this reinforcer, ultimately reading that book may no longer produce reinforcement. Conversely, known reinforcers that are not always available may have increased effectiveness when they are reintroduced. If a teacher has demonstrated that "being first in line" is a reinforcer, but uses this reinforcer only once per week, the reinforcement effect will be greater than if "being first in line" is used frequently.

Varying reinforcers may enable less preferred stimuli to function as reinforcers. For example, Bowman et al. (1997) found that some learners responded better to a variety of less preferred stimuli than to continuous access to a single, more highly preferred stimulus. Also, using a variety of reinforcers may keep the potency of any one particular reinforcer higher. For example, Egel (1981) found that students' correct responding and on-task behavior were higher when they had access to one of three randomly selected reinforcers across trials versus a constant reinforcement condition in which one of the stimuli was presented following each successful trial. Even within a session, teachers could let students select a variety of consequences from a menu. Similarly, varying a property of a reinforcer may keep its reinforcing potency for a longer time. If comic books are used as reinforcers, having several different genres of comic books available is likely to maintain their potency.

Use Direct Rather than Indirect Reinforcement Contingencies When Possible

With a direct reinforcement contingency, emitting the target response produces direct access to the reinforcer; the contingency does not require any intervening steps. With an indirect reinforcement contingency, the response does not produce reinforcement directly; the practitioner presents the reinforcer. Some research suggests that direct reinforcement contingencies may enhance performance (Koegel & Williams, 1980; Williams, Koegel, & Egel, 1981). Thompson and Iwata (2000), for example, linked the definitions of direct and indirect contingencies to the difference between automatic reinforcement (i.e., direct) and socially mediated reinforcement (i.e., indirect) and summarized their research on response acquisition under direct and indirect contingencies of reinforcement this way:

Under both contingencies, completion of identical tasks (opening one of several types of containers) produced access to identical reinforcers. Under the direct contingency, the reinforcer was placed inside the container to be opened; under the indirect contingency, the therapist held the reinforcer and delivered it to the participant upon task completion. One participant immediately performed the task at 100% accuracy under both contingencies. Three participants showed either more immediate or larger improvements in performance under the direct contingency. The remaining two participants showed improved performance only under the direct reinforcement contingency. Data taken on the occurrence of "irrelevant" behaviors under the indirect contingency (e.g., reaching for the reinforcer instead of performing the task) provided some evidence that these behaviors may have interfered with task performance and that their occurrence was a function of differential stimulus control. (p. 1)

Whenever possible, practitioners should use a direct reinforcement contingency, especially with learners with limited behavioral repertoires.

Combine Response Prompts and Reinforcement

Response prompts are supplementary antecedent stimuli used to occasion a correct response in the presence of an SD that will eventually control the behavior. Applied behavior analysts give response prompts before or during the performance of a target behavior. The three major forms of response prompts are verbal instructions, modeling, and physical guidance.

Concerning verbal instructions, describing the contingency may function as a motivating operation for learners with verbal skills, making it more likely that the learner will quickly contact the reinforcer. For example, Mayfield and Chase (2002) explained their reinforcement contingency to college students who were learning five basic algebra rules.

The reinforcement procedures were described to the participants in the general instructions administered at the beginning of the study. Participants earned money for correct answers on all the tests and were not penalized for incorrect answers. During the session following a test, participants were presented with a record of their total earnings on the test. This was the only feedback provided concerning their performance on the tests. (p. 111)

Bourret, Vollmer, and Rapp (2004) used verbal response prompts during an assessment of the vocal verbal mand repertoires of three participants with autism.

Each vocalization assessment session consisted of 10 trials, each 1 min in duration. A nonspecific prompt [describing the contingency] was delivered (e.g., "If you want this, ask me for it") 10 s after the onset of the trial. A prompt including a model of the complete targeted utterance (e.g., "If you want this, say 'chip'") was delivered 20 s into the trial. The participant was prompted to say just the first phoneme of the targeted response (e.g., "If you want this, say 'ch'") 30 s after the initiation of the trial.

Reinforce Each Occurrence of the Behavior Initially

Provide reinforcement for each occurrence of the target behavior (i.e., continuous reinforcement) to strengthen behavior, primarily during the initial stages of learning a new behavior. After the behavior is established, gradually thin the rate of reinforcement so that some but not all occurrences of the behavior are reinforced (i.e., intermittent reinforcement). For example, a teacher might initially reinforce each correct response to sight words printed on flash cards and then use a ratio schedule to thin reinforcement. To firm the responses after the initial learning, provide reinforcement following two correct responses for a few trials, then following each set of four correct responses, and so on. Hanley and colleagues (2001) gradually moved from a very dense fixed-interval (FI) 1-second schedule of reinforcement (on an FI schedule, the first target response following the end of the interval produces reinforcement) to thinner schedules with the following interval increments: 2 s, 4 s, 8 s, 16 s, 25 s, 35 s, 46 s, and finally FI 58 seconds. On the final schedule, target responses that occurred before the end of the 58-second interval were not reinforced, but the first response after 58 seconds was reinforced.
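The FI reinforcement rule and the thinning steps just described can be sketched in a few lines. This is our own illustration (function and variable names are hypothetical); only the interval values come from the account of Hanley and colleagues (2001) above.

```python
# Hypothetical sketch of an FI schedule and the thinning steps described above.
# On an FI schedule, only the first response after the interval elapses is reinforced.

FI_STEPS = [1, 2, 4, 8, 16, 25, 35, 46, 58]  # seconds, per Hanley et al. (2001)

def reinforced_responses(response_times, interval):
    """Return the response times (in seconds) that produce reinforcement under FI `interval`."""
    reinforced, interval_start = [], 0
    for t in response_times:
        if t - interval_start >= interval:
            reinforced.append(t)   # first response after the interval elapses
            interval_start = t     # the next interval begins at reinforcement
        # responses emitted before the interval elapses go unreinforced
    return reinforced

# Under FI 58 s, a response at 30 s is not reinforced; the response at 60 s is.
print(reinforced_responses([30, 60, 90, 130], FI_STEPS[-1]))
```

Thinning then consists of running the same rule with successively larger values drawn from `FI_STEPS` as performance remains stable.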

Use Contingent Attention and Descriptive Praise

As discussed earlier in this chapter, social attention and praise are powerful reinforcers for many people. However, behavioral improvements following praise often involve something more than, or altogether different from, the direct effects of reinforcement. Michael (2004) discussed the common conceptual mistake of assuming that increased responding following praise and attention is a function of reinforcement.

Consider the common use of descriptive praise, providing some general sign of social approval (a smile plus some comment such as "Good work!") and, in addition, a brief description of the behavior that is responsible for the approval ("I like the way you're . . .!"). When such praise is provided to a normally verbal person over 5 or 6 years of age, it probably functions as a form of instruction or as a rule, much as if the praiser had said, "If you want my continued approval you have to . . ." For example, a factory supervisor walks up to an employee who is cleaning up an oil spill on the factory floor, smiles broadly, and says, "George, I really like the way you're cleaning up that spill before anyone steps in it. That's very considerate of you." Now suppose that George cleans up spills from that time forward—a rather larger change in behavior considering that it was followed by only a single instance of reinforcement. We might suspect that the praise functioned not simply as reinforcement but rather as a form of rule or instruction, and that George, for various reasons, provided himself with similar instruction every time another spill occurred. (pp. 164–165, emphasis in original)

A study by Goetz and Baer (1973) investigating the effects of teacher praise on preschool children's creative play with building blocks used descriptive praise in one condition of the study. "The teacher remarked with interest, enthusiasm, and delight every time that the child placed and/or rearranged the blocks so as to create a form that had not appeared previously in that session's construction(s). . . . 'Oh, that's very nice—that's different!'" (p. 212). The three 4-year-old girls increased the construction of block form diversity during each phase of contingent descriptive praise. Goetz and Baer did not conduct a component analysis to determine how much of the girls' improved performance could be attributed to reinforcement in the form of positive attention ("That's very nice!") or to the feedback they received ("That's different!"), which enabled them to create a rule to follow ("Building different things with the blocks gets the teacher's attention."). The authors surmised that

for some children, either [reinforcing attention and descriptive praise] will be sufficient without the other, but that for other children, the mix of the two will be more effective than either alone. If so, then for applied purposes a package of positive attention and descriptive praise is probably the best technique to apply to children in general. (p. 216, words in brackets added)

We recommend that, in the absence of data showing that attention and praise have produced countertherapeutic effects for a given learner, practitioners incorporate contingent praise and attention into any intervention entailing positive reinforcement.

Gradually Increase the Response-to-Reinforcement Delay

We recommended in the previous guideline that practitioners reinforce each occurrence of a target behavior during the initial stages of learning, and then thin the delivery of reinforcers by switching to an intermittent schedule of reinforcement. Because the consequences that maintain responding in natural environments are often delayed, Stromer, McComas, and Rehfeldt (2000) reminded us that using continuous and intermittent schedules of reinforcement might be just the first steps of programming consequences for everyday situations. "Establishing the initial instances of a behavioral repertoire typically requires the use of programmed consequences that occur immediately after the target response occurs. However, the job of the applied behavior analyst also involves the strategic use of delayed reinforcement. Behaviors that yield delayed reinforcement are highly adaptive in everyday life, but they may be difficult to establish and maintain" (p. 359).12

12Moving from continuous reinforcement to an intermittent schedule of reinforcement is sometimes described as a means of increasing reinforcer delay (e.g., Alberto & Troutman, 2006; Kazdin, 2001). However, an intermittent schedule of reinforcement does not entail "delayed reinforcement" unless specified. Although only some occurrences of the target behavior are reinforced on an intermittent schedule of reinforcement (see chapter entitled "Schedules of Reinforcement"), reinforcement is delivered immediately following the response that meets the contingency. For example, on a fixed ratio 10 schedule of reinforcement, every tenth response produces immediate reinforcement. Delay-to-reinforcement or reinforcement delay describes the time lapse between the response and delivery of the reinforcer after the contingency has been met (e.g., the reinforcer was delivered 45 seconds after every tenth response).

Examples of the tactics that applied behavior analysts have used to help people learn to respond effectively for delayed consequences include: (a) a delay-to-reinforcement interval that begins with a short delay and is then gradually lengthened (Dixon, Rehfeldt, & Randich, 2003; Schweitzer & Sulzer-Azaroff, 1988); (b) a gradual increase in work requirements during the delay (Dixon & Holcomb, 2000); (c) an activity during the delay to "bridge the gap" between the behavior and reinforcer (Mischel, Ebbesen, & Zeiss, 1972); and, importantly, (d) verbal instruction in the form of an assurance that the reinforcer will be available following a delay (e.g., "The calculator will show the amount of money to be placed in a savings account for you. You will be given all the nickels in your savings account on [day]") (Neef, Mace, & Shade, 1993, p. 39).
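Tactic (a), gradually lengthening the delay, might be programmed as sketched below. The starting value, growth factor, and ceiling are illustrative assumptions of ours, not values taken from the cited studies.

```python
# Hypothetical sketch of tactic (a): a delay-to-reinforcement interval that
# starts short and grows across successful trials, capped at a ceiling.

def delay_progression(start=1.0, factor=2.0, ceiling=30.0, trials=8):
    """Return the programmed delay (seconds) for each of `trials` successful trials."""
    schedule, delay = [], start
    for _ in range(trials):
        schedule.append(delay)
        delay = min(delay * factor, ceiling)  # lengthen gradually, never past the ceiling
    return schedule

print(delay_progression())  # 1 s doubles each trial until the 30 s ceiling holds
```

In practice the step up would be conditional on continued responding at the current delay; the cap simply stands in for the terminal, naturally occurring delay being trained toward.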

Gradually Shift from Contrived to Naturally Occurring Reinforcers

We end this chapter with an extract from Murray Sidman's (2000) insightful and thought-provoking account of what he learned in the "early days" of applying behavioral principles to human behavior. In describing a project from 1965 to 1975 that emphasized the use of positive reinforcement with boys between the ages of 6 and 20 years who were diagnosed with mental retardation and living in a state institution, Sidman recollected how introducing tokens as generalized conditioned reinforcers eventually led to praise from the project staff, and later to learning itself, becoming powerful reinforcers for the boys.

We began with tokens, which had the advantage of being visible and easily handled. Later, after the boys had learned to save tokens and to understand numbers, we were able to introduce points. For some, points led eventually to money. As the boys saw how pleased we were when they earned the tokens and points that brought them other reinforcers, our pleasure also became important to them, and we became able to use praise as a reinforcer. As they learned more and more, many of the boys found that what they learned permitted them to deal more effectively with their gradually enlarging world. For them, learning itself became reinforcing. (p. 19)

Success in manipulating the environment may be the ultimate naturally occurring reinforcer. As Skinner (1989) pointed out, this powerful reinforcer "does not need to be contrived for instructional purposes; it is unrelated to any particular kind of behavior and hence always available. We call it success" (p. 91).

Summary

Definition and Nature of Positive Reinforcement

1. Positive reinforcement is a functional relation defined by a two-term contingency: A response is followed immediately by the presentation of a stimulus, and, as a result, similar responses occur more frequently in the future.

2. The stimulus change responsible for the increase in responding is called a reinforcer.

3. The importance of the immediacy of reinforcement must be emphasized; a response-to-reinforcement delay of just 1 second can diminish intended effects because the behavior temporally closest to the presentation of the reinforcer will be strengthened by its presentation.

4. The effects of long-delayed consequences on human behavior should not be attributed to the direct effect of reinforcement.

5. A misconception held by some is that reinforcement is a circular concept. Circular reasoning is a form of faulty logic in which cause and effect are confused and not independent of each other. Reinforcement is not a circular concept because the two components of the response–consequence relation can be separated and the consequence manipulated to determine whether it increases the frequency of the behavior it follows.

6. In addition to increasing the future frequency of the behavior it follows, reinforcement changes the function of antecedent stimuli. An antecedent stimulus that evokes behavior because it has been correlated with the availability of reinforcement is called a discriminative stimulus (SD).

7. A discriminated operant is defined by a three-term contingency of SD → R → SR+.

8. The momentary effectiveness of any stimulus change as reinforcement depends on an existing level of motivation with respect to that stimulus change. An establishing operation (EO) (e.g., deprivation) increases the current effectiveness of a reinforcer; an abolishing operation (AO) (e.g., satiation) decreases the current effectiveness of a reinforcer.

9. A complete description of reinforcement of a discriminated operant entails a four-term contingency: EO → SD → R → SR+.

10. Automaticity of reinforcement refers to the fact that a person does not have to understand or be aware of the relation between his behavior and a reinforcing consequence for reinforcement to occur.

11. Reinforcement strengthens any behavior that immediately precedes it; no logical or adaptive connection between behavior and the reinforcing consequence is necessary.

12. The development of superstitious behaviors that often appear when reinforcement is presented on a fixed-time schedule irrespective of the subject's behavior demonstrates the arbitrary nature of the behaviors selected by reinforcement.

13. Automatic reinforcement occurs when behaviors produce their own reinforcement independent of the mediation of others.

Classifying Reinforcers

14. Unconditioned reinforcers are stimuli that function as reinforcement without requiring a learning history. These stimuli are the product of phylogenic development, meaning that all members of a species are susceptible to the same properties of stimuli.

15. Conditioned reinforcers are previously neutral stimuli that function as reinforcers as a result of prior pairing with one or more other reinforcers.

16. A generalized conditioned reinforcer is a conditioned reinforcer that, as a result of having been paired with many unconditioned and conditioned reinforcers, does not depend on a current EO for any particular form of reinforcement for its effectiveness.

17. When reinforcers are described by their physical properties, they are typically classified as edible, sensory, tangible, activity, or social reinforcers.

18. The Premack principle states that making the opportunity to engage in a high-probability behavior contingent on the occurrence of low-frequency behavior will function as reinforcement for the low-frequency behavior.

19. The response-deprivation hypothesis is a model for predicting whether contingent access to one behavior will function as reinforcement for engaging in another behavior, based on whether access to the contingent behavior represents a restriction of the activity compared to the baseline level of engagement.

Identifying Potential Reinforcers

20. Stimulus preference assessment refers to a variety of procedures used to determine (a) the stimuli that a person prefers, (b) the relative preference values (high versus low) of those stimuli, and (c) the conditions under which those preference values remain in effect.

21. Stimulus preference assessments can be performed by asking the target person and/or significant others what the target person prefers, conducting free operant observations, and conducting trial-based assessments (i.e., single-, paired-, or multiple-stimulus presentations).

22. Preferred stimuli do not always function as reinforcers, and stimulus preferences often change over time.

23. Reinforcer assessment refers to a variety of direct, data-based methods for determining the relative effects of a given stimulus as reinforcement under different and changing conditions, or the comparative effectiveness of multiple stimuli as reinforcers for a given behavior under specific conditions. Reinforcer assessment is often conducted with concurrent schedules of reinforcement, multiple schedules of reinforcement, and progressive reinforcement schedules.

Control Procedures for Positive Reinforcement

24. Positive reinforcement control procedures are used to manipulate the presentation of a potential reinforcer and observe any effects on the future frequency of behavior. Positive reinforcement control procedures require a believable demonstration that the contingent presentation following the occurrence of a target response functions as positive reinforcement. Control is demonstrated by comparing rates of responding in the absence and presence of a contingency, and then showing that with the absence and presence of the contingency the behavior can be turned on and off, or up and down.

25. In addition to a reversal design using the withdrawal of the reinforcement contingency (i.e., extinction) as the control condition, noncontingent reinforcement (NCR), differential reinforcement of other behavior (DRO), and differential reinforcement of alternative behavior (DRA) can be used as control conditions for reinforcement.

Using Reinforcement Effectively

26. Guidelines for increasing the effectiveness of positive reinforcement interventions include:

• Set an easily achieved initial criterion for reinforcement

• Use high quality reinforcers of sufficient magnitude

• Use varied reinforcers

• Use a direct rather than indirect reinforcement contingency whenever possible

• Combine response prompts and reinforcement

• Reinforce each occurrence of the behavior initially, then gradually thin the reinforcement schedule

• Use contingent praise and attention

• Gradually increase the response-to-reinforcement delay

• Gradually shift from contrived to naturally occurring reinforcers


Negative Reinforcement

Key Terms

avoidance contingency
conditioned negative reinforcer
discriminated avoidance
escape contingency
free-operant avoidance
negative reinforcement
unconditioned negative reinforcer

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List©, Third Edition

Content Area 3: Principles, Processes, and Concepts

3-3 Define and provide examples of (positive and) negative reinforcement.

Content Area 9: Behavior Change Procedures

9-2 Use (positive and) negative reinforcement:

(a) Identify and use (negative) reinforcers.

(b) Use appropriate parameters and schedules of (negative) reinforcement.

(d) State and plan for the possible unwanted effects of the use of (negative) reinforcement.

© 2006 The Behavior Analyst Certification Board, Inc.® (BACB®), all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

This chapter was written by Brian A. Iwata and Richard G. Smith.

From Chapter 12 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


The use of positive reinforcement in educational and therapeutic programs is so commonplace that the terms positive reinforcement and reinforcement have become almost synonymous; in fact, the usual lay term for reinforcement is simply reward. Positive reinforcement involves an increase in responding as a function of stimulus presentation. In a complementary way, responding can lead to the termination of a stimulus, as in turning off an alarm clock in the morning, which results in the cessation of noise. When responding increases as a result of stimulus termination, learning has occurred through negative reinforcement. This chapter expands the discussion of operant contingencies to include negative reinforcement. We define negative reinforcement, distinguish between escape and avoidance contingencies, describe events that may serve as the basis for negative reinforcement, illustrate ways negative reinforcement may be used to strengthen behavior, and discuss ethical issues that arise when using negative reinforcement. Readers interested in more in-depth discussions of basic and applied research on negative reinforcement are referred to reviews by Hineline (1977) and Iwata (1987).

Definition of Negative Reinforcement

A negative reinforcement contingency is one in which the occurrence of a response produces the removal, termination, reduction, or postponement of a stimulus, which leads to an increase in the future occurrence of that response. A full description of negative reinforcement requires specification of its four-term contingency (see Figure 1): (a) The establishing operation (EO) for behavior maintained by negative reinforcement is an antecedent event in whose presence escape (termination of the event) is reinforcing, (b) the discriminative stimulus (SD) is another antecedent event in whose presence a response is more likely to be reinforced, (c) the response is the act that produces reinforcement, and (d) the reinforcer is the termination of the event that served as the EO.
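The four terms just described can be sketched as a small data record. This is an illustrative sketch only; the `FourTermContingency` class and its field names are hypothetical and not part of the text, and the example values come from the construction-noise example shown in Figure 1.

```python
from dataclasses import dataclass

@dataclass
class FourTermContingency:
    """Illustrative record of the four terms of negative reinforcement
    (hypothetical names, for exposition only)."""
    eo: str        # establishing operation: event whose termination is reinforcing
    sd: str        # discriminative stimulus: event signaling reinforcement is likely
    response: str  # the act that produces reinforcement
    sr_minus: str  # the reinforcer: termination of the event that served as the EO

# The example from Figure 1, encoded with this sketch:
window = FourTermContingency(
    eo="Loud noise from construction outside",
    sd="Roommate nearby",
    response="Ask roommate to close window",
    sr_minus="Noise subsides",
)
print(f"EO: {window.eo} | SD: {window.sd} | R: {window.response} | SR-: {window.sr_minus}")
```

The effect of the contingency (an increased frequency of similar responses under similar conditions in the future) is a property of behavior over time, so it is left out of the record itself.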

Positive versus Negative Reinforcement

Positive and negative reinforcement have a similar effect on behavior in that both produce an increase in responding. They differ, however, with respect to the type of stimulus change that follows behavior, as illustrated in Figure 2. In both examples, a stimulus change (consequence) strengthens the behavior that preceded it: Asking the sibling to make a sandwich is strengthened by obtaining food; carrying rain protection is strengthened by blocking the rain. However, behavior maintained by positive reinforcement produces a stimulus that was absent prior to responding, whereas behavior maintained by negative reinforcement terminates a stimulus that was present prior to responding: Food was unavailable prior to asking for it but available after (positive reinforcement); rain was landing on one's clothing before raising the newspaper but not after (negative reinforcement).

Thus, the key distinction between positive and negative reinforcement is based on the type of stimulus change that occurs following a response. Many stimulus changes have discrete onsets and offsets and involve an "all-or-none" operation. For example, one can readily see the effect of turning on a television (positive reinforcement) or turning off the light in a bedroom (negative reinforcement). Other stimulus changes exist on a continuum from less to more, such as turning up the volume of a stereo to hear it better (positive reinforcement) or turning it down when it is too loud (negative reinforcement). Sometimes, however, it is difficult to determine whether an increase in responding resulted from positive or negative reinforcement because the stimulus change is ambiguous. For example, although a change in temperature can be measured quantitatively so we know that it either increased or decreased following behavior, it is unclear whether turning on a heater when the temperature is 40° F is an example of positive reinforcement because the response "produced heat" or negative reinforcement because the response "removed cold." Another example can be found in a classic study by Osborne (1969) on the use of free time as a reinforcer in the classroom. During baseline, students were observed to get out of their seats frequently during long work periods. During treatment, the students were given 5 minutes of free time if they remained in their seats during 10-minute work periods, and in-seat behavior increased. At first glance, the free-time contingency appears to involve negative reinforcement (termination of the in-seat requirement contingent on appropriate behavior). As Osborne noted, however, activities (games, social interaction, etc.) to which the students had access during free time may have functioned as positive reinforcement.

Figure 1 Four-term contingency illustrating negative reinforcement: EO (loud noise from construction outside) ➞ SD (roommate nearby) ➞ R (ask roommate to close window) ➞ SR− (noise subsides), with an effect on the frequency of similar responses under similar conditions in the future.

Figure 2 Four-term contingencies illustrating similarities and differences between positive and negative reinforcement. Positive reinforcement: EO (child missed lunch at school) ➞ SD (older sibling present when child gets home) ➞ R (ask sibling to make sandwich) ➞ SR+ (sibling makes sandwich, child consumes). Negative reinforcement: EO (rain starts to fall and lands on clothes) ➞ SD (newsstand nearby) ➞ R (buy newspaper and use as cover) ➞ SR− (effects of rain on clothes diminished). Both contingencies affect the frequency of similar responses under similar conditions in the future.

Given the ambiguous nature of some stimulus changes, Michael (1975) suggested that the distinction between positive and negative reinforcement, based on whether a stimulus is presented or removed, may be unnecessary. Instead, he emphasized the importance of specifying the type of environmental change produced by a response in terms of key stimulus features that comprised both the "prechange" and "postchange" conditions. This practice, he proposed, would eliminate the necessity of describing the transition between prechange and postchange conditions as one involving the presentation or removal of stimuli and would facilitate a more complete understanding of functional relations between environment and behavior.

Little has changed since the publication of Michael's (1975) article; the distinction between positive and negative reinforcement continues to be emphasized in every text on learning principles, and citations to the term negative reinforcement have even increased in applied research (Iwata, 2006). In an attempt to renew the discussion, Baron and Galizio (2005) reiterated Michael's position and included some additional points of emphasis.

1The term aversive is not meant to describe an inherent characteristic of a stimulus but, rather, a stimulus whose presentation functions as punishment or whose removal functions as negative reinforcement.

This terminological issue is a complex one that can be considered from several perspectives (conceptual, procedural, and historical) and is unresolved at the current time. Readers interested in the topic are referred to a series of reactions to Baron and Galizio (Chase, 2006; Iwata, 2006; Lattal & Lattal, 2006; Marr, 2006; Michael, 2006; Sidman, 2006) and their rejoinder (Baron & Galizio, 2006).

Negative Reinforcement versus Punishment

Negative reinforcement is sometimes confused with punishment for two reasons. First, because the lay term for positive reinforcement is reward, people mistakenly consider negative reinforcement as the technical term for the opposite of reinforcement (punishment). The terms positive and negative, however, do not refer to "good" and "bad" but to the type of stimulus change (presentation versus termination) that follows behavior (Catania, 1998). A second source of confusion stems from the fact that the stimuli involved in both negative reinforcement and punishment are considered "aversive" to most people.1 Although it is true that the same stimulus may serve as a negative reinforcer in one context and as a punisher in a different context, both the nature of the stimulus change and its effect on behavior differ. In a negative reinforcement contingency, a stimulus that was present is terminated by a response, which leads to an increase in responding; in a punishment contingency, a stimulus that was absent is presented following a response, which leads to a decrease in responding. Thus, a response that terminates loud noise would increase as a function of negative reinforcement, but one that produces loud noise would decrease as a function of punishment.
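The distinctions drawn in this section and the previous one reduce to a two-by-two classification: whether the consequence presents or removes a stimulus, and whether future responding increases or decreases. A minimal sketch of that table; the function name and its string labels are hypothetical conveniences, not terminology from the text:

```python
def classify_contingency(stimulus_change: str, response_trend: str) -> str:
    """Name the operant contingency from the type of stimulus change
    ('presented' or 'removed') and the effect on future responding
    ('increase' or 'decrease')."""
    table = {
        ("presented", "increase"): "positive reinforcement",
        ("removed", "increase"): "negative reinforcement",
        ("presented", "decrease"): "punishment (stimulus presented)",
        ("removed", "decrease"): "punishment (stimulus removed)",
    }
    return table[(stimulus_change, response_trend)]

# A response that terminates loud noise and increases in frequency:
print(classify_contingency("removed", "increase"))    # negative reinforcement
# A response that produces loud noise and decreases in frequency:
print(classify_contingency("presented", "decrease"))  # punishment (stimulus presented)
```

Note that the classification requires the observed effect on responding, not just the stimulus change: the same stimulus change can participate in reinforcement in one context and punishment in another.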

Escape and Avoidance Contingencies

In its simplest form, negative reinforcement involves an escape contingency, in which a response terminates (produces escape from) an ongoing stimulus. An early study by Keller (1941) illustrates typical laboratory research on escape. When a rat was placed in an experimental chamber and a bright light was turned on, the rat quickly learned to press a lever, which turned off the light. Osborne's (1969) study on free-time contingencies may also serve as an example of escape learning in an applied context. To the extent that the important feature of the contingency was the termination of work requirements, in-seat behavior during 10-minute work periods produced 5 minutes of escape.

Although situations involving escape are commonly encountered in everyday life (e.g., we turn off loud noises, shield our eyes from the sun, flee from an aggressor), most behavior maintained by negative reinforcement is characterized by an avoidance contingency, in which a response prevents or postpones the presentation of a stimulus. Returning to the previous laboratory example, an experimenter can add to the escape contingency an arrangement in which another stimulus such as a tone precedes the presentation of the bright light, and a response in the presence of the tone eliminates the presentation of the light or postpones it until the tone is next presented. This type of arrangement has been called discriminated avoidance, in which responding in the presence of a signal prevents the onset of a stimulus from which escape is a reinforcer. Because responses in the presence of the tone are reinforced, whereas those in the absence of the tone have no effect, the tone is a discriminative stimulus (SD) in whose presence there is an increased likelihood of reinforcement for responding.

Avoidance behavior also can be acquired in the absence of a signal. Suppose the experimenter arranges a schedule in which the bright light turns on for 5 seconds every 30 seconds, and a response (or some number of responses) at any time during the interval resets the clock to zero. This type of arrangement is known as free-operant avoidance because the avoidance behavior is "free to occur" at any time and will delay the presentation of the bright light.
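The mechanics of this schedule can be illustrated with a short simulation. This is a hypothetical sketch (the `light_onsets` function is illustrative, and the 5-second duration of the light is ignored); it mirrors the parameters above: the light is due 30 seconds after the last clock reset, and any response resets the clock to zero.

```python
def light_onsets(response_times, session_end, interval=30):
    """Return the times at which the light turns on under a free-operant
    avoidance schedule: the light is presented whenever a full interval
    elapses with no avoidance response; each response resets the clock."""
    onsets = []
    last_reset = 0
    for r in sorted(response_times):
        # Any interval(s) that fully elapsed before this response produce onsets.
        while last_reset + interval <= r:
            onsets.append(last_reset + interval)
            last_reset += interval
        last_reset = r  # the response resets the clock to zero
    # Intervals that elapse after the final response, up to session end.
    while last_reset + interval <= session_end:
        onsets.append(last_reset + interval)
        last_reset += interval
    return onsets

# No responding: the light appears every 30 seconds.
print(light_onsets([], 65))              # [30, 60]
# Responses at 20 s and 45 s postpone the light; a long pause lets it through.
print(light_onsets([20, 45, 100], 130))  # [75, 130]
```

With no responses the light appears every 30 seconds; responding at least once per interval postpones it indefinitely, which is why steady rates of avoidance responding can be maintained even though the aversive stimulus is rarely contacted.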

Each of the three types of contingencies described earlier was illustrated in an ingenious study by Azrin, Rubin, O'Brien, Ayllon, and Roll (1968) on postural slouching (see Figure 3). Participants wore an apparatus that closed an electrical circuit when slouching occurred. Closure of the switch produced an audible click, which was followed 3 seconds later by a 55-dB tone. Postural correction in the presence of the tone turned off the tone (escape), and the tone was prevented if correction occurred during the 3 seconds following the click (discriminated avoidance). Furthermore, maintenance of correct posture prevented the click (free-operant avoidance). A hypothetical example involving homework management also illustrates these contingencies. A parent who sends a child to his or her room immediately following school and allows the child to leave the room only after completing homework has arranged an escape contingency: Homework completion produces escape from the bedroom. A parent who first delivers a warning (e.g., "If you don't start your homework in 10 minutes, you'll have to do it in your bedroom") has arranged a discriminated avoidance contingency: Starting homework following the warning avoids having to do it in the bedroom. Finally, the parent who waits until later in the evening to impose the in-room requirement has arranged a free-operant avoidance contingency: Homework completion at any time after school avoids having to do it in the bedroom later.

Figure 3 Three types of negative reinforcement contingencies used by Azrin and colleagues (1968) to maintain correct posture.

Escape
Slouching (incorrect posture) ➞ Audible click
Posture not corrected within 3 seconds ➞ 55-dB tone
Posture corrected ➞ Tone turns off

Discriminated Avoidance
Slouching (incorrect posture) ➞ Audible click
Correct posture within 3 seconds of click ➞ Avoid tone

Free-Operant Avoidance
Correct posture maintained ➞ Avoid click and tone

Characteristics of Negative Reinforcement

Responses Acquired and Maintained by Negative Reinforcement

It is a well-known fact that aversive stimulation produces a variety of responses (Hutchinson, 1977). Some of these may be respondent behaviors (as in reflexive actions to intense stimuli), but the focus in this chapter is on operant behaviors. Recall that the presentation of an aversive stimulus serves as an EO for escape and occasions behavior that has produced escape from similar stimulation in the past. Any response that successfully terminates the stimulation will be strengthened; as a result, a wide range of behaviors may be acquired and maintained by negative reinforcement. All of these behaviors are adaptive because they enable one to interact effectively with the environment; some behaviors, however, are more socially appropriate than others. As will be seen later in the chapter, negative reinforcement may play an important role in the development of academic skills, but it also can account for the development of disruptive or dangerous behavior.

Events That Serve as Negative Reinforcers

In discussing the types of stimuli that can strengthen behavior through negative reinforcement, a problem arises when attempting to use the same terminology as that applied to the description of positive reinforcers. It is quite common to refer to positive reinforcers by listing things such as food, money, praise, and so on. It is, however, the presentation of the stimulus that strengthens behavior: Food presentation, and not food per se, is a positive reinforcer. Nevertheless, we often simply list the stimulus and assume that "presentation" is understood. In a similar way, to say that negative reinforcers include shock, noise, parental nagging, and so on, is an incomplete description. It is important to remember that a stimulus described as a negative reinforcer refers to its removal because, as noted previously, the same stimulus serves as an EO when presented prior to behavior and as punishment when presented following behavior.

Learning History

As is the case with positive reinforcers, negative reinforcers influence behavior because (a) we have the inherited capacity to respond to them or (b) their effects have been established through a history of learning. Stimuli whose removal strengthens behavior in the absence of prior learning are unconditioned negative reinforcers. These stimuli are typically noxious events such as shock, loud noise, intense light, extremely high or low temperature, or strong pressure against the body. In fact, any source of pain or discomfort (e.g., a headache) will occasion behavior, and any response that successfully eliminates the discomfort will be reinforced. Other stimuli are conditioned negative reinforcers, which are previously neutral events that acquire their effects through pairing with an existing (unconditioned or conditioned) negative reinforcer. A bicyclist, for example, usually heads for home when seeing a heavily overcast sky because dark clouds have been highly correlated with bad weather. Various forms of social coercion, such as parental nagging, are perhaps the most commonly encountered conditioned negative reinforcers. For example, reminding a child to clean his bedroom may have little effect on the child's behavior unless failure to respond is followed by another consequence such as having to stay in the room until it is clean. To the extent that the nagging is reliably "backed up" by sending the child to his room, the child will eventually respond simply to stop or prevent the nagging. It is interesting to note that, in the case of negative reinforcement, neutral events (dark sky, nagging) function as both (a) discriminative stimuli because responding in their presence constitutes avoidance of another consequence and (b) conditioned negative reinforcers because, due to their pairing with another consequence, they become stimuli to avoid or escape.

The Source of Negative Reinforcement

Another way to classify negative reinforcers is based on how they are removed (i.e., their source). A distinction was made between socially mediated reinforcement, in which the consequence results from the action of another person, and automatic reinforcement, in which the consequence is produced directly by a response independent of the actions of another. This distinction also applies to negative reinforcement. Returning to the example in Figure 1, we can see that termination of the construction noise was an instance of social negative reinforcement (the roommate's action closed the window). The person "being bothered" by the noise, however, could simply have walked across the room and closed the window (automatic negative reinforcement). This example illustrates the fact that many reinforcers can be removed or terminated either way: One can consult a physician when experiencing a headache (social) or take a pain medication (automatic), ask the teacher for help with a difficult problem (social) or persist until it is solved (automatic), and so on.

Consideration of the source of negative reinforcement may facilitate the design of behavior change interventions by determining the focus of intervention. For example, when faced with a perplexing work task, an employee may finish it incorrectly just to get it out of the way (automatic reinforcement) or ask for help (social reinforcement). Aside from reassigning the employee, the quickest solution would be to reinforce the employee's seeking help by offering assistance. Ultimately, however, the supervisor would want to teach the employee the necessary skills to complete the work tasks independently.

Identifying the Context of Negative Reinforcement

There are several ways to identify positive reinforcers; the difference with negative reinforcers is that equal emphasis must be placed on the antecedent event (EO) as well as on the reinforcing consequence because, once the behavior occurs, the negative reinforcer may be gone and cannot be observed. The identification of EOs may be difficult with people who have limited verbal ability and cannot tell someone they are experiencing aversive stimulation. These people may engage in other behaviors, such as tantrums, attempts to leave the situation, destructive behavior, aggression, or even self-injury. Weeks and Gaylord-Ross (1981), for example, observed students with severe disabilities when no task, an easy task, and a difficult task were presented. Little or no problem behavior occurred during the no-task condition, and problem behavior occurred somewhat more often in the difficult-task condition than in the easy-task condition. These results suggested that the students' problem behavior was maintained by escape from task demands and that difficult tasks were more "aversive" than were easy tasks. However, because the consequences that followed problem behavior were unknown, it is possible that the behaviors were maintained by some other consequence, such as attention, which would be a positive reinforcer.

Iwata, Dorsey, Slifer, Bauman, and Richman (1994) developed a method for identifying the types of contingencies that maintain problem behavior by observing people under a series of conditions that differed with respect to both antecedent and consequent events. One condition involved the presentation of task demands (EO) and the removal of demands (escape) if problem behavior occurred; higher rates of problem behavior under this condition relative to others indicated that problem behavior was maintained by negative reinforcement.

Smith, Iwata, Goh, and Shore (1995) extended the findings of Weeks and Gaylord-Ross (1981) and Iwata et al. (1994) by identifying some characteristics of task demands that make them aversive. After first determining that their participants' (people with severe disabilities) problem behavior was maintained by escape from task demands, Smith and colleagues examined several dimensions along which tasks might differ: task novelty, duration of the work session, and rate of demand presentation. Results of one of these analyses are shown in Figure 4, which depicts frequency distributions and cumulative records of problem behavior from the beginning to the end of sessions. These data illustrate the importance of individualized assessments in identifying the basis for negative reinforcement because two participants (Evelyn and Landon) showed increasing rates of problem behavior as work sessions progressed, whereas two other participants (Milt and Stan) showed the opposite trend.

Factors That Influence the Effectiveness of Negative Reinforcement

The factors that determine whether a negative reinforcement contingency will be effective in changing behavior are similar to those that influence positive reinforcement and are related to (a) the strength of the contingency and (b) the presence of competing contingencies. In general, negative reinforcement for a given response will be more effective under the following conditions:

1. The stimulus change immediately follows the occurrence of the target response.

2. The magnitude of reinforcement is large, referring to the difference in stimulation present before and after the response occurs.

3. Occurrence of the target response consistently produces escape from or postponement of the EO.

4. Reinforcement is unavailable for competing (nontarget) responses.

Figure 4 Frequency distributions (left column) and cumulative records (right column), summed over sessions, of self-injurious behavior (SIB) by five adults with developmental disabilities (Evelyn, Larry, Landon, Milt, and Stan) as work sessions progressed. From "Analysis of Establishing Operations for Self-Injury Maintained by Escape" by R. G. Smith, B. A. Iwata, H. Goh, and B. A. Shore, 1995, Journal of Applied Behavior Analysis, 28, p. 526. Copyright 1995 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Applications of Negative Reinforcement

Negative reinforcement is a fundamental principle of learning that has been studied extensively in basic research (Hineline, 1977). Although many examples of escape and avoidance learning can be found in everyday life, research in applied behavior analysis has heavily emphasized the use of positive reinforcement over negative reinforcement, mostly for ethical reasons, which are noted in the final section of this chapter. Still, negative reinforcement has been used as one means of establishing a variety of behaviors. This section illustrates several therapeutic uses of negative reinforcement, as well as the unintended role it may play in strengthening problem behavior.

Acquisition and Maintenance of Appropriate Behavior

Chronic Food Refusal

Pediatric feeding problems are common and are especially prevalent among children with developmental disabilities. These disorders may take a variety of forms, including selective eating, failure to consume solid foods, and complete food refusal, and may be serious enough to require tube feeding or other artificial means to ensure adequate nutritional intake. A large proportion of feeding problems cannot be attributed to a medical cause but, instead, appear to be learned responses most likely maintained by escape or avoidance.

Results from a number of studies have shown that operant learning-based interventions can be highly effective in treating many childhood feeding disorders, and a study by Ahearn, Kerwin, Eicher, Shantz, and Swearingin (1996) illustrated the use of negative reinforcement as a form of intervention. Three children admitted to a hospital who had histories of chronic food refusal were first observed under a baseline (positive reinforcement) condition in which food was presented and access to toys was available contingent on accepting food. Food refusal, however, produced escape in that it terminated a trial. Subsequently, the experimenters compared the effects of two interventions. One treatment condition (nonremoval of the spoon) involved presenting food and keeping the spoon positioned at the child's lower lip until the bite was accepted. The other treatment (physical guidance) involved presenting food and, if the child did not accept, opening the child's mouth so that the food could be delivered. Both treatments involved a negative reinforcement contingency because food acceptance terminated the trial by producing removal of the spoon or avoidance of the physical guidance.

Figure 5 shows the results obtained for the three children. All children exhibited low rates of acceptance during baseline in spite of the availability of positive reinforcement. The two interventions were implemented in a multiple baseline across subjects design and were compared in a multielement design. As can be seen in the second phase of the study, both interventions produced immediate and large increases in food acceptance. These results showed that positive reinforcement for appropriate behavior may have limited effects if other behaviors (refusal) produce negative reinforcement, and that negative reinforcement that maintains problem behavior can be used to establish alternative behavior.

Error-Correction Strategies

Positive reinforcement is a basic motivational component of effective instruction. Teachers commonly deliver praise, privileges, and other forms of reward contingent on correct performance. Another common procedure, but one that has received less attention than positive reinforcement, involves the correction of student errors by repeating a learning trial, having the student practice correct performance, or giving the student additional work. To the extent that correct performance avoids these remedial procedures, improvements may be just as much a function of negative reinforcement as positive reinforcement.

Figure 5 Percentage of trials in which three children with histories of chronic food refusal accepted bites during a baseline condition of positive reinforcement and two treatment conditions, nonremoval of the spoon and physical guidance, both of which involved a negative reinforcement contingency. From "An Alternating Treatments Comparison of Two Intensive Interventions for Food Refusal" by W. H. Ahearn, M. E. Kerwin, P. S. Eicher, J. Shantz, and W. Swearingin, 1996, Journal of Applied Behavior Analysis, 29, p. 326. Copyright 1996 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Worsdell and colleagues (2005) examined the relative contributions of these contingencies during behavioral acquisition. The learning task involved reading words presented on flash cards, and the intervention of interest was the correct repetition of misread words. As noted by the authors, the procedure provided additional practice of correct responses but also represented an avoidance contingency. To separate these effects (in Study 3), the authors implemented two error-correction conditions. In the "relevant" condition, which combined the effects of practice and negative reinforcement, students were prompted to pronounce the misread word correctly five times contingent on an error. In the "irrelevant" condition, students were prompted to repeat an unrelated, nontarget word five times contingent on an error. The irrelevant condition contained only the negative reinforcement contingency because repetition of irrelevant words provided no practice in correctly reading misread words.

Figure 6 shows the results of Study 3, expressed as the cumulative number of words mastered by the 9 participants. All participants' performance improved during both error-correction conditions relative to baseline, when no error-correction procedure was in effect. Performance by 3 participants (Tess, Ariel, and Ernie) was better during relevant error correction. However, Mark's performance was clearly superior during irrelevant error correction, and performance of the remaining 5 participants (Hayley, Becky, Kara, Maisy, and Seth) was similar in both conditions. Thus, all participants showed improvement in reading performance even when they practiced irrelevant words, and most participants (6 of 9) did just as well or better practicing irrelevant rather than relevant words. These results suggest that the success of many remedial (error-correction) procedures may be due at least in part to negative reinforcement.

Figure 6 Cumulative number of words read correctly during baseline (positive reinforcement for correct responses) and during two error-correction conditions: one in which correct responses avoided repeated practice of the misread (relevant) word, and another in which correct responses avoided practice of an unrelated (irrelevant) word. Improvements in performance in the "irrelevant" condition indicated that negative reinforcement plays a role in error-correction procedures. From "Analysis of Response Repetition as an Error-Correction Strategy During Sight-Word Reading" by A. S. Worsdell, B. A. Iwata, C. L. Dozier, A. D. Johnson, P. L. Neidert, and J. L. Thomason, 2005, Journal of Applied Behavior Analysis, 38, p. 524. Copyright 2005 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Acquisition and Maintenance of Problem Behavior

Well-designed instructional procedures maintain a high degree of on-task behavior and lead to improved learning. Occasionally, however, the presentation of task demands may function as an EO for escape behavior due to the difficult or repetitive nature of the work requirements. Initial forms of escape may include lack of attention or mild forms of disruption. To the extent that positive reinforcement for compliance is less than optimal, attempts to escape may persist and may even escalate to more severe forms of problem behavior. In fact, research on the assessment and treatment of problem behaviors has shown that escape from task demands is a common source of negative reinforcement for property destruction, aggression, and even self-injury.

O’Reilly (1995) conducted an assessment of a person’s episodic aggressive behavior. The participant was an adult with severe mental retardation who attended a vocational day program. To determine whether aggressive behavior was maintained by positive versus negative reinforcement, O’Reilly observed the participant under two conditions, which were alternated in a multielement design. In one condition (attention), a therapist ignored the participant (EO) except to deliver reprimands following aggression (positive reinforcement). In the second condition (demand), a therapist presented difficult tasks to the participant (EO) and briefly terminated the trial following aggression (negative reinforcement).

As Figure 7 shows, aggressive behavior occurred more often in the demand condition, indicating that it was maintained by negative reinforcement. Because anecdotal reports suggested that the participant also was more likely to be aggressive following nights when he had not slept well, the data for both conditions were further divided based on whether the participant slept for more or less than 5 hours the previous night. The highest rates of aggression occurred following sleep deprivation. These data are particularly interesting in that they illustrate the influence of two antecedent events on behavior maintained by negative reinforcement: Work tasks functioned as EOs for escape, but even more so in the absence of sleep.

Behavioral Replacement Strategies

Problem behaviors maintained by negative reinforcement can be treated in a number of ways. One strategy is to strengthen a more socially appropriate replacement behavior using negative reinforcement, as illustrated in a study by Durand and Carr (1987). After determining that the “stereotypic” behaviors of four special education students were maintained by escape from task demands, the authors taught the students an alternative response (“Help me”), which was followed by assistance with the task at hand. As can be seen in Figure 8, all students engaged in moderate-to-high levels of stereotypy during baseline. After being taught to use the phrase “Help me,” the students began to exhibit that behavior, and their stereotypy decreased.

Results of the Durand and Carr (1987) study showed that an undesirable behavior could be replaced with a desirable one; however, the replacement behavior might be considered less than ideal because it did not necessarily facilitate better task performance. This was shown

[Figure 7 plots Percentage of Intervals of Aggression across Sessions, alternating Demand and Attention conditions in the home and vocational facility, with data further divided by whether the participant slept more or less than 5 hours.]

Figure 7 Data showing that the effects of work tasks as EOs for escape-maintained aggression by an adult male with severe mental retardation were exacerbated by sleep deprivation.
From "Functional Analysis and Treatment of Escape-Maintained Aggression Correlated with Sleep Deprivation" by M. F. O'Reilly, 1995, Journal of Applied Behavior Analysis, 28, p. 226. Copyright 1995 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.


[Figure 8 shows four panels (Jim, Ken, Bob, and Len) plotting Percentage of Intervals of rocking or hand flapping across baseline and treatment Sessions.]

Figure 8 Percentage of intervals of stereotypic behaviors maintained by escape from task demands of four special education students during baseline and treatment in which the students were taught an alternative response ("Help me") to attain assistance with the task at hand. Shaded bars show students' use of the "Help me" response.
From "Social Influences on 'Self-Stimulatory' Behavior: Analysis and Treatment Application" by V. M. Durand and E. G. Carr, 1987, Journal of Applied Behavior Analysis, 20, p. 128. Copyright 1987 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

in a subsequent study by Marcus and Vollmer (1995). After collecting baseline data on a young girl’s compliant and disruptive behavior, the authors compared the effects of two treatments in a reversal design. In one condition, which was called DNR (differential negative reinforcement) communication, the girl was given a brief break from the task when she said “Finished.” In the second condition, called DNR compliance, the girl was given a break after complying with an instruction (the criterion for a break was later increased to compliance with three instructions). The results of this comparison (see Figure 9) showed that both treatments produced marked reductions in disruptive behavior. However, only the DNR compliance condition produced an increase in task performance.

[Figure 9 plots Disruptions per Minute and Percentage of Instructions with Compliance across baseline (BL), DNR (communication), and DNR (compliance) Sessions.]

Figure 9 Disruptions and compliance by a 5-year-old girl during baseline and two differential negative reinforcement conditions.
From "Effects of Differential Negative Reinforcement on Disruption and Compliance" by B. A. Marcus and T. R. Vollmer, 1995, Journal of Applied Behavior Analysis, 28, p. 230. Copyright 1995 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Ethical Considerations in the Use of Negative Reinforcement

Ethical concerns about the use of positive and negative reinforcement are similar and arise from the severity of the antecedent event (EO) that occasions behavior. Most EOs for behavior maintained by positive reinforcement can be characterized as deprivation states, which, if severe, can constitute undue restriction of rights. By contrast, most EOs for behavior maintained by negative reinforcement can be viewed as aversive events. Extremely noxious events, when presented as antecedent stimuli, cannot be justified as part of a typical behavior change program.

Another concern with negative reinforcement is that the presence of aversive stimuli can itself generate behaviors that compete with the acquisition of desired behavior (Hutchinson, 1977; Myer, 1971). For example, a socially withdrawn child, when placed in the midst of others, may simply scream and run away instead of playing with the peers, and running away is incompatible with social interaction. Finally, undesirable side effects typically associated with punishment might also be observed when implementing behavior change programs based on negative reinforcement.

Summary

Definition of Negative Reinforcement

1. Negative reinforcement involves the termination, reduction, or postponement of a stimulus contingent on the occurrence of a response, which leads to an increase in the future occurrence of that response.

2. A negative reinforcement contingency involves (a) an establishing operation (EO) in whose presence escape is reinforcing, (b) a discriminative stimulus (SD) in whose presence a response is more likely to be reinforced, (c) the response that produces reinforcement, and (d) termination of the event that served as the EO.

3. Positive and negative reinforcement are similar in that both lead to an increase in responding; they differ in that positive reinforcement involves contingent stimulus presentation, whereas negative reinforcement involves contingent stimulus termination.

4. Negative reinforcement and punishment differ in that (a) negative reinforcement involves contingent stimulus termination, whereas punishment involves contingent stimulation, and (b) negative reinforcement leads to an increase in responding, whereas punishment leads to a decrease in responding.

Escape and Avoidance Contingencies

5. An escape contingency is one in which responding terminates an ongoing stimulus. An avoidance contingency is one in which responding delays or prevents the presentation of a stimulus.

6. In discriminated avoidance, responding in the presence of a signal prevents stimulus presentation; in free-operant avoidance, responding at any time prevents stimulus presentation.

Characteristics of Negative Reinforcement

7. Any response that successfully terminates aversive stimulation will be strengthened; as a result, a wide range of behaviors may be acquired and maintained by negative reinforcement.

8. Negative reinforcement may play an important role in the development of academic skills, but it also can account for the development of disruptive or dangerous behavior.

9. Unconditioned negative reinforcers are stimuli whose removal strengthens behavior in the absence of prior learning. Conditioned negative reinforcers are stimuli whose removal strengthens behavior as a result of previous pairing with other negative reinforcers.

10. Social negative reinforcement involves stimulus termination through the action of another person. Automatic negative reinforcement involves stimulus termination as a direct result of a response.

11. Identification of negative reinforcers requires the specification of the stimulus conditions in effect prior to and following responding.

12. In general, negative reinforcement for a given response will be more effective when (a) the stimulus change immediately follows the occurrence of the target response, (b) the magnitude of reinforcement is large, (c) the target response consistently produces escape from or postponement of the EO, and (d) reinforcement is unavailable for competing responses.

Applications of Negative Reinforcement

13. Although negative reinforcement is a fundamental principle of learning that has been studied extensively in basic research, applied behavior analysis has heavily emphasized the use of positive reinforcement over negative reinforcement.

14. Applied researchers have explored the therapeutic uses of negative reinforcement in treating pediatric feeding problems.

15. Improvements in student performance as a result of error correction that involve repeating a learning trial, having the student practice correct performance, or giving the student additional work may be a function of negative reinforcement.

16. The presentation of task demands during instruction may function as an EO for escape; initial forms of escape may include lack of attention or mild forms of disruption. To the extent that positive reinforcement for compliance is less than optimal, escape behaviors may persist and may even escalate.

17. One strategy for treating problem behaviors maintained by negative reinforcement is to strengthen a more socially appropriate replacement behavior through negative reinforcement.

Ethical Considerations in the Use of Negative Reinforcement

18. Ethical concerns about the use of positive and negative reinforcement are similar and arise from the severity of the antecedent event (EO) that occasions behavior. Most EOs for behavior maintained by negative reinforcement can be viewed as aversive events. Extremely noxious events, when presented as antecedent stimuli, cannot be justified as part of a typical behavior change program.

19. Another concern with negative reinforcement is that the presence of aversive stimuli can itself generate behaviors that compete with the acquisition of desired behavior.

Schedules of Reinforcement

Key Terms

adjunctive behaviors
alternative schedule (alt)
chained schedule of reinforcement (chain)
compound schedule of reinforcement
concurrent schedule (conc)
conjunctive schedule (conj)
continuous reinforcement (CRF)
differential reinforcement of diminishing rates (DRD)
differential reinforcement of high rates (DRH)
differential reinforcement of low rates (DRL)
fixed interval (FI)
fixed ratio (FR)
intermittent schedule of reinforcement (INT)
limited hold
matching law
mixed schedule (mix)
multiple schedule (mult)
postreinforcement pause
progressive schedule of reinforcement
ratio strain
schedule of reinforcement
schedule thinning
tandem schedule (tand)
variable interval (VI)
variable ratio (VR)

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List©, Third Edition

Content Area 9: Behavior Change Procedures

9-2 (b) Use appropriate parameters and schedules of reinforcement.

9-6 Use differential reinforcement.

9-24 Use the matching law and recognize factors influencing choice.

© 2006 The Behavior Analyst Certification Board, Inc.,® (BACB®) all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

From Chapter 13 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


A schedule of reinforcement is a rule that describes a contingency of reinforcement, those environmental arrangements that determine conditions by which behaviors will produce reinforcement. Continuous reinforcement and extinction provide the boundaries for all other schedules of reinforcement. A schedule of continuous reinforcement (CRF) provides reinforcement for each occurrence of behavior. For example, a teacher using a continuous schedule of reinforcement would praise a student each time she identified a sight word correctly. Examples of behaviors that tend to produce continuous reinforcement include turning on a water faucet (water comes out), answering a telephone after it rings (a voice is heard), and putting money into a vending machine (a product is obtained). During extinction (EXT), no occurrence of the behavior produces reinforcement.

Intermittent Reinforcement

Between continuous reinforcement and extinction many intermittent schedules of reinforcement (INT) are possible in which some, but not all, occurrences of the behavior are reinforced. Only selected occurrences of behavior produce reinforcement with an intermittent schedule of reinforcement. CRF is used to strengthen behavior, primarily during the initial stages of learning new behaviors. Applied behavior analysts use intermittent reinforcement to maintain established behaviors.

Maintenance of Behavior

Maintenance of behavior refers to a lasting change in behavior. Regardless of the type of behavior change technique employed or the degree of success during treatment, applied behavior analysts must be concerned with sustaining gains after terminating a treatment program. For example, Mary is in the seventh grade and taking French, her first foreign language class. After a few weeks, the teacher informs Mary’s parents that she is failing the course. The teacher believes that Mary’s problems in French have resulted from lack of daily language practice and study. The parents and teacher decide that Mary will record a tally on a chart kept on the family bulletin board each evening that she studies French for 30 minutes. Mary’s parents praise her practice and study accomplishments and offer encouragement. During a follow-up meeting 3 weeks later, the parents and teacher decide that Mary has done so well that the tally procedure can be stopped. Unfortunately, a few days later Mary is once again falling behind in French.

A successful program was developed to establish daily French language practice. However, gains did not maintain after removing the tally procedure. The parents and the teacher did not establish intermittent reinforcement procedures. Let us review what happened and what could have happened. Continuous reinforcement was used correctly to develop daily study behavior. However, after the study behavior was established and the tally procedure removed, the parents should have continued to praise and encourage daily practice and gradually offer fewer encouragements. The parents could have praised Mary’s accomplishments after every second day of daily practice, then every fourth day, then once per week, and so on. With the intermittent praise, Mary might have continued daily practice after removing the tally procedure.

Progression to Naturally Occurring Reinforcement

A major goal of most behavior change programs is the development of naturally occurring activities, stimuli, or events to function as reinforcement. It is more desirable for people to read because they like to read, rather than to obtain contrived reinforcement from a teacher or parent; to engage in athletics for the enjoyment of the activity, rather than for a grade or because of a physician’s directive; to help around the house for the personal satisfaction it brings, rather than to earn an allowance. Intermittent reinforcement is usually necessary for the progression to naturally occurring reinforcement. Even though some individuals spend hours each day practicing a musical instrument because they enjoy the activity, chances are good that this persistent behavior developed gradually. At first the beginning music student needs a great deal of reinforcement to continue the activity: “You really practiced well today,” “I can’t believe how well you played,” “Your mother told me you received a first place in the contest—that’s super!” These social consequences are paired with other consequences from teachers, family members, and peers. As the student develops more proficiency in music, the outside consequences occur less frequently, intermittently. Eventually, the student spends long periods making music without receiving reinforcement from others because making music has itself become a reinforcer for that activity.

Some might explain the transition of our music student from an “externally reinforced person” to a “self-reinforced musician” as the development of intrinsic motivation, which seems to imply that something inside the person is responsible for maintaining the behavior. This view is incorrect from a behavioral standpoint. Applied behavior analysts describe intrinsic motivation as reinforcement that is received by manipulating the physical environment. Some individuals ride bicycles, go backpacking, read, write, or help others because manipulations of the environment provide reinforcement for engaging in those activities.

Defining Basic Intermittent Schedules of Reinforcement

Ratio and Interval Schedules

Applied behavior analysts directly or indirectly embed ratio and interval intermittent schedules of reinforcement in most treatment programs, especially ratio schedules (Lattal & Neef, 1996). Ratio schedules require a number of responses before one response produces reinforcement. If the ratio requirement for a behavior is 10 correct responses, only the 10th correct response produces reinforcement. Interval schedules require an elapse of time before a response produces reinforcement. If the interval requirement is 5 minutes, reinforcement is provided contingent on the first correct response that occurs after 5 minutes has elapsed since the last reinforced response.

Ratio schedules require a number of responses to be emitted for reinforcement; an elapse of time does not change the number contingency. The participant’s response rate, however, determines the rate of reinforcement. The more quickly the person completes the ratio requirement, the sooner reinforcement will occur. Conversely, interval schedules require an elapse of time before a single response produces reinforcement. The total number of responses emitted on an interval schedule is irrelevant to when and how often the reinforcer will be delivered. Emitting a high rate of response during an interval schedule does not increase the rate of reinforcement. Reinforcement is contingent only on the occurrence of one response after the required time has elapsed. The availability of reinforcement is time-controlled with interval schedules, and rate of reinforcement is “self-controlled” with ratio schedules, meaning that the more quickly the individual completes the ratio requirement, the sooner reinforcement will occur.
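The ratio-versus-interval distinction can be sketched in code. The following is a minimal illustration, not from the text; the class and method names are our own invention.

```python
class FixedRatio:
    """Ratio contingency: reinforce every nth response; elapsed time is irrelevant."""
    def __init__(self, n):
        self.n = n
        self.responses = 0

    def record_response(self):
        self.responses += 1
        if self.responses == self.n:
            self.responses = 0
            return True   # reinforcer delivered with the nth response
        return False


class FixedInterval:
    """Interval contingency: reinforce the first response occurring after
    `interval` seconds have elapsed since the last reinforced response."""
    def __init__(self, interval):
        self.interval = interval
        self.last_reinforced = 0.0

    def record_response(self, t):
        if t - self.last_reinforced >= self.interval:
            self.last_reinforced = t
            return True   # first response after the interval has elapsed
        return False
```

On an FR 10 schedule only every 10th response is reinforced, however fast the responses come; on an FI 5-minute (300 s) schedule, responding before the interval elapses has no effect on when reinforcement becomes available.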

Fixed and Variable Schedules

Applied behavior analysts can arrange ratio and interval schedules to deliver reinforcement as a fixed or a variable contingency. With a fixed schedule, the response ratio or the time requirement remains constant. With a variable schedule, the response ratio or the time requirement can change from one reinforced response to another. The combinations of ratio or interval and fixed or variable contingencies define the four basic schedules of intermittent reinforcement: fixed ratio, variable ratio, fixed interval, and variable interval.

The following sections define the four basic schedules of intermittent reinforcement, provide examples of each schedule, and present some well-established schedule effects derived from basic research.

Fixed Ratio Defined

A fixed ratio (FR) schedule of reinforcement requires the completion of a number of responses to produce a reinforcer. For example, every fourth correct (or target) response produces reinforcement on an FR 4 schedule. An FR 15 schedule means that 15 responses are required to produce reinforcement. Skinner (1938) conceptualized each ratio requirement as a response unit. Accordingly, the response unit produces the reinforcer, not just the last response of the ratio.

Some business and industrial tasks are paid on an FR schedule (e.g., piecework). A worker might receive a pay credit after completing a specified number of tasks (e.g., assembling 15 pieces of equipment or picking a box of oranges). A student might receive either a happy face after learning 5 new sight words or a certain number of points after completing 10 math problems.

De Luca and Holborn (1990) reported a comparison of obese and nonobese children’s rates of pedaling an exercise bicycle under baseline and FR schedules of reinforcement. The baseline and FR conditions used the same duration of exercise. After establishing a stable rate of pedaling during baseline, De Luca and Holborn introduced an FR schedule that matched the rate of reinforcement produced during baseline. All participants increased their rate of pedaling with the introduction of the FR schedule.

Fixed Ratio Schedule Effects

Consistency of Performance

FR schedules produce a typical pattern of responding: (a) After the first response of the ratio requirement, the participant completes the required responses with little hesitation between responses; and (b) a postreinforcement pause follows reinforcement (i.e., the participant does not respond for a period of time following reinforcement). The size of the ratio influences the duration of the postreinforcement pause: Large ratio requirements produce long pauses; small ratios produce short pauses.

Rate of Response

FR schedules often produce high rates of response. Quick responding on FR schedules maximizes the delivery of reinforcement because the quicker the rate of response, the greater the rate of reinforcement. People work rapidly with a fixed ratio because they receive reinforcement with the completion of the ratio requirements. Computer keyboarders (typists) who contract their services usually work on an FR schedule. They receive a specified amount for the work contracted. A typist with a 25-page manuscript to complete is likely to type at the maximum rate. The sooner the manuscript is typed, the sooner payment is received, and the more work the typist can complete in a day.

The size of the ratio can influence the rate of response on FR schedules. To a degree, the larger the ratio requirement, the higher the rate of response. A teacher could reinforce every third correct answer to arithmetic facts. With this ratio requirement, the student might complete 12 problems within the specified time, producing reinforcement four times. The student might complete more problems in less time if the teacher arranged reinforcement contingent on 12 correct answers rather than 3. The higher ratio is likely to produce a higher rate of response. The rate of response decreases, however, if the ratio requirements are too large. The maximum ratio is determined in part by the participant’s past FR history of reinforcement, motivating operations, the quality of the reinforcer, and the procedures that change the ratio requirements. For example, if ratio requirements are raised gradually over an extended period of time, extremely high ratio requirements can be reached.

Figure 1 summarizes the schedule effects typically produced by FR schedules of reinforcement.

Variable Ratio Defined

A variable ratio (VR) schedule of reinforcement requires the completion of a variable number of responses to produce a reinforcer. A number representing the average (e.g., mean) number of responses required for reinforcement identifies the VR schedule. For example, with a VR 10 schedule every tenth correct response on the average produces reinforcement. Reinforcement can come after 1 response, 20 responses, 3 responses, 13 responses, or n responses, but the average number of responses required for reinforcement is 10 (e.g., 1 + 20 + 3 + 13 + 13 = 50; 50/5 = 10).
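The naming convention can be checked with quick arithmetic. The series below is illustrative only; any series of ratio requirements whose mean is 10 defines a VR 10 schedule.

```python
# Illustrative series of individual ratio requirements on a VR schedule.
ratios = [1, 20, 3, 13, 13]

# The VR value is the mean ratio: total responses divided by the number
# of reinforcers delivered.
vr_value = sum(ratios) / len(ratios)
print(vr_value)  # 10.0 → a VR 10 schedule
```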

The operation of a slot machine, the one-armed bandit, provides a good example of a VR schedule. These machines are programmed to pay off only a certain proportion of the times they are played. A player cannot predict when the next operation of the machine will pay off. The player might win 2 or 3 times in succession and then not win again for 20 or more plays.

De Luca and Holborn (1992) examined the effects of a VR schedule on three obese and three nonobese children’s rates of pedaling an exercise bicycle. The children could use the exercise bicycle Monday to Friday during each week of the analysis, but received no encouragement to do so. The participants received the instruction to “exercise as long as you like” to initiate the baseline condition. De Luca and Holborn introduced the VR schedule of reinforcement after establishing a stable baseline rate of pedaling. They calculated the baseline mean number of pedal revolutions per minute and programmed the first VR contingency at approximately 15% faster pedaling than the baseline mean. The children received points on the VR schedule to exchange for backup reinforcers. De Luca and Holborn increased the VR schedule in two additional increments by approximately 15% per increment. All participants had systematic increases in their rate of pedaling with each VR value, meaning that the larger the variable ratio, the higher the rate of response. De Luca and Holborn reported that the VR schedule produced higher rates of response than did the FR schedule in their previous study (De Luca & Holborn, 1990).

[Figure 1 presents a stylized cumulative-response curve for FR schedules.]

Figure 1 Summary of FR schedule effects during ongoing reinforcement. Definition: Reinforcement delivered contingent on emission of a specified number of responses. Schedule effects: After reinforcement a postreinforcement pause occurs (a). After the pause the ratio requirement is completed with a high rate of response, a "run," and very little hesitation between responses (b); the reinforcer is delivered upon emission of the nth response (c). The size of the ratio influences both the pause and the rate.

[Figure 2 shows six panels (Scott, Shawn, and Steve, nonobese; Peter, Paul, and Perry, obese) plotting Mean Revolutions per Minute across Sessions.]

Figure 2 Mean revolutions per minute during baseline, VR 1 (VR range, 70 to 85), VR 2 (VR range, 90 to 115), VR 3 (VR range, 100 to 130), return to baseline, and return to VR 3 phases for obese and nonobese subjects.
From "Effects of a Variable-Ratio Reinforcement Schedule with Changing Criteria on Exercise in Obese and Nonobese Boys" by R. V. De Luca and S. W. Holborn, 1992, Journal of Applied Behavior Analysis, 25, p. 674. Copyright 1992 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Figure 2 presents the participants’ performances under baseline and VR (i.e., VR ranges 70 to 85, 90 to 115, 100 to 130) conditions.

Student behaviors usually produce reinforcement following the completion of variable ratios. Usually a student cannot predict when the teacher will call on him to give an answer and receive reinforcement. Good grades, awards, promotions—all may come after an unpredictable number of responses. And in checking seatwork, the teacher might reinforce a student’s work after the completion of 10 tasks, another student’s work after 3 tasks, and so on.

[Figure 3 presents a stylized cumulative-response curve for VR schedules.]

Figure 3 Summary of VR schedule effects during ongoing reinforcement. Definition: Reinforcer is delivered after the emission of a variable number of responses. Schedule effects: Ratio requirements are completed with a very high, steady rate of response and little hesitation between responses (a); reinforcement is delivered after a varying number of required responses are emitted (b). Postreinforcement pauses are not a characteristic of the VR schedule. Rate of response is influenced by the size of the ratio requirements.

Variable Ratio Schedule Effects

Consistency of Performance

VR schedules produce consistent, steady rates of response. They typically do not produce a postreinforcement pause, as do FR schedules. Perhaps the absence of pauses in responding is due to the absence of information about when the next response will produce reinforcement. Responding remains steady because the next response may produce reinforcement.

Rate of Response

Like the FR schedule, the VR schedule tends to produce a quick rate of response. Also similar to the FR schedule, the size of the ratio influences the rate of response. To a degree, the larger the ratio requirement, the higher the rate of response. Again like FR schedules, when variable ratio requirements are thinned gradually over an extended period of time, participants will respond to extremely high ratio requirements. Figure 3 summarizes the schedule effects typically produced by VR schedules of reinforcement.

Variable Ratio Schedules in Applied Settings

Basic researchers use computers to select and program VR schedules of reinforcement. VR schedules used in applied settings are seldom implemented with a planned and systematic approach. In other words, the reinforcer is delivered by chance, hit or miss, in most interventions. This nonsystematic delivery of reinforcement is not an effective use of VR schedules. Teachers can select and preplan VR schedules that approximate the VR schedules used in basic research. For example, teachers can plan variable ratios by (a) selecting a maximum ratio for a given activity (e.g., 15 responses) and (b) using a table of random numbers to produce the specific variable ratios for the schedule of reinforcement. A table of random numbers might produce the following sequence of ratios: 8, 1, 1, 14, 3, 10, 14, 15, and 6, producing a VR 8 schedule of reinforcement (on the average each 8th response produces the reinforcer) with the ratios ranging from 1 to 15 responses.
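The planning steps just described can be sketched as follows. This is an illustration only: `plan_variable_ratios` is a hypothetical helper name, and Python's random generator stands in for a printed table of random numbers.

```python
import random

def plan_variable_ratios(max_ratio, n_ratios, seed=None):
    """Draw individual ratio requirements (1..max_ratio) at random and
    report the resulting VR value (the mean ratio)."""
    rng = random.Random(seed)
    ratios = [rng.randint(1, max_ratio) for _ in range(n_ratios)]
    return ratios, sum(ratios) / len(ratios)

# e.g., with a maximum ratio of 15 and nine draws:
ratios, vr_value = plan_variable_ratios(max_ratio=15, n_ratios=9)
```

A drawn sequence such as 8, 1, 1, 14, 3, 10, 14, 15, and 6 averages 8, giving a VR 8 schedule with individual ratios ranging from 1 to 15 responses.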

Teachers can apply the following VR procedures as individual or group contingencies of reinforcement for academic or social behavior:

Tic-Tac-Toe VR Procedure1. The teacher establishes a maximum number for the

individual student or group. The larger the maxi-mum number selected, the greater the odds againstmeeting the contingency. For example, 1 chanceout of 100 has less chance of being selected than 1 chance out of 20.

2. The teacher gives the individual or group a tic-tac-toe grid.

3. Students fill in each square of the grid with a number no greater than the maximum number. For example, if the maximum number is 30, the score sheet might look like this:

Schedules of Reinforcement


4. The teacher fills a box or some other type of container with numbered slips of paper (with numbers no higher than the maximum number). Each number should be included several times; for example, five 1s, five 2s, five 3s.

5. Contingent on the occurrence of the target behavior, students withdraw one slip of paper from the box. If the number on the paper corresponds with a number on the tic-tac-toe sheet, the students mark out that number on the grid.

6. The reinforcer is delivered when students have marked out three numbers in a row—horizontally, vertically, or diagonally.

For example, a student might withdraw one slip of paper for each homework assignment completed. Selecting an activity from the class job board (e.g., teacher's helper, collecting milk money, running the projector) could serve as the consequence for marking out three numbers in a row.

Classroom Lottery VR Procedure

1. Students write their names on index cards after successfully completing assigned tasks.

2. Students put signature cards into a box located on the teacher's desk.

3. After an established interval of time (e.g., 1 week), the teacher draws a signature card from the box and declares that student the winner. The lottery can have first, second, and third place, or any number of winners. The more cards students earn, the greater is the chance that one of their cards will be picked.

Teachers have used classroom lotteries with a variety of student accomplishments, such as nonassigned book reading. For example, for each book read, students write their names and the title of the book they have read on a card. Every 2 weeks the teacher picks one card from the box and gives the winning student a new book. To make the book an especially desirable consequence, the teacher lets students earn the privilege of returning the book to the school, inscribed with the student's name, class, and date (e.g., Brian Lee, fifth grade, donated this book to the High Street Elementary School Library on May 22, 2007).

Desk Calendar VR Procedure

1. Students receive desk calendars with loose-leaf date pages secured to the calendar base.

2. The teacher removes the loose-leaf date pages from the calendar base.

3. The teacher establishes a maximum ratio for the students.

4. The teacher numbers index cards consecutively from 1 to the maximum ratio. Multiple cards are included for each number (e.g., five 1s, five 2s). If a large average ratio is desired, the teacher includes more large numbers; for small average ratios, the teacher uses smaller numbers.

5. The teacher uses a paper punch to punch holes in the index cards for attaching the cards to the calendar base.

6. The teacher or student shuffles the index cards to quasi-randomize the order and attaches the index cards to a calendar base face down.

7. Students produce their own VR schedules by turning over one index card at a time. After meeting that ratio requirement, students flip the second card to produce the next ratio, and so on.

Students can use the desk calendar base to program VR schedules for most curriculum areas (e.g., arithmetic facts). For example, after receiving an arithmetic worksheet, the student flips the first card. It has a 5 written on it. After completing five problems, she holds up her hand to signal her teacher that she has completed the ratio requirement. The teacher checks the student's answers, provides feedback, and presents the consequence for correct problems. The student flips the second card; the ratio requirement is 1. After completing that single problem, she receives another consequence and flips the third card. This time the ratio is 14. The cycle continues until all of the cards in the stack are used. New cards can then be added or old cards reshuffled to create a new sequence of numbers. The average of the numbers does not change in the reshuffling.
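The card deck in steps 4 through 6 can be sketched briefly; the function name is illustrative. The sketch also checks the closing observation: reshuffling changes the sequence of ratios but not their average, because the same cards remain in the deck.

```python
import random

def make_card_deck(max_ratio, copies_per_number):
    """Index cards numbered 1..max_ratio, several copies of each,
    shuffled to quasi-randomize the order (steps 4 and 6)."""
    deck = [n for n in range(1, max_ratio + 1) for _ in range(copies_per_number)]
    random.shuffle(deck)
    return deck

deck = make_card_deck(15, 5)
mean_before = sum(deck) / len(deck)
random.shuffle(deck)              # a new sequence of ratios...
mean_after = sum(deck) / len(deck)
assert mean_before == mean_after  # ...but the average ratio is unchanged
```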

Fixed Interval Schedules

A fixed interval (FI) schedule of reinforcement provides reinforcement for the first response following a fixed duration of time. With an FI 3-minute schedule, the first response following the elapse of 3 minutes produces the reinforcer. A common procedural misunderstanding with the FI schedule is to assume that the elapse of time alone is sufficient for the delivery of a reinforcer, assuming that the reinforcer is delivered at the end of each fixed interval of time. However, more time than the fixed interval can elapse between reinforced responses. The reinforcer is available after the fixed time interval has elapsed, and it remains available until the first response. When the first response occurs sometime after the elapse of a fixed interval, that response is immediately reinforced, and the timing of another fixed interval is usually started with the delivery of the reinforcer. This FI cycle is repeated until the end of the session.
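The FI cycle just described can be modeled with a small, hypothetical function: the reinforcer becomes available when the interval elapses, is delivered with the first response after that point, and the next interval is timed from the delivery.

```python
def fi_reinforced_responses(response_times, interval):
    """Return the times (in the same units as `interval`) of responses
    that produce reinforcement under an FI schedule."""
    available_at = interval
    deliveries = []
    for t in sorted(response_times):
        if t >= available_at:            # first response after the interval elapses
            deliveries.append(t)         # reinforced immediately
            available_at = t + interval  # timing of the next interval starts here
    return deliveries

# FI 3: responses at 1, 2, 4, 5, and 8 minutes; only the first response
# after each interval elapses (at 4 and at 8) is reinforced.
print(fi_reinforced_responses([1, 2, 4, 5, 8], 3))  # [4, 8]
```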

Actual examples of FI schedules in everyday life are difficult to find. However, some situations do approximate and in reality function as FI schedules. For example, mail is often delivered close to a fixed time each day. An individual can make many trips to the mailbox to look for mail, but only the first trip to the mailbox following the mail delivery will produce reinforcement. Many textbook examples of FI schedules, such as the mail example, do not meet the definition of an FI schedule; but the examples do appear similar to an FI schedule. For example, receiving a paycheck as wages for work by the hour, day, week, or month is contingent on the first response on payday that produces the paycheck. Of course, receiving the paycheck requires many responses during the interval that eventually lead to receiving the paycheck. In a true FI schedule, responses during the interval do not influence reinforcement.

FI schedules are relatively easy to use in applied settings. A teacher could make reinforcement available on an FI 2-minute schedule for correct answers on an arithmetic worksheet. The teacher or student could use an electronic timer with a countdown function to signal the elapse of the 2-minute interval. The student's first correct answer following the interval produces reinforcement, and then the teacher resets the timer for another 2-minute interval. Similarly, the teacher could use small timing instruments such as the Gentle Reminder ([email protected]) and MotivAiders (www.habitchange.com) that vibrate to signal the elapse of an interval.

Fixed Interval Schedule Effects

Consistency of Performance

FI schedules typically produce a postreinforcement pause in responding during the early part of the interval. An initially slow but accelerating rate of response is evident toward the end of the interval, usually reaching a maximum rate just before delivery of the reinforcer. This gradually accelerating rate of response toward the end of the interval is called an FI scallop because of the rounded curves that are shown on a cumulative graph (see Figure 4).

FI postreinforcement pause and scallop effects can be seen in many everyday situations. When college students are assigned a term paper, they typically do not rush to the library and start to work on the paper immediately. More often they wait a few days or weeks before starting to work. However, as the due date approaches, their work on the assignment increases in an accelerating fashion, and many are typing the final draft just before class. Cramming for a midterm or final examination is another example of the FI scallop effect.

These examples with the postreinforcement pause and scallop effects appear to be produced by FI schedules of reinforcement. They are not, however, because like the paycheck example, college students must complete many responses during the interval to produce the term paper or a good grade on the examinations, and the term paper and examinations have deadlines. With FI schedules, responses during the interval are irrelevant, and FI schedules have no deadlines for the response.

Why does an FI schedule produce a characteristic pause and scallop effect? After adjustment to an FI schedule, participants learn (a) to discriminate the elapse of time and (b) that responses emitted right after a reinforced response are never reinforced. Therefore, extinction during the early part of the interval might account for the postreinforcement pause. The effects of FI and FR schedules of reinforcement are similar in that both schedules produce postreinforcement pauses. However, it is important to recognize the different characteristics of behavior that emerge under each schedule. Responses under an FR schedule are emitted at a consistent rate until completing the ratio requirement, whereas responses under an FI schedule begin at a slow rate and accelerate toward the end of each interval.

Figure 4 Summary of FI schedule effects during ongoing reinforcement.
Definition: The first correct response after a designated and constant amount of time produces the reinforcer.
Schedule Effects: FI schedules generate slow to moderate rates of responding with a pause in responding following reinforcement. Responding begins to accelerate toward the end of the interval.
[Stylized cumulative record: a = postreinforcement pause; b = increase in response rate as the interval progresses and the reinforcer becomes available; c = reinforcer delivered contingent on the first correct response after the interval.]

Figure 5 Summary of VI schedule effects during ongoing reinforcement.
Definition: The first correct response following varying intervals of time produces the reinforcer.
Schedule Effects: A VI schedule generates a slow to moderate response rate that is constant and stable. There are few, if any, postreinforcement pauses with VI schedules.
[Stylized cumulative record: a = steady response rate with few, if any, postreinforcement pauses; b = reinforcer delivered.]

Rate of Responding

Overall, FI schedules tend to produce slow to moderate rates of response. The duration of the time interval influences the postreinforcement pause and the rate of response; to a degree, the larger the fixed interval requirement, the longer the postreinforcement pause and the lower the overall rate of response.

Variable Interval Schedules

A variable interval (VI) schedule of reinforcement provides reinforcement for the first correct response following the elapse of variable durations of time. The distinguishing feature of VI schedules is that "the intervals between reinforcement vary in a random or nearly random order" (Ferster & Skinner, 1957, p. 326). Behavior analysts use the average (i.e., mean) interval of time before the opportunity for reinforcement to describe VI schedules. For example, in a VI 5-minute schedule the average duration of the time intervals between reinforcement and the opportunity for subsequent reinforcement is 5 minutes. The actual time intervals in a VI 5-minute schedule might be 2 minutes, 5 minutes, 3 minutes, 10 minutes, or n minutes (or seconds).

An example of VI reinforcement in everyday situations occurs when one person telephones another person whose line is busy. This is a VI schedule because a variable interval of time is necessary for the second person to conclude the telephone conversation and hang up so that another call can be connected. After that interval the first dialing of the second person's number will probably produce an answer (the reinforcer). The number of responses (attempts) does not influence the availability of reinforcement in a VI schedule; no matter how many times the busy number is dialed, the call will not be completed until the line is free. And the time interval is unpredictable in a VI schedule: The busy signal may last for a short or long time.

Variable Interval Schedule Effects

Consistency of Performance

A VI schedule of reinforcement tends to produce a constant, stable rate of response. The slope of the VI schedule on a cumulative graph appears uniform with few pauses in responding (see Figure 5). A VI schedule typically produces few hesitations between responses. For example, pop quizzes at unpredictable times tend to occasion more consistent study behavior from students than do quizzes scheduled at fixed intervals of time. Furthermore, students are less apt to engage in competing off-task behaviors during instructional and study periods when a pop quiz is likely. The pop quiz is often used as an example of a VI schedule because the performance effect is similar to VI performance. The pop quiz does not represent a true VI schedule, however, because of the required responses during the interval and the deadline for receiving reinforcement.

Rate of Responding

VI schedules of reinforcement tend to produce low to moderate rates of response. Like the FI schedule, the average duration of the time intervals on VI schedules influences the rate of response; to a degree, the larger the average interval, the lower the overall rate of response. Figure 5 summarizes the schedule effects typically produced by VI schedules during ongoing reinforcement.


Variable Interval Schedules in Applied Settings

Basic researchers use computers to select and program VI schedules of reinforcement, as they do with VR schedules. Teachers seldom apply VI schedules in a planned and systematic way. For example, a teacher might set an electronic countdown timer with varied intervals of time ranging from 1 minute to 10 minutes without any prior plan as to which intervals or which order will be used. This set-them-as-you-go selection of intervals approximates the basic requirements for a VI schedule; however, it is not the most effective way of delivering reinforcement on a VI schedule. A planned, systematic application of varied intervals of time should increase the effectiveness of a VI schedule.

For example, applied behavior analysts can select the maximum time interval, whether in seconds or minutes, that will maintain performance and still be appropriate for the situation. Preferably, applied behavior analysts will use data from a direct assessment to guide the selection of the maximum VI interval, or at the least clinical judgment based on direct observation. Analysts can use a table of random numbers to select the varied intervals between 1 and the maximum interval, and then identify the VI schedule by calculating an average value for the VI schedule. The VI schedule may need adjustments following the selection of time intervals. For example, if a larger average interval of time appears reasonable, the teacher can replace some of the smaller intervals with larger ones. Conversely, if the average appears too large, the teacher can replace some of the higher intervals with smaller ones.
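A minimal sketch of this planning procedure, with a pseudorandom generator in place of a table of random numbers; the function names and the single-swap adjustment strategy are illustrative assumptions, not a prescribed method.

```python
import random

def plan_vi_schedule(max_interval, n_intervals, rng=None):
    """Select varied intervals between 1 and max_interval; the mean
    of the intervals names the VI schedule."""
    rng = rng or random.Random()
    intervals = [rng.randint(1, max_interval) for _ in range(n_intervals)]
    return intervals, sum(intervals) / len(intervals)

def raise_average(intervals, max_interval):
    """If a larger average interval appears reasonable, replace the
    smallest interval with a larger one (one simple adjustment)."""
    adjusted = sorted(intervals)
    adjusted[0] = max_interval
    return adjusted

intervals, mean = plan_vi_schedule(10, 8, random.Random(1))
bigger = raise_average(intervals, 10)
assert sum(bigger) / len(bigger) >= mean  # the schedule's average went up
```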

Interval Schedules with a Limited Hold

When a limited hold is added to an interval schedule, reinforcement remains available for a finite time following the elapse of the FI or VI interval. The participant will miss the opportunity to receive reinforcement if a targeted response does not occur within the time limit. For example, on an FI 5-minute schedule with a limited hold of 30 seconds, the first correct response following the elapse of 5 minutes is reinforced, but only if the response occurs within 30 seconds after the end of the 5-minute interval. If no response occurs within 30 seconds, the opportunity for reinforcement has been lost and a new interval begins. The abbreviation LH identifies interval schedules using a limited hold (e.g., FI 5-minute LH 30-second, VI 3-minute LH 1-minute). Limited holds with interval schedules typically do not change the overall response characteristics of FI and VI schedules beyond a possible increase in rate of response.
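The limited-hold contingency can be sketched as follows. The function name and the toy FI 5-second LH 2-second values are illustrative; the sketch assumes a new interval begins when a hold window expires unanswered.

```python
def fi_lh_reinforced(response_times, interval, limited_hold):
    """FI schedule with a limited hold: reinforcement is available from
    the end of each interval, but only for `limited_hold` time units."""
    window_start = interval
    deliveries = []
    for t in sorted(response_times):
        # Windows that expired before this response are lost opportunities;
        # each expiration starts the timing of a new interval.
        while t > window_start + limited_hold:
            window_start = window_start + limited_hold + interval
        if t >= window_start:            # response landed inside the hold window
            deliveries.append(t)
            window_start = t + interval  # timing restarts at delivery
    return deliveries

# Toy FI 5 LH 2: the first window is 5-7; a response at 6 is reinforced,
# a response at 8 has missed the window.
print(fi_lh_reinforced([6], 5, 2))  # [6]
print(fi_lh_reinforced([8], 5, 2))  # []
```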

Martens, Lochner, and Kelly (1992) used a VI schedule of social reinforcement to increase the academic engagement of two 8-year-old boys in a third-grade classroom. The classroom teacher reported that the boys had serious off-task behaviors. The experimenter wore an earphone connected to a microcassette recorder containing a 20-second fixed-time cueing tape. The cueing tape was programmed for a VI schedule of reinforcement in which only some of the 20-second intervals provided the opportunity for reinforcement in the form of verbal praise for academic engagement. If the boys were not academically engaged when the VI interval timed out, they lost that opportunity for reinforcement until the next cue. Thus, this VI schedule entailed a very short limited hold for the availability of reinforcement. Following baseline, the experimenter delivered contingent praise on a VI 5-minute or VI 2-minute schedule that alternated daily on a quasi-random basis. Both boys' academic engagement on the VI 5-minute schedule resembled their baseline engagement. Both students had a higher percentage of academic engagement on the VI 2-minute schedule than they had during baseline and VI 5-minute conditions. Figure 6 presents percentages of academic engagement across baseline and VI conditions.

Thinning Intermittent Reinforcement

Applied behavior analysts often use one of two procedures for schedule thinning. First, they thin an existing schedule by gradually increasing the response ratio or the duration of the time interval. If a student has answered addition facts effectively and responded well to a CRF schedule for two or three sessions, the teacher might thin the reinforcement contingency slowly from one correct addition fact (CRF) to a VR 2 or VR 3 schedule. The student's performance should guide the progression from a dense schedule (i.e., responses produce frequent reinforcement) to a thin schedule (i.e., responses produce less frequent reinforcement). Applied behavior analysts should use small increments of schedule changes during thinning and ongoing evaluation of the learner's performance to adjust the thinning process and avoid the loss of previous improvements.
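One way this performance-guided thinning might be operationalized is sketched below. The step size and accuracy criterion are illustrative assumptions, not values from the text; the point is that the mean ratio rises only while performance holds up and densifies when it slips.

```python
def next_mean_ratio(current, accuracy, *, step=1, criterion=0.9):
    """Illustrative thinning rule: raise the mean ratio a small step when
    the session's accuracy meets the criterion; otherwise drop back to a
    denser schedule to recover the behavior."""
    if accuracy >= criterion:
        return current + step          # thin gradually, e.g., CRF -> VR 2 -> VR 3
    return max(1, current - step)      # performance slipped; densify

schedule = 1                           # VR 1 is continuous reinforcement (CRF)
for session_accuracy in [0.95, 0.92, 0.97, 0.70, 0.94]:
    schedule = next_mean_ratio(schedule, session_accuracy)
print(schedule)  # 4: thinned to VR 4, with one step back after the poor session
```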

Second, teachers often use instructions to clearly communicate the schedule of reinforcement, facilitating a smooth transition during the thinning process. Instructions include rules, directions, and signs. Participants do not require an awareness of environmental contingencies for effective intermittent reinforcement, but instructions may enhance the effectiveness of interventions when participants are told what performances produce reinforcement.


Figure 6 Percentage of academic engagement for each child across all conditions in Experiment 2.
[Separate panels for Bob and Mark show percent engagement across sessions during baseline and the alternating VI 5-minute and VI 2-minute treatment conditions.]
From "The Effects of Variable-Interval Reinforcement on Academic Engagement: A Demonstration of Matching Theory" by B. K. Martens, D. G. Lochner, and S. Q. Kelly, 1992, Journal of Applied Behavior Analysis, 25, p. 149. Copyright 1992 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Ratio strain can result from abrupt increases in ratio requirements when moving from denser to thinner reinforcement schedules. Common behavioral characteristics associated with ratio strain include avoidance, aggression, and unpredictable pauses in responding. Applied behavior analysts should reduce the ratio requirement when ratio strain is evident. The analyst can again gradually thin ratio requirements after recovering the behavior. Small and gradual increases in ratio requirements help to avoid the development of ratio strain. Ratio strain will also occur when the ratio becomes so large that the reinforcement cannot maintain the response level or the response requirement exceeds the participant's physiological capabilities.

Variations on Basic Intermittent Schedules of Reinforcement

Schedules of Differential Reinforcement of Rates of Responding

Applied behavior analysts frequently encounter behavior problems that result from the rate at which people perform certain behaviors. Responding too infrequently, or too often, may be detrimental to social interactions or academic learning. Differential reinforcement provides an intervention for behavior problems associated with rate of response. Differential reinforcement of particular rates of behavior is a variation of ratio schedules. Delivery of the reinforcer is contingent on responses occurring at a rate either higher than or lower than some predetermined criterion. The reinforcement of responses higher than a predetermined criterion is called differential reinforcement of high rates (DRH). When responses are reinforced only when they are lower than the criterion, the schedule provides differential reinforcement of low rates (DRL). DRH schedules produce a higher rate of responding. DRL schedules produce a lower rate of responding.

Applied behavior analysts use three definitions of DRH and DRL schedules. The first definition states that reinforcement is available only for responses that are separated by a given duration of time. This first definition is sometimes called spaced-responding DRH or spaced-responding DRL. An interresponse time (IRT) identifies the duration of time that occurs between two responses. IRT and rate of response are functionally related. Long IRTs produce low rates of responding; short IRTs produce high rates of responding. Responding on a DRH schedule produces reinforcement whenever a response occurs before a time criterion has elapsed. If the time criterion is 30 seconds, the participant's response produces reinforcement only when the IRT is 30 seconds or less.
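The spaced-responding contingency reduces to a single IRT comparison; `reinforce_by_irt` is an illustrative name for a check a computer-based practice program might apply after each response.

```python
def reinforce_by_irt(irt, criterion, schedule):
    """Spaced-responding DRH/DRL: the interresponse time (IRT) between
    two responses determines whether the second one is reinforced."""
    if schedule == "DRH":
        return irt <= criterion   # short IRTs, i.e., a high response rate
    if schedule == "DRL":
        return irt >= criterion   # long IRTs, i.e., a low response rate
    raise ValueError("schedule must be 'DRH' or 'DRL'")

# With a 30-second criterion, an IRT of 20 s meets the DRH contingency
# (the response came before the criterion elapsed) but not the DRL one.
print(reinforce_by_irt(20, 30, "DRH"))  # True
print(reinforce_by_irt(20, 30, "DRL"))  # False
```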


Under the DRL schedule, a response produces reinforcement when it occurs after a time criterion has elapsed. If the stated DRL time criterion is again 30 seconds, a response produces reinforcement only when the IRT is 30 seconds or greater.

This first definition of DRH and DRL as IRT schedules of reinforcement has been used almost exclusively in laboratory settings. There are two apparent reasons for its lack of application in applied settings: (a) Most applied settings do not have sufficient automated equipment to measure IRT and to deliver reinforcement using an IRT criterion; and (b) reinforcement is delivered usually, but not necessarily, following each response that meets the IRT criterion. Such frequent reinforcement would disrupt student activity in most instructional settings. However, with increased use of computers for tutorial and academic response practice, opportunities should increasingly become available for using IRT-based schedules of reinforcement to accelerate or decelerate academic responding. Computers can monitor the pauses between academic responses and provide consequences for each response meeting the IRT criterion, with little disruption in instructional activity.

Based on the laboratory procedures for programming DRL schedules presented previously, Deitz (1977) labeled and described two additional procedures for using differential reinforcement of rates of responding in applied settings: full-session DRH or DRL and interval DRH or DRL. Deitz initially used the full-session and interval procedures as a DRL intervention for problem behaviors. The full-session and interval procedures, however, apply also to DRH.

A DRH full-session schedule provides reinforcement if the total number of responses during the session meets or exceeds a number criterion. If the participant emits fewer than the specified number of responses during the session, the behavior is not reinforced. The DRL full-session schedule is procedurally the same as the DRH schedule, except that reinforcement is provided for responding at or below the criterion limit. If the participant emits more than the specified number of responses during the session, reinforcement is not delivered.
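The full-session decision rule is a single end-of-session comparison; names and example totals below are illustrative.

```python
def full_session_reinforced(total_responses, criterion, schedule):
    """Full-session DRH reinforces a session total at or above the
    criterion; full-session DRL reinforces a total at or below it."""
    if schedule == "DRH":
        return total_responses >= criterion
    if schedule == "DRL":
        return total_responses <= criterion
    raise ValueError("schedule must be 'DRH' or 'DRL'")

# A session total of 12 against a criterion of 10:
print(full_session_reinforced(12, 10, "DRH"))  # True: total met the criterion
print(full_session_reinforced(12, 10, "DRL"))  # False: too many responses
```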

The interval definition for DRH and DRL schedules states that reinforcement is available only for responses that occur at a minimum or better rate of response over short durations of time during the session. To apply an interval DRH schedule, the applied behavior analyst organizes the instructional session into equal intervals of time and dispenses a reinforcer at the end of each interval when the student emits a number of responses equal to or greater than a number criterion. The interval DRL schedule is procedurally like the interval DRH schedule, except that reinforcement is provided for responding at or below the criterion limit.

The differential reinforcement of diminishing rates (DRD) schedule provides reinforcement at the end of a predetermined time interval when the number of responses is less than a criterion that is gradually decreased across time intervals based on the individual's performance (e.g., fewer than five responses per 5 minutes, fewer than four responses per 5 minutes, fewer than three responses per 5 minutes, etc.). Deitz and Repp (1973) used a group DRD contingency to reduce off-task talking of 15 high school senior girls. They set the first DRD criterion limit at five or fewer occurrences of off-task talking during each 50-minute class session. The DRD criterion limits were then gradually reduced to three or fewer, one or fewer, and finally no responses. The students earned a free Friday class when they kept off-task talking at or below the DRD limit Monday through Thursday.

The previous example of a DRD schedule used a procedure identical to that described for the full-session DRL. DRD is also a procedural variation on interval DRL schedules described by Deitz (1977) and Deitz and Repp (1983). The typical procedure for using an interval DRL as an intervention for problem behavior provided reinforcement contingent on emitting one or no responses per brief interval. After the problem behavior stabilizes at the initial criterion, the applied behavior analyst maintains the maximum criterion of one or no responses per interval, but increases the duration of the session intervals to further diminish the behavior. Increasing the duration of session intervals continues gradually until the problem behavior achieves a terminal low rate of responding.

Later, Deitz and Repp (1983) programmed the interval DRL with a criterion greater than one response per interval, then gradually diminished the maximum number of responses per interval while the duration of the interval remained constant (e.g., fewer than five responses per 5 minutes, fewer than four responses per 5 minutes, fewer than three responses per 5 minutes, etc.). The DRD schedule and the interval DRL schedule that use a maximum number criterion greater than one per interval are different terms for the same procedure. Full-session and interval DRL have a long history of application in applied behavior analysis. DRD offers applied behavior analysts a new, and perhaps improved, label for the interval DRL procedure.
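The DRD contingency with decreasing criteria can be sketched as follows, modeled loosely on the Deitz and Repp (1973) limits of 5, 3, 1, and 0 occurrences per session; the response counts are invented for illustration.

```python
def drd_earned(count, criterion):
    """DRD: the reinforcer is delivered at the end of the interval when
    the response count is at or below a criterion that is lowered
    across successive phases."""
    return count <= criterion

criteria = [5, 3, 1, 0]   # off-task-talking limits across successive phases
counts = [4, 3, 2, 0]     # hypothetical counts for one class per phase
print([drd_earned(c, k) for c, k in zip(counts, criteria)])
# [True, True, False, True]: only the third phase's count exceeded its limit
```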

Progressive Schedules of Reinforcement

A progressive schedule of reinforcement systematically thins each successive reinforcement opportunity independent of the participant's behavior. Progressive ratio (PR) and progressive interval (PI) schedules of reinforcement change schedule requirements using (a) arithmetic progressions to add a constant amount to each successive ratio or interval or (b) geometric progressions to add successively a constant proportion of the preceding ratio or interval (Lattal & Neef, 1996). Progressive schedules of reinforcement are often used for reinforcer assessment and behavioral intervention, as described in the following sections.
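The two ways of progressing requirements can be sketched as follows; `progressive_ratios` is an illustrative name, and the starting values are arbitrary examples.

```python
def progressive_ratios(start, n, *, add=None, multiply=None):
    """PR requirements: an arithmetic progression adds a constant amount
    to each successive ratio; a geometric progression multiplies the
    preceding ratio by a constant proportion (Lattal & Neef, 1996)."""
    if add is None and multiply is None:
        raise ValueError("specify add= or multiply=")
    ratios = [start]
    for _ in range(n - 1):
        if add is not None:
            ratios.append(ratios[-1] + add)
        else:
            ratios.append(round(ratios[-1] * multiply))
    return ratios

print(progressive_ratios(2, 5, add=2))       # [2, 4, 6, 8, 10]
print(progressive_ratios(2, 5, multiply=2))  # [2, 4, 8, 16, 32]
```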

Using Progressive Schedules for Reinforcer Assessment

Applied behavior analysts typically use a dense schedule of reinforcement (e.g., CRF) during reinforcer assessment while presenting preferred stimuli to increase or maintain existing behavior. However, Roane, Lerman, and Vorndran (2001) cautioned that "reinforcement effects obtained during typical reinforcer assessments may have limited generality to treatment efficacy when schedule thinning and other complex reinforcement arrangements are used" (p. 146). They made an important clinical point by showing that two reinforcers could be equally effective under dense schedules of reinforcement, but differentially effective when the schedule of reinforcement requires more responses per reinforcement. Progressive schedules of reinforcement provide an assessment procedure for identifying reinforcers that will maintain treatment effects across increasing schedule requirements. During the session, progressive schedules are typically thinned to the "breaking point," when the participant stops responding. Comparing the breaking points and corresponding number of responses associated with each reinforcer can identify relative reinforcement effects.
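Comparing breaking points across reinforcers amounts to a simple ranking; the reinforcer names and breakpoint values below are hypothetical.

```python
def rank_reinforcers(breakpoints):
    """Order reinforcers by the PR breaking point each one sustained; a
    higher breaking point suggests the reinforcer maintains responding
    under leaner schedule requirements."""
    return sorted(breakpoints, key=breakpoints.get, reverse=True)

# Hypothetical breaking points (last ratio completed before responding stopped):
print(rank_reinforcers({"praise": 8, "tokens": 32, "stickers": 16}))
# ['tokens', 'stickers', 'praise']
```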

Using Progressive Schedules for Intervention

Applied behavior analysts have used progressive schedules to develop self-control (e.g., Binder, Dixon, & Ghezzi, 2000; Dixon & Cummins, 2001). For example, Dixon and Holcomb (2000) used a progressive schedule to develop cooperative work behaviors and self-control in six adults dually diagnosed with mental retardation and psychiatric disorders. The adults participated in two groups, comprising three men in Group 1 and three women in Group 2. During a natural baseline condition, the groups received instruction to exchange or share cards to complete a cooperative task of sorting playing cards into piles by categories (i.e., hearts with hearts, etc.). Dixon and Holcomb terminated a natural baseline session for the group when one of the adults quit sorting cards.

The groups received points for working on the card-sorting task during the choice baseline condition and the self-control training condition. Groups exchanged their earned points for items such as soda pop or cassette players, ranging in value from 3 points to 100 points.

During the choice baseline conditions, the group's participants could choose an immediate 3 points before doing the card sorting or a delayed 6 points after sorting the cards. Both groups chose the immediate smaller number of points rather than the larger amount following a delay to reinforcement.

During self-control training, the participants were asked while working on a cooperative task, "Do you want 3 points now, or would you like 6 points after sorting the cards for Z minutes and seconds?" (pp. 612–613). The delay was initially 0 seconds for both groups. The progressive delay to reinforcement ranged from an increase of 60 seconds to 90 seconds following each session in which the group performance met the exact criterion for number of seconds of task engagement. The terminal goals for the delay to reinforcement were 490 seconds for Group 1 and 772 seconds for Group 2. Both groups achieved these delay-to-reinforcement goals. Following the introduction of the progressive delay procedure, both groups improved their cooperative work engagement and the self-control necessary to select progressively larger delays to reinforcement that resulted in more points earned. Figure 7 shows the performance of both groups of adults during natural baseline, choice baseline, and self-control training conditions.

Compound Schedules of Reinforcement

Applied behavior analysts combine the elements of continuous reinforcement (CRF), the four intermittent schedules of reinforcement (FR, VR, FI, VI), differential reinforcement of various rates of responding (DRH, DRL), and extinction (EXT) to form compound schedules of reinforcement. Elements from these basic schedules can occur

• successively or simultaneously;

• with or without discriminative stimuli; and

• as a reinforcement contingency for each element independently, or a contingency formed by the combination of all elements (Ferster & Skinner, 1957).

Concurrent Schedules

A concurrent schedule (conc) of reinforcement occurs when (a) two or more contingencies of reinforcement (b) operate independently and simultaneously (c) for two

Schedules of Reinforcement


[Figure: two panels (Group 1 and Group 2) plotting Group Engagement (seconds) across Sessions during N.B., C.B., and S.C.T. conditions, with goal lines, a note that Peter left the study, and a legend distinguishing no reinforcement, failed attempts at the large reinforcer, the small reinforcer, and the large reinforcer.]

Figure 7 Number of seconds of engagement in the concurrent delay activity of cooperative card sorting during natural baseline (N.B.), choice baseline (C.B.), and self-control training (S.C.T.) for each group of participants. Filled circles represent performance at exactly the criterion level, and X data points represent the number of seconds of engagement below the criterion. From "Teaching Self-Control to Small Groups of Dually Diagnosed Adults" by M. R. Dixon and S. Holcomb, 2000, Journal of Applied Behavior Analysis, 33, p. 613. Copyright 1992 by the Society for the Experimental Analysis of Behavior Inc. Reprinted by permission.

or more behaviors. People in the natural environment have opportunities for making choices among concurrently available events. For example, Sharon receives a weekly allowance from her parents contingent on doing daily homework and cello practice. After school she can choose when to do homework and when to practice the cello, and she can distribute her responses between these two simultaneously available schedules of reinforcement. Applied behavior analysts use concurrent schedules for reinforcer assessment and for behavioral interventions.

Using Concurrent Schedules for Reinforcer Assessment

Applied behavior analysts have used concurrent schedules extensively to provide choices during the assessment of consequence preferences and the assessment of response quantities (e.g., force, amplitude) and reinforcer quantities (e.g., rate, duration, immediacy, amount). Responding to concurrent schedules provides a desirable assessment procedure because (a) the participant makes choices, (b) making choices during assessment approximates the natural environment, (c) the schedule is effective in producing hypotheses about potential reinforcers operating in the participant's environment, and (d) these assessments require the participant to choose between stimuli rather than indicating a preference for a given stimulus (Adelinis, Piazza, & Goh, 2001; Neef, Bicard, & Endo, 2001; Piazza et al., 1999).

Roane, Vollmer, Ringdahl, and Marcus (1998) presented 10 items to a participant, 2 items at a time. The participant had 5 seconds to select 1 item by using a reaching response to touch the selected item. As a consequence for the selection, the participant received the item for 20 seconds. The analyst verbally prompted a response if the participant did not respond within 5 seconds, waiting another 5 seconds for the occurrence of a prompted response. Items were eliminated from the assessment (a) if they were not chosen during the first five presentations or (b) if they were chosen two or fewer times during the first seven presentations. The participant made a total of 10 choices among the remaining items. The number of selections out of the 10 opportunities served as a preference index.

Using Concurrent Schedules for Intervention

Applied behavior analysts have used concurrent schedules extensively for improving vocational, academic, and social skills in applied settings (e.g., Cuvo, Lerch, Leurquin, Gaffaney, & Poppen, 1998; Reid, Parsons, Green, & Browning, 2001; Romaniuk et al., 2002). For example, Hoch, McComas, Johnson, Faranda, and Guenther (2002) arranged two concurrent response alternatives for three boys with autism. The boys could play in one setting with a peer or sibling, or play alone in another area. Hoch and colleagues manipulated the duration of access to toys (i.e., reinforcer magnitude) and preference (i.e., reinforcer quality). In one condition, the magnitude and quality of the reinforcer were equal in both settings. In the other condition, the magnitude and quality of the reinforcer were greater for play in the setting


[Figure: three panels plotting Percentage of Responses Allocated to Peer (top), to Novel Peers (middle), and to Sibling (bottom) across Sessions, under condition phases Equal Magnitude and Unequal Magnitude—Paired for Robbie, and Unequal Quality—Unpaired, Unequal Quality—Paired, Equal Quality (High), and Equal Quality (Low) for Abe.]

Figure 8 Percentage of responses allocated to the play area with the peer across experimental sessions (top panel) and in natural-setting probes with different peers in the classroom (middle panel) for the analysis of magnitude of reinforcement with Robbie, and the percentage of responses allocated to the play area with the sibling across experimental sessions for the analysis of quality of reinforcement with Abe (bottom panel). From "The Effects of Magnitude and Quality of Reinforcement on Choice Responding During Play Activities" by H. Hoch, J. J. McComas, L. Johnson, N. Faranda, and S. L. Guenther, 2002, Journal of Applied Behavior Analysis, 35, p. 177. Copyright 1992 by the Society for the Experimental Analysis of Behavior Inc. Reprinted by permission.

with a peer or sibling than in the play-alone setting. With the introduction of the condition with greater magnitude and quality of the reinforcer, the boys allocated more play responses to the setting with the peer or sibling, rather than playing alone. The magnitude and quality of the reinforcer influenced choices made by the three boys. Figure 8 reports the percentage of responses allocated to the concurrent play areas.

Concurrent Performances: Formalizing the Matching Law

Cuvo and colleagues (1998) reported that concurrent schedules typically produce two response patterns. With concurrent interval schedules (conc VI VI, conc FI FI), participants "typically do not allocate all of their responses exclusively to the richer schedule [i.e., the schedule producing the higher rate of reinforcement]; rather, they distribute their responding between the two schedules to match or approximate the proportion of reinforcement that is actually obtained on each independent schedule" (p. 43). Conversely, with concurrent ratio schedules (conc VR VR, conc FR FR), participants are sensitive to the ratio schedules and tend to maximize reinforcement by responding primarily to the ratio that produces the higher rate of reinforcement.

Williams (1973) identified three types of interactions found with concurrent schedules. First, when similar reinforcement is scheduled for each of the concurrent responses, the response receiving the higher frequency of reinforcement will increase in rate, whereas a corresponding decrease will occur in the response rate of the other behavior. Second, when one response produces reinforcement and the other produces punishment, responses associated with punishment will decrease in occurrence. That decrease may produce a higher rate of response for the behavior producing reinforcement. Third, with a concurrent schedule programmed for one response to produce reinforcement and the other response to produce avoidance of an aversive stimulus, the rate of avoidance responding will accelerate with an increase in the intensity or the frequency of the aversive stimulus. As avoidance responding accelerates, responding on the reinforcement schedule will typically decrease.


The characteristics of performance on concurrent schedules as detailed previously by Cuvo and colleagues and Williams are consistent with the relationships formalized by Herrnstein (1961, 1970) as the matching law. The matching law addresses response allocation to choices available with concurrent schedules of reinforcement. Basically, the rate of responding typically is proportional to the rate of reinforcement received from each choice alternative.
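For two concurrent alternatives, this proportional relation is conventionally written in the operant literature as:

```latex
% Herrnstein's matching law for two concurrent alternatives:
% the proportion of responses allocated to alternative 1 matches
% the proportion of reinforcers obtained from alternative 1.
\frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2}
```

where B1 and B2 are the rates of responding allocated to the two alternatives and R1 and R2 are the rates of reinforcement obtained from them. For example, an alternative that yields 30 reinforcers per hour against a competitor yielding 10 would be expected to attract about 30/(30 + 10), or 75%, of the responses.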

Discriminative Schedules of Reinforcement

Multiple Schedules

A multiple schedule (mult) presents two or more basic schedules of reinforcement in an alternating, usually random, sequence. The basic schedules within the multiple schedule occur successively and independently. A discriminative stimulus is correlated with each basic schedule, and that stimulus is present as long as the schedule is in effect.

Academic behaviors can become sensitive to the control of multiple schedules of reinforcement. A student might respond to basic arithmetic facts with her teacher, and also with her tutor. With the teacher, the student responds to arithmetic facts during small-group instruction. The tutor then provides individual instruction and practice on the facts. This situation follows a multiple schedule because there is one class of behavior (i.e., math facts), a discriminative stimulus for each contingency in effect (i.e., teacher/tutor, small group/individual), and different conditions for reinforcement (i.e., reinforcement is less frequent in group instruction). In another everyday example of the multiple schedule, Jim helps his mother and father clean house on Friday afternoons and Saturday mornings. Jim cleans his grandmother's bedroom and bathroom on Friday afternoons and the family room and downstairs bathroom on Saturday mornings. Jim receives $5 per week for cleaning his grandmother's rooms but does not receive money for cleaning the family room or downstairs bathroom. Again, there is one class of behaviors of interest (i.e., cleaning the house), a cue for each contingency in effect (i.e., grandmother's rooms on Fridays or other rooms on Saturdays), and different schedules of reinforcement associated with the different cues (i.e., $5 for grandmother's rooms and no money for the other rooms).

Chained Schedules

A chained schedule (chain) is similar to a multiple schedule. The multiple and chained schedules have two or more basic schedule requirements that occur successively, and have a discriminative stimulus correlated with each independent schedule. A chained schedule differs from a multiple schedule in three ways. First, the basic schedules in a chained schedule always occur in a specific order, never in the random or unpredictable order of multiple schedules. Second, the behavior may be the same for all elements of the chain, or different behaviors may be required for different elements in the chain. Third, conditioned reinforcement for responding in the first element in a chain is the presentation of the second element; conditioned reinforcement for responding in the second element is presentation of the third element; and so on until all elements in the chain have been completed in a specific sequence. The last element normally produces unconditioned reinforcement in a laboratory setting, or unconditioned or conditioned reinforcement in applied settings.
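These three properties can be sketched in code. This is only a minimal illustration, not a procedure from the text: the class name and FR component values are invented for the example. Completing one component "presents" the next link (conditioned reinforcement), and only the final component produces the terminal reinforcer.

```python
class ChainSchedule:
    """Sketch of a chained schedule: component schedules always run in a
    fixed order; completing one component presents the next (conditioned
    reinforcement), and only the final component produces the terminal
    reinforcer. The FR requirements here are illustrative."""

    def __init__(self, fr_requirements):
        self.requirements = fr_requirements  # e.g., [3, 5, 2]
        self.component = 0                   # index of the component in effect
        self.responses = 0                   # responses within that component

    def respond(self):
        """Record one response. Return 'next-link' when a component is
        completed, 'terminal-reinforcer' at the end of the chain, else None."""
        self.responses += 1
        if self.responses >= self.requirements[self.component]:
            self.responses = 0
            if self.component == len(self.requirements) - 1:
                self.component = 0           # chain resets after reinforcement
                return "terminal-reinforcer"
            self.component += 1
            return "next-link"
        return None
```

A multiple schedule, by contrast, would pick its component unpredictably and signal it with a distinct discriminative stimulus.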

The following example shows an elaborate sequence of different behaviors that must occur in a specific order. To service a bicycle headset, the mechanic will complete a chain with 13 components: (1) disconnect the front brake cable; (2) remove handlebar and stem; (3) remove front wheel; (4) remove locknut; (5) unscrew adjusting race; (6) take fork out of frame; (7) inspect races; (8) grease and replace bearing balls for lower stack; (9) grease and replace bearing balls for upper race; (10) grease threads of steering column; (11) put fork into frame and thread the screwed race; (12) return lock washer; (13) adjust and lock the headset. The final outcome (i.e., a clean, greased, and adjusted bicycle headset) is contingent on the completion of all 13 components.

Nondiscriminative Schedules of Reinforcement

Mixed Schedules

The mixed schedule (mix) uses a procedure identical to the multiple schedule, except that the mixed schedule has no discriminative stimuli correlated with the independent schedules. For example, with a mix FR 10 FI 1 schedule, reinforcement sometimes occurs after the completion of 10 responses and sometimes occurs with the first correct response after a 1-minute interval from the preceding reinforcement.
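The mix FR 10 FI 1 example can be sketched as a small simulation. The class name and the assumption that responses arrive one at a time with known inter-response times are illustrative, not from the text; the key point is that the component in effect is chosen at random and never signaled.

```python
import random

class MixFR10FI1:
    """Sketch of a mix FR 10 FI 1 schedule: an FR 10 and an FI 1-minute
    component alternate in random order, with no discriminative stimulus
    signaling which component is currently in effect."""

    def __init__(self):
        self._start_component()

    def _start_component(self):
        # The next component is picked at random; the learner gets no cue.
        self.component = random.choice(["FR10", "FI60"])
        self.responses = 0   # responses since the last reinforcer
        self.elapsed = 0.0   # seconds since the last reinforcer

    def respond(self, seconds_since_previous):
        """Record one response (seconds_since_previous = time since the
        previous response, or since the last reinforcer). Return True if
        this response is reinforced."""
        self.elapsed += seconds_since_previous
        self.responses += 1
        if self.component == "FR10" and self.responses >= 10:
            self._start_component()
            return True
        if self.component == "FI60" and self.elapsed >= 60.0:
            # First response after the 1-minute interval is reinforced.
            self._start_component()
            return True
        return False
```

The same sketch becomes a multiple schedule the moment each component is announced by a distinct stimulus while it is in effect.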

Tandem Schedules

The tandem schedule (tand) uses a procedure identical to the chained schedule, except that, like the mix schedule, the tandem schedule does not use discriminative stimuli with the elements in the chain. After a participant makes 15 responses on a tand FR 15 FI 2, the first correct response following an elapse of 2 minutes produces reinforcement.

Antecedent stimuli appear to relate functionally to most occurrences of behavior in natural environments. Perhaps for this reason, the mixed and tandem schedules have little applied application at this time. However, basic research has produced considerable data concerning the effects of mixed and tandem schedules on behavior. As the knowledge base of applied behavior analysis continues to develop, it may become more apparent how applied behavior analysts can effectively apply mixed and tandem schedules in assessment, intervention, and analysis.

Schedules Combining the Number of Responses and Time

Alternative Schedules

An alternative schedule (alt) provides reinforcement whenever the requirement of either a ratio schedule or an interval schedule—the basic schedules that comprise the alt—is met, regardless of which of the component schedule's requirements is met first. With an alt FR 50 FI 5-minute schedule, reinforcement is delivered whenever either of these two conditions has been met: (a) 50 correct responses, provided the 5-minute interval of time has not elapsed; or (b) the first response after the elapse of 5 minutes, provided that fewer than 50 responses have been emitted.

For instance, a teacher using an alt FR 25 FI 3-minute schedule of reinforcement assigns 25 math problems and assesses the student's correct and incorrect answers following the elapse of 3 minutes. If the student completes the 25 problems before the elapse of 3 minutes, the teacher checks the student's answers and provides a consequence consistent with the FR 25 schedule. However, if the ratio requirement of 25 math problems has not been completed after an elapse of 3 minutes, the first correct answer following the 3 minutes produces reinforcement. The alternative schedule offers the advantage of a second chance for reinforcement if the student has not met the FR requirement in a reasonable amount of time. The FI provides reinforcement for one response, and that one reinforced response might encourage continued responding with the new start of the FR requirement.
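The either/or rule can be made concrete with a short sketch. The class name, the units, and the assumption that responses arrive one at a time with known inter-response times are illustrative: a response is reinforced as soon as either the ratio count or the interval requirement is satisfied, and both requirements then reset.

```python
class AltSchedule:
    """Sketch of an alternative (alt) schedule: a response is reinforced
    as soon as EITHER the ratio requirement OR the interval requirement
    is met, whichever happens first; both requirements then reset."""

    def __init__(self, ratio, interval_seconds):
        self.ratio = ratio
        self.interval = interval_seconds
        self.responses = 0   # responses since the last reinforcer
        self.elapsed = 0.0   # seconds since the last reinforcer

    def respond(self, seconds_since_previous):
        """Record one response; return True if it is reinforced."""
        self.elapsed += seconds_since_previous
        self.responses += 1
        if self.responses >= self.ratio or self.elapsed >= self.interval:
            self.responses = 0
            self.elapsed = 0.0
            return True
        return False

# The teacher's alt FR 25 FI 3-minute arrangement from the example:
classroom = AltSchedule(ratio=25, interval_seconds=180)
```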

Conjunctive Schedules

A conjunctive schedule (conj) of reinforcement is in effect whenever reinforcement follows the completion of response requirements for both a ratio schedule and an interval schedule of reinforcement. For example, a student behavior produces reinforcement when at least 2 minutes have elapsed and 50 responses have been made. This arrangement is a conj FI 2 FR 50 schedule of reinforcement. With the conjunctive schedule of reinforcement, the first response following the conclusion of the time interval produces reinforcement if the criterion number of responses has been completed.
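The both/and rule can be sketched as follows. The class name and units are illustrative, and the sketch assumes responses arrive one at a time with known inter-response times: a response is reinforced only when the interval has elapsed and the ratio count is also complete.

```python
class ConjSchedule:
    """Sketch of a conjunctive (conj) schedule: a response is reinforced
    only when BOTH the interval has elapsed AND the ratio requirement
    has been completed; both requirements then reset."""

    def __init__(self, interval_seconds, ratio):
        self.interval = interval_seconds
        self.ratio = ratio
        self.responses = 0   # responses since the last reinforcer
        self.elapsed = 0.0   # seconds since the last reinforcer

    def respond(self, seconds_since_previous):
        """Record one response; return True if it is reinforced."""
        self.elapsed += seconds_since_previous
        self.responses += 1
        if self.elapsed >= self.interval and self.responses >= self.ratio:
            self.responses = 0
            self.elapsed = 0.0
            return True
        return False

# The conj FI 2-minute FR 50 arrangement from the example:
student = ConjSchedule(interval_seconds=120, ratio=50)
```

Swapping the `and` for an `or` in the reinforcement test turns this conjunctive sketch into an alternative schedule, which is the entire difference between the two arrangements.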

A 14-year-old boy with autism had higher rates of aggression with two of his four therapists during instruction. The higher rates of aggression were directed toward the two therapists who had previously worked with the boy at a different treatment facility. Progar and colleagues (2001) intervened to reduce the levels of aggression with the therapists from the different facility to the levels that occurred with the other two therapists in the current setting. The boy's aggression occurred in demand situations (e.g., making his bed) and was escape maintained. The initial intervention used three consequences: (1) a 10-minute chair time-out for attempts to choke, (2) escape extinction, and (3) differential reinforcement of other behavior for the omission of aggression during the 10-minute sessions. This intervention was identical to the treatment used with the boy at the other facility. It was ineffective in reducing the boy's aggression in the current setting.

Because of the ineffectiveness of the initial intervention, Progar and colleagues added a conj FR VI-DRO schedule of reinforcement to their initial intervention. They delivered edible reinforcers contingent on completing a three-component task such as dusting or straightening objects (i.e., an FR 3 schedule) and the omission of aggression for an average of every 2.5 minutes (i.e., a VI-DRO 150-second schedule). An occurrence of aggression reset the conj schedule. (Note: Resetting this conj schedule used a standard procedure, because any occurrence of the problem behavior during a DRO interval immediately resets the time to the beginning of the interval.) Progar and colleagues demonstrated that the conj FR VI-DRO schedule produced a substantial reduction in aggression directed toward the two therapists from the other treatment facility.

Duvinsky and Poppen (1982) found that human performance on a conjunctive schedule is influenced by the ratio and interval requirements. When task requirements are high in relation to the interval requirements, people are likely to work steadily on the task throughout the time available. However, people are likely to engage in behaviors other than the task requirements when there is a large time interval and a low ratio requirement.

Table 1 provides a summary of the characteristics of compound schedules of reinforcement.


Table 1 Summary and Comparison of Basic Dimensions Defining Compound Schedules of Reinforcement

Dimension                                    Concurrent  Multiple   Chained    Mixed      Tandem     Alternative  Conjunctive

Number of basic schedules of                 2 or more   2 or more  2 or more  2 or more  2 or more  2 or more    2 or more
reinforcement in effect

Number of response classes involved          2 or more   1          1 or more  1          1 or more  1            1

Discriminative stimuli or cues associated    Possible    Yes        Yes        No         No         Possible     Possible
with each component schedule

Successive presentation of basic schedules   No          Yes        Yes        Yes        Yes        No           No

Simultaneous presentation of basic           Yes         No         No         No         No         Yes          Yes
schedules

Reinforcement limited to final component     No          No         Yes        No         Yes        No           Yes
of basic schedule

Reinforcement for independent components     Yes         Yes        No         Yes        No         Yes          No
of basic schedule

Perspectives on Applying Schedules of Reinforcement in Applied Settings

Applied Research with Intermittent Schedules

Basic researchers have systematically analyzed the effects of intermittent schedules of reinforcement on the performance of organisms (e.g., Ferster & Skinner, 1957). Their results have produced well-established schedule effects. These schedule effects have strong generality across many species, response classes, and laboratories. However, a review of the applied literature on schedule effects (e.g., Journal of Applied Behavior Analysis, 1968 to 2006) will show that applied behavior analysts have not embraced the analysis of schedule effects with the enthusiasm of basic researchers. Consequently, schedule effects have not been documented clearly in applied settings. Uncontrolled variables in applied settings, such as the following, influence a participant's sensitivity and insensitivity to the schedule of reinforcement:

1. Instructions given by the applied behavior analyst, self-instructions, and environmental aids (e.g., calendars, clocks) make human participants resistant to temporal schedule control.

2. Past histories of responding to intermittent schedules of reinforcement can affect current schedule sensitivity or insensitivity.

3. Immediate histories from schedules of reinforcement may affect current schedule performance more than remote past histories.

4. Sequential responses required in many applied applications of intermittent schedules of reinforcement (e.g., work leading to the paycheck, studying for a pop quiz) are uncommon applications of schedules of reinforcement, particularly with interval schedules.

5. Uncontrolled establishing operations in conjunction with schedules of reinforcement in applied settings will confound schedule effects.


Some well-established schedule effects found in basic research were presented earlier in this chapter. Applied behavior analysts, however, should use caution in extrapolating these effects to applied settings, for the following reasons:

1. Most applied applications of schedules of reinforcement only approximate true laboratory schedules of reinforcement, especially the interval schedules, which may occur rarely in natural environments (Nevin, 1998).

2. Many uncontrolled variables in applied settings will influence a participant's sensitivity and insensitivity to the schedule of reinforcement (Madden, Chase, & Joyce, 1998).

Applied Research with Compound Schedules

Applied researchers have seldom analyzed the effects of compound reinforcement schedules, with the notable exceptions of concurrent schedules and, to a lesser degree, chained schedules. Applied researchers should include the analysis of compound schedules in their research agendas. A better understanding of the effects of compound schedules on behavior will advance the development of applied behavior analysis and its applications. This perspective is important because compound schedules of reinforcement act directly on human behavior, and they also influence behavior by interacting with other environmental variables (e.g., antecedent stimuli, motivating operations) (Lattal & Neef, 1996).

Applied Research with Adjunctive Behavior

This chapter has stressed the effects of schedules of reinforcement on the specific behaviors that produce reinforcement. Other behaviors can occur when an individual responds to a given contingency of reinforcement. These other behaviors occur independently of schedule control. Typical examples of such behaviors include normal time fillers, such as doodling, smoking, idle talking, and drinking. Such behaviors are called adjunctive behaviors, or schedule-induced behaviors, when the frequency of these time-filling behaviors increases as a side effect of other behaviors maintained by a schedule of reinforcement (Falk, 1961, 1971).

A substantial body of experimental literature has developed on many types of adjunctive behaviors with nonhuman subjects (see reviews by Staddon, 1977; Wetherington, 1982), along with some basic research with human subjects (e.g., Kachanoff, Leveille, McLelland, & Wayner, 1973; Lasiter, 1979). Diverse examples of adjunctive behaviors commonly observed in laboratory experiments include aggression, defecation, pica, and wheel running. Some common excessive human problem behaviors might develop as adjunctive behaviors (e.g., the use of drugs, tobacco, caffeine, and alcohol; overeating; nail biting; self-stimulation; and self-abuse). These potentially excessive adjunctive behaviors are socially significant, but the possibility that such excesses are developed and maintained as adjunctive behaviors has been essentially ignored in applied behavior analysis.

Foster (1978), in an extended communication to the readership of the Journal of Applied Behavior Analysis, reported that applied behavior analysts have neglected the potentially important area of adjunctive behavior. He stated that applied behavior analysis does not have a data or knowledge base for adjunctive phenomena. Similarly, Epling and Pierce (1983) called for applied behavior analysts to extend the laboratory-based findings on adjunctive behavior to the understanding and control of socially significant human behavior. To our knowledge, Lerman, Iwata, Zarcone, and Ringdahl's (1994) article provides the only research on adjunctive behavior published in the Journal of Applied Behavior Analysis from 1968 through 2006. Lerman and colleagues provided an assessment of stereotypic and self-injurious behavior as adjunctive responses. Data from this preliminary study suggest that intermittent reinforcement did not induce self-injury, but with some individuals, stereotypic behavior showed characteristics of adjunctive behavior.

Foster (1978) and Epling and Pierce (1983) cautioned that many teachers and therapists may apply interventions directly to adjunctive behaviors rather than to the variables functionally related to their occurrence. These direct interventions may be futile and costly in terms of money, time, and effort because adjunctive behaviors appear resistant to interventions using operant contingencies.

The conditions under which adjunctive behaviors are developed and maintained are a major area for future research in applied behavior analysis. Applied research directed to adjunctive behaviors will advance the science of applied behavior analysis and will provide an important foundation for improved practices in therapy and instruction.


Summary

Intermittent Reinforcement

1. A schedule of reinforcement is a rule that establishes the probability that a specific occurrence of a behavior will produce reinforcement.

2. Only selected occurrences of behavior produce reinforcement with an intermittent schedule of reinforcement.

3. Applied behavior analysts use continuous reinforcement during the initial stages of learning and for strengthening behavior.

4. Applied behavior analysts use intermittent reinforcement to maintain behavior.

Defining Basic Intermittent Schedules of Reinforcement

5. A fixed ratio schedule requires a specified number of responses before a response produces reinforcement.

6. A variable ratio schedule requires a variable number of responses before reinforcement is delivered.

7. A fixed interval schedule provides reinforcement for the first response following the elapse of a specific, constant duration of time since the last reinforced response.

8. A variable interval schedule provides reinforcement for the first response following the elapse of a variable duration of time since the last reinforced response.

9. When a limited hold is added to an interval schedule, reinforcement remains available for a finite time following the elapse of the FI or VI interval.

10. Each basic schedule of reinforcement has unique response characteristics that determine the consistency of responding, the rate of responding, and performance during extinction.

Thinning Intermittent Reinforcement

11. Applied behavior analysts often use one of two procedures to thin schedules of reinforcement. An existing schedule is thinned by gradually increasing the response ratio or by gradually increasing the duration of the time interval.

12. Applied behavior analysts should use small increments of schedule changes during thinning and ongoing evaluation of the learner's performance to adjust the thinning process and avoid the loss of previous improvements.

13. Ratio strain can result from abrupt increases in ratio requirements when moving from denser to thinner reinforcement schedules.

Variations on Basic Intermittent Schedules of Reinforcement

14. DRH and DRL are variations of ratio schedules and specify that reinforcement will be delivered contingent on responses occurring above or below criterion response rates.

15. The differential reinforcement of diminishing rates schedule provides reinforcement at the end of a predetermined time interval when the number of responses is below a criterion. The criterion for the number of responses is gradually decreased across time intervals based on the individual's performance.

16. Progressive schedules of reinforcement systematically thin each successive reinforcement opportunity independent of the participant's behavior.

Compound Schedules of Reinforcement

17. Continuous reinforcement, the four simple intermittent schedules of reinforcement, differential reinforcement of rates of responding, and extinction, when combined, produce compound schedules of reinforcement.

18. Compound schedules of reinforcement include concurrent, multiple, chained, mixed, tandem, alternative, and conjunctive schedules.

Perspectives on Applying Schedules of Reinforcement in Applied Settings

19. Some well-established schedule effects found in basic research were presented in this chapter. Applied behavior analysts, however, should use caution in extrapolating these effects to applied settings.

20. Applied researchers should include an analysis of the basic intermittent schedules and the compound schedules in their research agendas. A better understanding of schedule effects in applied settings will advance the development of applied behavior analysis and its applications.

21. The conditions under which adjunctive behaviors are developed and maintained are an important area for future research in applied behavior analysis.


Punishment by Stimulus Presentation

Key Terms

behavioral contrast
conditioned punisher
discriminative stimulus for punishment
generalized conditioned punisher
negative punishment
overcorrection
positive practice overcorrection
positive punishment
punisher
punishment
response blocking
restitutional overcorrection
unconditioned punisher

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List©, Third Edition

Content Area 3: Principles, Processes, and Concepts

3-5 Define and provide examples of positive and negative punishment.

3-6 Define and provide examples of conditioned and unconditioned punishment.

Content Area 9: Behavior Change Procedures

9-3 Use positive (and negative) punishment.

(a) Identify and use punishers.

(b) Use appropriate parameters and schedules of punishment.

(c) State and plan for the possible unwanted effects of the use of punishment.

© 2006 The Behavior Analyst Certification Board, Inc.,® (BACB®) all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

From Chapter 14 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


Have you ever stubbed your toe while walking too fast in a darkened room and then walked slowly the rest of the way to the light switch? Have you ever left a sandwich unattended at a beach party, watched a seagull fly away with it, and then refrained from turning your back on the next treat you pulled from the picnic basket? If you have had these or similar experiences, you have been the beneficiary of punishment.

It may strike you as strange that we would refer to someone who stubbed his toe or lost his sandwich as benefiting from the experience, as opposed to referring to that person as a "victim" of punishment. Although many people consider punishment a bad thing—reinforcement's evil counterpart—punishment is as important to learning as reinforcement. Learning from the consequences that produce pain or discomfort, or the loss of reinforcers, has survival value for the individual organism and for the species. Punishment teaches us not to repeat responses that cause us harm. Fortunately, it usually does not take too many stubbed toes or lost picnic treats to reduce the frequency of the behaviors that produced those outcomes.

Although punishment is a natural phenomenon that "occurs like the wind and the rain" (Vollmer, 2002, p. 469) and is one of the basic principles of operant conditioning, it is poorly understood, frequently misapplied, and its application can be controversial. At least some of the misunderstanding and controversy surrounding the use of punishment in behavior change programs derives from confusing punishment as an empirically derived principle of behavior with the variety of everyday and legal connotations of the concept. One common meaning of punishment is the application of aversive consequences—such as physical pain, psychological hurt, the loss of privileges, or fines—for the purpose of teaching a lesson to a person who has misbehaved so she will not repeat the misdeed. Punishment is sometimes used as an act of retribution on the part of the person or agency administering the punishment, or to provide "a lesson for others" on how to behave. Punishments meted out by the legal system, such as jail time and fines, are often considered a process by which convicted lawbreakers must repay their debt to society.

These everyday and legal notions of punishment, although having various degrees of validity within their own contexts, have little, if anything, to do with punishment as a principle of behavior. In the everyday connotation of punishment, most people would agree that a teacher who sends a student to the principal's office for fooling around in class, or a police officer who issues a ticket to a speeding motorist, has punished the offender. However, as a principle of behavior, punishment is not about punishing the person; punishment is a response → consequence contingency that suppresses the future frequency of similar responses. From the perspective of both the science and the practice of behavior analysis, the trip to the principal's office did not punish fooling around in class unless the future frequency at which the student fools around in class decreases as a function of his trip to the principal's office, and the police officer's ticket did not punish speeding unless the motorist drives above the speed limit less often than she did before receiving the ticket.

In this chapter we define the principle of punishment, discuss its side effects and limitations, identify factors that influence the effectiveness of punishment, describe examples of several behavior change tactics that incorporate punishment, discuss ethical considerations in the use of punishment, and present guidelines for using punishment effectively. In the chapter's concluding section, we underscore the need for more basic and applied research on punishment and reiterate Iwata's (1988) recommendation that behavior analysts view punishment by contingent stimulation as a default technology to be implemented when other interventions have failed.

Definition and Nature of Punishment

This section presents the basic functional relation that defines punishment, the two operations by which punishment can be implemented, discrimination effects of punishment, recovery from punishment, unconditioned and conditioned punishers, factors that influence the effectiveness of punishment, and possible side effects of and problems with punishment.

Operation and Defining Effect of Punishment

Like reinforcement, punishment is a two-term, behavior → consequence functional relation defined by its effects on the future frequency of behavior. Punishment has occurred when a response is followed immediately by a stimulus change that decreases the future frequency of similar responses (Azrin & Holz, 1966).

An early study by Hall and colleagues (1971) provides a straightforward example of punishment. Andrea, a 7-year-old girl with hearing impairments, "pinched and bit herself, her peers, the teacher, and visitors to the classroom at every opportunity and was so disruptive the teacher reported that academic instruction was impossible" (p. 24). During an initial 6-day baseline period, Andrea bit or pinched an average of 71.8 times per day. Whenever Andrea bit or pinched anyone during the intervention condition, her teacher immediately pointed at her with an outstretched arm and shouted "No!" On the first day of intervention, the frequency of Andrea's biting and pinching decreased substantially (see Figure 1). Her aggressive behavior followed a downward trend during the initial intervention phase, ending in an average of 5.4 incidents per day. A 3-day return to baseline conditions resulted in Andrea's biting and pinching at a mean rate of 30 times per day. When the teacher reinstated the intervention, pointing her finger and stating "No!" each time Andrea pinched or bit, Andrea's problem behavior dropped to 3.1 incidents per day. During the second intervention phase, the teacher reported that Andrea's classmates were no longer avoiding her, perhaps because their behavior of being near Andrea was punished less often by bites and pinches.

Punishment by Stimulus Presentation

Figure 1 Number of bites and pinches by a 7-year-old girl during baseline and punishment ("No" plus pointing) conditions. From "The Effective Use of Punishment to Modify Behavior in the Classroom" by R. V. Hall, S. Axelrod, M. Foundopoulos, J. Shellman, R. A. Campbell, and S. S. Cranston, 1971, Educational Technology, 11(4), p. 25. Copyright 1971 by Educational Technology. Used by permission.

It is important to point out that punishment is defined neither by the actions of the person delivering the consequences (in the case of socially mediated punishment) nor by the nature of those consequences.1 A decrease in the future frequency of the behavior must be observed before a consequence-based intervention qualifies as punishment. The intervention that proved successful in reducing the frequency of Andrea's biting and pinching—her teacher's pointed finger and "No!"—is classified as a punishment-based treatment only because of its suppressive effects. If Andrea had continued to bite and pinch at the baseline level of responding when the intervention was applied, her teacher's pointing and "No!" would not have been punishment.

1Although the term automatic punishment is used infrequently in the behavior analysis literature, it is similar to automatic reinforcement. Automatic punishment occurs when a punishing consequence (e.g., burned finger) is a socially unmediated, unavoidable outcome of a response (e.g., touching a hot stove).
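The defining test here—did responding decrease relative to baseline?—can be sketched as a simple comparison of response rates. This is an illustrative sketch only: the function name and the 50% reduction criterion are arbitrary choices for the example, not a clinical standard.

```python
def functioned_as_punishment(baseline_rates, intervention_rates, criterion=0.5):
    """Illustrative check: did responding decrease under the intervention?

    A consequence qualifies as punishment only by its effect: the future
    frequency of the behavior must decrease. Here we compare mean daily
    rates; the 50% criterion is a placeholder for the example.
    """
    baseline_mean = sum(baseline_rates) / len(baseline_rates)
    intervention_mean = sum(intervention_rates) / len(intervention_rates)
    return intervention_mean < baseline_mean * criterion

# Andrea's data from Hall and colleagues (1971): 71.8 bites/pinches per day
# in baseline vs. 5.4 per day under the "No!" plus pointing intervention.
print(functioned_as_punishment([71.8], [5.4]))  # → True
```

Had the intervention rates matched baseline, the same check would return False, mirroring the point that the teacher's pointing and "No!" would then not have been punishment.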

Because the presentation of punishers often evokes behavior incompatible with the behavior being punished, the immediate suppressive effects of punishment can easily be overestimated. Michael (2004) explained and provided a good example:

The decreased frequency of the punished response that is due to its having been followed by punishment will not be seen until after behavior evoked by the punishing stimulus change has ceased. Because the evocative effect of the punishment stimulus change (as a respondent unconditioned or conditioned elicitor or as operant SD or MO) is in the same direction as the future change due to punishment as a response consequence (weakening of the punished behavior), the former can be easily misinterpreted as the latter. For example, when a small child's misbehavior is followed by a severe reprimand, the misbehavior will cease immediately, but primarily because the reprimand controls behavior incompatible with the misbehavior—attending to the adult who is doing the reprimanding, denying responsibility for the misbehavior, emotional behavior such as crying, etc. This sudden and total cessation of the misbehavior does not imply, however, that the future frequency of its occurrence has been reduced, which would be the true effect of punishment. (pp. 36–37)

Another factor that contributes to the difficulty of determining the true effectiveness of punishment is that the reduction in response rate is often confounded by extinction effects caused by withholding reinforcement for the problem behavior (something that should be part of a punishment-based intervention whenever possible) (Iwata, Pace, Cowdery, & Miltenberger, 1994).


Positive Punishment and Negative Punishment

Like reinforcement, punishment can be accomplished by either of two types of stimulus change operations. Positive punishment occurs when the presentation of a stimulus (or an increase in the intensity of an already present stimulus) immediately following a behavior results in a decrease in the frequency of the behavior. Stubbing one's toe on a chair leg is a form of positive punishment—if it suppresses the frequency of the behavior that preceded the toe stub—because the painful stimulation is best described as the presence of a new stimulus. Behavior change tactics based on positive punishment involve the contingent presentation of a stimulus immediately following occurrences of the target behavior. The intervention used by Andrea's teacher constituted positive punishment; the teacher's pointed finger and "No!" were stimuli presented or added to Andrea's environment.

Negative punishment involves the termination of an already present stimulus (or a decrease in the intensity of an already present stimulus) immediately following a behavior that results in a decrease in the future frequency of the behavior. The beach party attendee's behavior of turning his back on his food was negatively punished when the seagull flew off with his sandwich. For a stimulus change to function as negative punishment, which amounts to the removal of a positive reinforcer, a "motivating operation for the reinforcer must be in effect, otherwise removing it will not constitute punishment" (Michael, 2004, p. 36). A seagull's flying off with a hungry person's sandwich would function as punishment for his inattentiveness but perhaps have little effect on the behavior of a person who has eaten his fill and set the sandwich down.

Behavior change tactics based on negative punishment involve the contingent loss of available reinforcers immediately following a behavior (i.e., response cost, a procedure akin to a fine) or the removal of the opportunity to acquire additional reinforcers for a period of time (i.e., timeout from reinforcement, a procedure akin to being sidelined during a game).

A wide variety of terms are used in the behavioral literature to refer to the two types of consequence operations for punishment. Positive punishment and negative punishment are sometimes identified as Type I punishment and Type II punishment, respectively (Foxx, 1982). Malott and Trojan Suarez (2004) use the term penalty principle to refer to negative punishment. The Behavior Analyst Certification Board (BACB, 2001) and some textbook authors (e.g., Baum, 1994; Catania, 1998; Michael, 2004; Miltenberger, 2001) use the terms positive punishment and negative punishment, paralleling the terms positive reinforcement and negative reinforcement. As with reinforcement, the modifiers positive and negative used with punishment connote neither the intention nor the desirability of the behavior change produced; they specify only how the stimulus change that served as the punishing consequence was affected—whether it is best described as the presentation of a new stimulus (positive punishment) or the termination (or reduction in intensity or amount) of an already present stimulus (negative punishment).2

2As with reinforcement, the terms positive punishment and negative punishment are, as Michael (2004) noted, "quite susceptible to misunderstanding. Assuming that you must receive either positive or negative punishment, which would you prefer? As with reinforcement, you should certainly not decide until you know specifically what each consists of. Would you prefer negative reinforcement or positive punishment? Of course, negative reinforcement" (p. 37).

Positive punishment and negative reinforcement are frequently confused. Because aversive events are associated with positive punishment and with negative reinforcement, the umbrella term aversive control is often used to describe interventions involving either or both of these two principles. Distinguishing between the two principles is difficult when the same aversive event is involved in concurrent positive punishment and negative reinforcement contingencies. For example, Baum (1994) described how the application and threat of physical beatings might condition the behavior of people living in a police state.

If speaking out results in a beating, then speaking out is positively punished. If lying avoids a beating, then lying is negatively reinforced. The two tend to go hand-in-hand; if one action is punished, there is usually some alternative that avoids punishment. (p. 153)

The keys to identifying and distinguishing concurrent positive punishment and negative reinforcement contingencies involving the same aversive stimulus event are (a) recognizing the opposite effects the two contingencies have on the future frequency of behavior, and (b) realizing that two different behaviors must be involved because the same consequence (i.e., stimulus change) cannot serve as positive punishment and negative reinforcement for the same behavior. In a positive punishment contingency, the stimulus is absent prior to a response and is presented as a consequence; in a negative reinforcement contingency, the stimulus is present prior to a response and is removed as a consequence. For example, positive punishment and negative reinforcement contingencies operated concurrently in a study by Azrin, Rubin, O'Brien, Ayllon, and Roll (1968) in which adults wore an apparatus throughout the normal working day that automatically produced a 55 dB tone contingent on slouching (sustained rounding of the shoulders or upper back for 3 seconds) and immediately terminated the tone when they straightened their shoulders. Slouching produced a tone (positive punishment), straightening the shoulders escaped the tone (negative reinforcement), and nonslouching avoided the tone (negative reinforcement).
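The concurrent contingencies in the Azrin and colleagues (1968) study can be sketched as a simple state rule: the same aversive event, a 55 dB tone, enters a positive punishment contingency for one behavior and a negative reinforcement contingency for another. This is a hypothetical reconstruction for illustration; the function and the discrete posture-sampling logic are assumptions, not the actual apparatus design.

```python
def tone_state(slouch_duration_s, tone_on):
    """Return whether the tone should be on after this posture sample.

    Positive punishment: slouching for 3 s produces the tone.
    Negative reinforcement: straightening the shoulders terminates it.
    """
    if slouch_duration_s >= 3:
        return True           # tone presented contingent on slouching
    if tone_on and slouch_duration_s == 0:
        return False          # tone removed contingent on straightening
    return tone_on

print(tone_state(3, False))   # slouching for 3 s turns the tone on → True
print(tone_state(0, True))    # straightening turns the tone off → False
```

Note that two different behaviors carry the two contingencies: slouching is punished by the tone's onset, while straightening is reinforced by its offset.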

Threatening a person with punishment if she engages in a behavior should not be confused with punishment. Punishment is a behavior → consequence relation, and a threat of what might happen if a person subsequently behaves in a certain way is an antecedent event to the behavior. When the threat of punishment suppresses behavior, it may be due to the threat functioning as an establishing operation that evokes alternative behaviors that avoid the threatened punishment.

Discriminative Effects of Punishment

Punishment does not operate in a contextual vacuum. The antecedent stimulus situation in which punishment occurs plays an important role in determining the environmental conditions in which the suppressive effects of punishment will be observed. The three-term contingency for "the operant functional relation involving punishment can be stated much like that involving reinforcement: (1) In a particular stimulus situation (S), (2) some kinds of behavior (R), when followed immediately by (3) certain stimulus changes (SP), show a decreased future frequency of occurrence in the same or in similar situations" (Michael, 2004, p. 36).

If punishment occurs only in some stimulus conditions and not in others (e.g., a child gets scolded for reaching into the cookie jar before dinner only when an adult is in the room), the suppressive effects of punishment will be most prevalent under those conditions (Azrin & Holz, 1966; Dinsmoor, 1952). A discriminated operant for punishment is the product of a conditioning history in which responses in the presence of a certain stimulus have been punished and similar responses in the absence of that stimulus have not been punished (or have resulted in a reduced frequency or magnitude of punishment). Highway speeding is a discriminated operant in the repertoire of many motorists who drive within the speed limit in and around locations where they have been pulled over by the police for speeding but continue to drive above the speed limit on roads where they have never seen a police cruiser.

There is no standard term or symbol in the behavior analysis literature for an antecedent stimulus that acquires stimulus control related to punishment. Some authors have modified the shorthand symbol for a discriminative stimulus for reinforcement (SD) to indicate the antecedent stimulus in a three-term punishment contingency, such as the SD- used by Sulzer-Azaroff and Mayer (1971). Other authors simply refer to an antecedent stimulus correlated with the presence of a punishment contingency as a punishment-based SD (e.g., Malott & Trojan Suarez, 2004; Michael, 2004). We have adopted SDp as the symbol for the discriminative stimulus for punishment as proposed by O'Donnell (2001). "An SDp can be defined as a stimulus condition in the presence of which a response has a lower probability of occurrence than it does in its absence as a result of response-contingent punishment delivery in the presence of the stimulus" (p. 262). Figure 2 shows three-term contingency diagrams for discriminated operants for positive punishment and negative punishment.
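O'Donnell's (2001) definition is stated in terms of relative response probability, which can be operationalized as a comparison of rates with and without the stimulus present. The function name and the speeding figures below are made up for illustration; only the comparison itself comes from the quoted definition.

```python
def functions_as_sdp(rate_with_stimulus, rate_without_stimulus):
    """Illustrative test of the SDp definition: a stimulus functions as an
    SDp if the response is less probable in its presence than in its absence
    (as a result of a history of punishment in its presence)."""
    return rate_with_stimulus < rate_without_stimulus

# A motorist who speeds on 2% of trips past a known speed-trap location but
# on 30% of trips elsewhere (hypothetical numbers) treats that location as
# an SDp for speeding.
print(functions_as_sdp(0.02, 0.30))  # → True
```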

Recovery from Punishment

When punishment is discontinued, its suppressive effects on responding are usually not permanent, a phenomenon often called recovery from punishment, which is analogous to extinction. Sometimes the rate of responding after punishment is discontinued will not only recover but also briefly exceed the level at which it was occurring prior to punishment (Azrin, 1960; Holz & Azrin, 1962). Recovery of responding to prepunished levels is more likely to occur when the punishment was mild or when the person can discriminate that the punishment contingency is no longer active. Although the response-weakening effects of punishment often wane when punishment is discontinued, so too are the response-strengthening effects of reinforcement often fleeting when previously reinforced behavior is placed on extinction (Vollmer, 2002). Michael (2004) noted that

Recovery from punishment is sometimes given as an argument against the use of punishment to cause a decrease in behavior. "Don't use punishment because the effect is only temporary." But of course the same argument could be made for reinforcement. The strengthening effect of reinforcement decreases when the behavior occurs without the reinforcement. The weakening effect of punishment decreases when the behavior occurs without the punishment. (p. 38)

Virtually permanent response suppression may occur when complete suppression of behavior to a zero rate of responding has been achieved with intense punishment. In their review of basic research on punishment, Azrin and Holz (1966) noted that:

Intense punishment did not merely reduce responses to the unconditioned or operant level, but reduced them to an absolute level of zero. Since punishment is delivered only after a response occurs, there is no opportunity for the subject to detect the absence of punishment unless he responds. If punishment is so severe as to completely eliminate the responses, then the opportunity for detecting the absence of punishment no longer exists. (p. 410)


Figure 2 Three-term contingencies illustrating positive and negative punishment of discriminated operants: a response (R) emitted in the presence of a discriminative stimulus (SDp) is followed closely in time by a stimulus change (SP) and results in a decreased frequency of similar responses in the future when the SDp is present. Positive punishment example: Grandma in the kitchen before dinner (SDp) → reach into cookie jar (R) → Grandma scolds (SP). Negative punishment example: seagulls present at beach picnic (SDp) → leave sandwich unattended (R) → seagull flies away with sandwich (SP). A discriminated operant for punishment is the product of a conditioning history in which responses in the presence of the SDp have been punished and similar responses in the absence of the SDp have not been punished (or have been punished less frequently or at a lower magnitude than in the presence of the SDp).

Unconditioned and Conditioned Punishers

A punisher is a stimulus change that immediately follows the occurrence of a behavior and reduces the future frequency of that type of behavior. The stimulus change in a positive punishment contingency may be referred to as a positive punisher, or more simply as a punisher. Likewise, the term negative punisher can be applied to a stimulus change involved in negative punishment. However, this usage is clumsy because it refers to a positive reinforcer that is removed contingent on occurrences of the target behavior. Therefore, when most behavior analysts use the term punisher, they are referring to a stimulus whose presentation functions as punishment (i.e., a positive punisher). As with reinforcers, punishers can be classified as unconditioned or conditioned.

Unconditioned Punishers

An unconditioned punisher is a stimulus whose presentation functions as punishment without having been paired with any other punishers. (Some authors use primary punisher or unlearned punisher as synonyms for an unconditioned punisher.) Because unconditioned punishers are the product of the evolutionary history of a species (phylogeny), all biologically intact members of a species are more or less susceptible to punishment by the same unconditioned punishers. Painful stimulation such as that caused by physical trauma to the body, certain odors and tastes, physical restraint, loss of bodily support, and extreme muscular effort are examples of stimulus changes that typically serve as unconditioned punishers for humans (Michael, 2004). Like unconditioned reinforcers, unconditioned punishers are "phylogenically important events that bear directly on fitness" of the organism (Baum, 1994, p. 59).

However, virtually any stimulus to which an organism's receptors are sensitive—light, sound, and temperature, to name a few—can be intensified to the point that its delivery will suppress behavior even though the stimulus is below levels that actually cause tissue damage (Bijou & Baer, 1965). Unlike unconditioned reinforcers, such as food and water, whose effectiveness depends on a relevant establishing operation, under most conditions many unconditioned punishers will suppress any behavior that precedes their onset. For example, an organism does not have to be "deprived of electric stimulation" for the onset of electric shock to function as punishment. (However, the behavior of an organism that has just received many shocks in a short period of time, particularly shocks of mild intensity, may be relatively unaffected by another shock.)

Conditioned Punishers

A conditioned punisher is a stimulus change that functions as punishment as a result of a person's conditioning history. (Some authors use secondary punisher or learned punisher as synonyms for a conditioned punisher.) A conditioned punisher acquires the capability to function as a punisher through stimulus–stimulus pairing with one or more unconditioned or conditioned punishers. For example, as a result of its onset at or very near the same time as an electric shock, a previously neutral stimulus change, such as an audible tone, will become a conditioned punisher capable of suppressing behavior that immediately precedes the tone when it occurs later in the absence of the shock (Hake & Azrin, 1965).3 If a conditioned punisher is repeatedly presented without the punisher(s) with which it was initially paired, its effectiveness as punishment will wane until it is no longer a punisher.

Previously neutral stimuli can also become conditioned punishers for humans without direct physical pairing with another punisher, through a pairing process Alessi (1992) called verbal analog conditioning. This is similar to the example of verbal pairing for the conditioning of a conditioned reinforcer in which Engelmann (1975) showed cut-up pieces of yellow construction paper to a group of preschool children and told them, "These pieces of yellow paper are what big kids work for" (pp. 98–100). From that point on, many children began working extra hard for yellow pieces of paper. Miltenberger (2001) gave the example of a carpenter telling his apprentice that if the electric saw starts to make smoke, the saw motor may become damaged or the blade may break. The carpenter's statement establishes smoke from the saw as a conditioned punisher capable of decreasing the frequency of any behaviors that immediately precede the smoke (e.g., pushing too forcefully on the saw, holding the saw at an inappropriate angle).

A stimulus change that has been paired with numerous forms of unconditioned and conditioned punishers becomes a generalized conditioned punisher. For example, reprimands ("No!" "Don't do that!") and social disapproval (e.g., scowls, head shakes, frowns) are generalized conditioned punishers for many people because they have been paired repeatedly with a wide range of unconditioned and conditioned punishers (e.g., burned finger, loss of privileges). As with generalized conditioned reinforcers, generalized conditioned punishers are free from the control of specific motivating conditions and will function as punishment under most conditions.

At the risk of redundancy, we will again stress the critical point that punishers, like reinforcers, are not defined by their physical properties but by their functions (Morse & Kelleher, 1977). Even stimuli whose presentation under most conditions would function as unconditioned reinforcers or punishers can have the opposite effect under certain conditions. For example, a bite of food will function as a punisher for a person who has eaten too much, and an electric shock may function as a conditioned reinforcer if it signals the availability of food for a food-deprived organism (e.g., Holz & Azrin, 1961). If a student receives smiley face stickers and praise for his academic work and his productivity decreases as a result, smiley face stickers and praise are punishers for that student. What might serve as a punisher at home might not be a punisher at school. What might be a punisher under one set of circumstances might not be a punisher under a different set of circumstances. Although common experiences mean that many of the same stimulus events function as conditioned punishers for most people, a punisher for one person may be a reinforcer for another.

3A stimulus that becomes a conditioned punisher by being paired with another punisher does not have to be a neutral stimulus prior to the pairing. The stimulus could already function as a reinforcer under other conditions. For example, a blue light that has been paired repeatedly with reinforcement in one setting and with punishment in another setting is a conditioned reinforcer or a conditioned punisher depending on the setting.

Factors That Influence the Effectiveness of Punishment

Reviews of basic and applied research on punishment consistently identify the following variables as keys to the effectiveness of punishment: the immediacy of punishment, the intensity of the punisher, the schedule or frequency of punishment, the availability of reinforcement for the target behavior, and the availability of reinforcement for an alternative behavior (e.g., Axelrod, 1990; Azrin & Holz, 1966; Lerman & Vorndran, 2002; Matson & Taras, 1989).

Immediacy

Maximum suppressive effects are obtained when the onset of the punisher occurs as soon as possible after the occurrence of a target response.

The longer the time delay between the occurrence of the response and the occurrence of the stimulus change, the less effective the punishment will be in changing the relevant response frequency, but not much is known about upper limits. (Michael, 2004, p. 36)

Intensity/Magnitude

Basic researchers examining the effects of punishers of varying intensity or magnitude (in terms of amount or duration) have reported three reliable findings: (1) a positive correlation between the intensity of the punishing stimulus and response suppression: the greater the magnitude of the punishing stimulus, the more immediately and thoroughly it suppresses the occurrence of the behavior (e.g., Azrin & Holz, 1966); (2) recovery from punishment is negatively correlated with the intensity of the punishing stimulus: the more intense the punisher, the less likely that responding will reoccur when punishment is terminated (e.g., Hake, Azrin, & Oxford, 1967); and (3) a high-intensity stimulus may be ineffective as punishment if the stimulus used as punishment was initially of low intensity and gradually increased (e.g., Terris & Barnes, 1969). However, as Lerman and Vorndran (2002) pointed out, relatively few applied studies have examined the relation between punishment magnitude and treatment efficacy, and that research has yielded inconsistent results, sometimes contradicting basic research findings (e.g., Cole, Montgomery, Wilson, & Milan, 2000; Singh, Dawson, & Manning, 1981; Williams, Kirkpatrick-Sanchez, & Iwata, 1993). When selecting the magnitude of a punishing stimulus, the practitioner should ask: Will this amount of the punishing stimulus suppress occurrences of the problem behavior? Lerman and Vorndran (2002) recommended:

While the punishing stimulus needs to be intensive enough for an effective application, it should not be more intense than necessary. Until further applied research on magnitude is conducted, practitioners should select magnitudes that have been shown to be safe and effective in clinical studies, as long as the magnitude is considered acceptable and practical by those who will be implementing treatment. (p. 443)

Schedule

The suppressive effects of a punisher are maximized by a continuous schedule of punishment (FR 1) in which each occurrence of the behavior is followed by the punishing consequence. In general, the greater the proportion of responses that are followed by the punisher, the greater the response reduction will be (Azrin, Holz, & Hake, 1963; Zimmerman & Ferster, 1962). Azrin and Holz (1966) summarized the comparative effects of punishment on continuous and intermittent schedules as follows:

Continuous punishment produces more suppression than does intermittent punishment for as long as the punishment contingency is maintained. However, after the punishment contingency has been discontinued, continuous punishment allows more rapid recovery of the responses, possibly because the absence of punishment can be more rapidly discriminated. (p. 415)

Intermittent punishment may be somewhat effective under some conditions (e.g., Clark, Rowbury, Baer, & Baer, 1973; Cipani, Brendlinger, McDowell, & Usher, 1991; Romanczyk, 1977). Results of a study by Lerman, Iwata, Shore, and DeLeon (1997) demonstrated that a gradual thinning of the punishment schedule might maintain the suppressive effects of punishment that was initially delivered on a continuous schedule (FR 1). Participants were five adults with profound mental retardation and histories of chronic self-injurious behavior (SIB) in the form of hand mouthing or head hitting. Treatment by punishment (timeout from reinforcement for one participant and contingent restraint for the other four) delivered on a continuous (FR 1) schedule produced marked reductions in SIB from baseline levels for all five participants. (Figure 3 shows the results for three of the five participants.) The participants were then exposed to intermittent schedules of punishment (either fixed-interval [FI] 120 seconds or FI 300 seconds). On the FI 120-second schedule, the therapist delivered punishment contingent on the first SIB response after 120 seconds had elapsed since the previous application of punishment or the start of the session. The frequency of SIB under the intermittent schedule of punishment for all but one participant (Wayne, not shown in Figure 3) increased to baseline levels.
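The FI punishment rule just described can be sketched as a simple scheduling function. This is an illustrative sketch of the rule as stated in the text, not the researchers' procedure; the function name and the example response times are made up.

```python
def fi_punishment_times(response_times, interval_s=120):
    """Return the times (in seconds from session start) at which punishment
    would be delivered on a fixed-interval schedule: the first target
    response after `interval_s` seconds have elapsed since the previous
    punisher (or session start) is punished."""
    deliveries = []
    last_delivery = 0  # session start anchors the first interval
    for t in sorted(response_times):
        if t - last_delivery >= interval_s:
            deliveries.append(t)
            last_delivery = t
    return deliveries

# Responses at 30 s, 125 s, 130 s, and 250 s: only the responses at 125 s
# and 250 s meet the FI 120-s rule; an FR 1 schedule would punish all four.
print(fi_punishment_times([30, 125, 130, 250]))  # → [125, 250]
```

Lengthening `interval_s` (e.g., toward 300 s) models the schedule thinning evaluated in the study: the same rule applies, but punishers become progressively less frequent.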

After reestablishing low levels of SIB for each participant with FR 1 punishment, the researchers gradually thinned the punishment schedules. For example, schedule thinning for Paul consisted of increasing the fixed-interval duration in 30-second increments to FI 300 seconds (i.e., FI 30 seconds, FI 60 seconds, FI 90 seconds, etc.). With the exception of a few sessions, Paul's SIB remained at low levels as the punishment schedule was progressively thinned over 57 sessions. During the final 11 sessions, in which punishment was delivered on an FI 300-second schedule, his SIB occurred at a mean of 2.4% of observed intervals (compared to 33% in baseline). A similar pattern of success with a gradually thinned punishment schedule was obtained for another subject, Wendy (not shown in Figure 3).

The effectiveness of punishment on an FI 300-second schedule for three of the five participants—Paul, Wendy, and Wayne (whose SIB remained low even though punishment was abruptly changed from FR 1 to FI 120 seconds and then FI 300 seconds)—enabled an infrequent application of punishment. In practice, this would free therapists or staff from continuous monitoring of behavior.

For Melissa and Candace, however, repeated attempts to gradually thin the punishment schedule proved unsuccessful in maintaining the frequency of their SIB at levels attained by FR 1 punishment. Lerman and colleagues (1997) speculated that one explanation for the ineffectiveness of punishment on a fixed-interval schedule was that, after a person has experienced an FI schedule for some time, the delivery of punishment could


Punishment by Stimulus Presentation

Figure 3 Self-injurious behavior (SIB) by three adults with profound mental retardation during baseline and punishment delivered on continuous (FR 1) and various fixed-interval schedules. (Panels show percentage of intervals of hand mouthing for Paul and Melissa and head hits per minute for Candace across baseline [BL], FR 1, FI 120-second, FI 300-second, and schedule-thinning conditions.) From “Effects of Intermittent Punishment on Self-Injurious Behavior: An Evaluation of Schedule Thinning” by D. C. Lerman, B. A. Iwata, B. A. Shore, and I. G. DeLeon, 1997, Journal of Applied Behavior Analysis, 30, p. 194. Copyright 1997 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.

function “as a discriminative stimulus for punishment-free periods, leading to a gradual overall increase in responding under FI punishment” (p. 198).

Reinforcement for the Target Behavior

The effectiveness of punishment is modulated by the reinforcement contingencies maintaining the problem behavior. If a problem behavior is occurring at a frequency sufficient to cause concern, it is presumably producing reinforcement. If the target response was never reinforced, then “punishment would scarcely be possible since the response would rarely occur” (Azrin & Holz, 1966, p. 433).

To the extent that the reinforcement maintaining the problem behavior can be reduced or eliminated, punishment will be more effective. Of course, if all reinforcement for the problem behavior was withheld, the resulting


extinction schedule would result in the reduction of the behavior independent of the presence of a punishment contingency. However, as Azrin and Holz (1966) pointed out:

The physical world often provides reinforcement contingencies that cannot be eliminated easily. The faster we move through space, the quicker we get to where we are going, whether the movement be walking or driving an auto. Hence, running and speeding will inevitably be reinforced. Extinction of running and speeding could be accomplished only by the impossible procedure of eliminating all reinforcing events that result from movement through space. Some other reductive method, such as punishment, must be used. (p. 433)

Reinforcement for Alternative Behaviors

Holz, Azrin, and Ayllon (1963) found that punishment was ineffective in reducing psychotic behavior when that behavior was the only means by which patients could attain reinforcement. However, when patients could emit an alternative response that resulted in reinforcement, punishment was effective in reducing their inappropriate behavior. Summing up laboratory and applied studies that have reported the same finding, Millenson (1967) stated:

If punishment is employed in an attempt to eliminate certain behavior, then whatever reinforcement the undesirable behavior had led to must be made available via a more desirable behavior. Merely punishing school children for “misbehavior” in class may have little permanent effect. . . . The reinforcers for “misbehavior” must be analyzed and the attainment of these reinforcers perhaps permitted by means of different responses, or in other situations. . . . But for this to happen, it appears important to provide a rewarded alternative to the punished response. (p. 429)

A study by Thompson, Iwata, Conners, and Roscoe (1999) is an excellent illustration of how the suppressive effects of punishment can be enhanced by reinforcement for an alternative response. Four adults with developmental disabilities who had been referred to a day treatment program for self-injurious behavior (SIB) participated in the study. Twenty-eight-year-old Shelly, for example, expelled saliva and then rubbed it onto her hands and other surfaces (e.g., tables, windows), which led to frequent infections; and Ricky, a 34-year-old man with deaf-blindness, frequently hit his head and body, resulting in bruises or contusions. Previous interventions such as differential reinforcement of appropriate behavior, response blocking, and protective equipment had been ineffective in reducing the SIB of all four participants.

The results of a functional behavior analysis with each participant suggested that SIB was maintained by automatic reinforcement. Reinforcer assessments were conducted to identify materials that produced the highest levels of contact or manipulation and the lowest levels of SIB (e.g., wooden stringing beads, a mirrored microswitch that produced vibration and music, a balloon). The researchers then conducted a punisher assessment to determine the least intrusive consequences that produced at least a 75% reduction in SIB for each participant.

Thompson and colleagues (1999) analyzed the effects of punishment with and without reinforcement of alternative behavior with an experimental design that combined alternating treatments, reversal, and multiple baseline across subjects elements. During the no-punishment condition, the therapist was present in the room but did not interact with the participant or provide any consequences for SIB. Immediately following each occurrence of SIB during the punishment condition, the therapist delivered the consequence that had previously been identified as a punisher for the participant. For example:

Each time Shelly expelled saliva, the therapist delivered a reprimand (“no spitting”) and briefly dried each of her hands (and any other wet surfaces) with a cloth. Ricky’s hands were held in his lap for 15 s each time he engaged in SIB. Donna and Lynn both received a verbal reprimand and had their hands held across their chests for 15 s following SIB. (p. 321)

Within each no-punishment and punishment phase, sessions of reinforcement and no-reinforcement conditions were alternated. During reinforcement sessions, participants had continual access to leisure materials or activities previously identified as highly preferred; during no-reinforcement sessions, participants had no access to leisure materials. Because Ricky never independently manipulated any of the leisure materials during the reinforcer assessment but approached M&M candies on the highest percentage of assessment trials, during the reinforcement sessions he received edible reinforcement for each 2 seconds that he manipulated any of several items attached to a vest (e.g., Koosh ball, beads, fur).

Figure 4 shows the results of the study. During the no-punishment baseline phase, only Shelly’s SIB was consistently lower during the reinforcement sessions than during the no-reinforcement sessions. Although the introduction of punishment reduced SIB from baseline levels for all four participants, punishment was more effective during those sessions in which reinforcement for alternative behaviors was available. Also of interest to practitioners should be the finding that fewer punishers were delivered during the punishment condition sessions when reinforcement was available. (Ricky began to resist the hands-down restraint procedure during the punishment with no reinforcement condition, and several


Figure 4 Self-injurious behavior by four adults with developmental disabilities during alternating reinforcement (Sr+) and no-reinforcement conditions across no-punishment and punishment phases. (Panels show responses per minute of SIB for Shelly, Ricky, and Lynn and percentage of intervals of SIB for Donna across sessions.) From “Effects of Reinforcement for Alternative Behavior during Punishment for Self-Injury,” by R. H. Thompson, B. A. Iwata, J. Conners, and E. M. Roscoe, 1999, Journal of Applied Behavior Analysis, 32, p. 323. Copyright 1999 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.

sessions were terminated early because the therapist could not implement the procedure safely. Therefore, the punishment with no reinforcement condition was terminated after seven sessions, and the punishment with reinforcement condition continued for six additional sessions.)

Thompson and colleagues (1999) summarized the main findings of their study with these conclusions:

Consistent with the recommendation made by Azrin and Holz (1966), results of this study indicate that the effects of punishment can be enhanced when reinforcement is provided for an alternative response. Furthermore, these results suggest a method for increasing the effectiveness of punishment through means other than increasing the aversiveness of the punishing stimulus, thereby resulting in the development of more effective yet less restrictive interventions. (p. 326)

Possible Side Effects and Problems with Punishment

A variety of side effects and problems are often correlated with the applications of punishment, including the elicitation of undesirable emotional responses and aggression, escape and avoidance, and an increased rate of the problem behavior under nonpunishment conditions (e.g., Azrin & Holz, 1966; Hutchinson, 1977; Linscheid


& Meinhold, 1990). Other problems noted include modeling undesirable behavior and overusing punishment because of the negative reinforcement it provides for the punishing agent’s behavior.

Emotional and Aggressive Reactions

Punishment sometimes evokes emotional and aggressive reactions that can involve a combination of respondent and conditioned operant behaviors. Punishment, especially positive punishment in the form of aversive stimulation, may evoke aggressive behavior with respondent and operant components (Azrin & Holz, 1966). For example, electric shock elicited reflexive forms of aggression and fighting in laboratory animals (Azrin, Hutchinson, & Hake, 1963; Ulrich & Azrin, 1962; Ulrich, Wolff, & Azrin, 1962). Such pain-elicited, or respondent, aggression is directed toward any nearby person or object. For example, a student who is punished severely may begin to throw and destroy materials within her reach. Alternatively, the student may try to attack the person delivering the punishment. Aggressive behavior following punishment that occurs because it has enabled the person to escape the aversive stimulation in the past is referred to as operant aggression (Azrin & Holz, 1966).

Although basic laboratory researchers, using intense and unavoidable punishers, have reliably produced respondent and operant aggression with nonhuman animals, many applied studies of punishment report no evidence of aggression (e.g., Linscheid & Reichenbach, 2002; Risley, 1968).

Escape and Avoidance

Escape and avoidance are natural reactions to aversive stimulation. Escape and avoidance behaviors take a wide variety of forms, some of which may be a greater problem than the target behavior being punished. For example, a student who is admonished repeatedly for sloppy work or coming to class unprepared may stop coming to class altogether. A person may lie, cheat, hide, or exhibit other undesirable behaviors to avoid punishment. Mayer, Sulzer, and Cody (1968) indicated that avoidance and escape need not always take form in the literal sense of those terms. People sometimes escape punishing environments by taking drugs or alcohol, or by simply “tuning out.”

As the intensity of a punisher increases, so does the likelihood of escape and avoidance. For example, in a study evaluating the effectiveness of a specially designed cigarette holder that delivered an electric shock to the user when it was opened, as an intervention for reducing cigarette smoking, Powell and Azrin (1968) found that “As the punishment intensity increased, the duration decreased for which the subjects would remain in contact with the contingency; ultimately, an intensity was reached at which they refused to experience it altogether” (p. 69).

4Multiple schedules of reinforcement are described in the chapter entitled “Schedules of Reinforcement.”

Escape and avoidance as side effects of punishment, like emotional and aggressive reactions, can be minimized or precluded altogether by providing the person with desirable alternative responses to the problem behavior that both avoid the delivery of punishment and provide reinforcement.

Behavioral Contrast

Reynolds (1961) introduced the term behavioral contrast to refer to the phenomenon in which a change in one component of a multiple schedule that increases or decreases the rate of responding on that component is accompanied by a change in the response rate in the opposite direction on the other, unaltered component of the schedule.4 Behavioral contrast can occur as a function of a change in reinforcement or punishment density on one component of a multiple schedule (Brethower & Reynolds, 1962; Lattal & Griffin, 1972). For example, behavioral contrast for punishment takes the following general form: (a) Responses are occurring at similar rates on two components of a multiple schedule (e.g., a pigeon pecks a backlit key, which alternates between blue and green; reinforcement is delivered on the same schedule on both keys, and the bird pecks at roughly the same rate regardless of the key’s color); (b) responses on one component of the schedule are punished, whereas responses on the other component continue to go unpunished (e.g., pecks on the blue key are punished, and pecks on the green key continue to produce reinforcement at the prior rate); (c) the rate of responding decreases on the punished component and increases on the unpunished component (e.g., pecks on the blue key are suppressed and pecks on the green key increase, even though pecks on the green key produce no more reinforcement than before).

Here is a hypothetical applied example of a contrast effect of punishment. A child is eating cookies before dinner from the kitchen cookie jar at equal rates in the presence and absence of his grandmother. One day, Grandma scolds the child for eating a cookie before dinner, which suppresses his rate of predinner cookie eating when she is in the kitchen (see Figure 2); but when Grandma is not in the kitchen, the boy eats cookies from the jar at a higher rate than he did when unsupervised prior to punishment. Contrast effects of punishment can be minimized, or prevented altogether, by consistently


punishing occurrences of the target behavior in all relevant settings and stimulus conditions, withholding or at least minimizing the person’s access to reinforcement for the target behavior, and providing alternative desirable behaviors. (With respect to our hypothetical case of the child eating cookies before dinner, we recommend simply removing the cookie jar!)

Punishment May Involve Undesirable Modeling

Most readers are familiar with the example of the parent who, while spanking a child, says, “This will teach you not to hit your playmates!” Unfortunately, the child may be more likely to imitate the parent’s actions, not the parent’s words. More than two decades of research have found a strong correlation between young children’s exposure to harsh and excessive punishment and antisocial behavior and conduct disorders as adolescents and adults (Patterson, 1982; Patterson, Reid, & Dishion, 1992; Sprague & Walker, 2000). Although the proper use of behavior change tactics based on the principle of punishment does not involve harsh treatment or negative personal interactions, practitioners should heed Bandura’s (1969) valuable counsel in this regard:

Anyone attempting to control specific troublesome responses should avoid modeling punitive forms of behavior that not only counteract the effects of direct training but also increase the probability that on future occasions the individual may respond to interpersonal thwarting in an imitative manner. (p. 313)

Negative Reinforcement of the Punishing Agent’s Behavior

Negative reinforcement may be a reason for the widespread use of (too often ineffective and unnecessary) and reliance on (mostly misguided) punishment in child rearing, education, and society. When Person A delivers a reprimand or other aversive consequence to Person B for misbehaving, the immediate effect is often the cessation of the troubling behavior, which serves as negative reinforcement for Person A’s behavior. Or, as Ellen Reese (1966) said so succinctly, “Punishment reinforces the punisher” (p. 37). Alber and Heward (2000) described how the natural contingencies at work in a typical classroom could strengthen a teacher’s use of reprimands for disruptive behavior while undermining her use of contingent praise and attention for appropriate behavior.

Paying attention to students when they are behaving inappropriately (e.g., “Carlos, you need to sit down right now!”) is negatively reinforced by the immediate cessation of the inappropriate behavior (e.g., Carlos stops running around and returns to his seat). As a result, the teacher is more likely to attend to student disruptions in the future. . . . Although few teachers must be taught to reprimand students for misbehavior, many teachers need help increasing the frequency with which they praise student accomplishments. Teacher-praising behavior is usually not reinforced as effectively as teacher-reprimanding behavior. Praising a student for appropriate behavior usually produces no immediate effects; the child continues to do his work when praised. Although praising a student for working productively on an assignment may increase the future likelihood of that behavior, there are no immediate consequences for the teacher. By contrast, reprimanding a student often produces an immediate improvement in the teacher’s world (if only temporary) that functions as effective negative reinforcement for reprimanding. (pp. 178–179)

Even though the reprimands may be ineffective in suppressing the future frequency of misbehavior, the immediate effect of stopping the annoying behavior is powerful reinforcement that increases the frequency with which the teacher will issue reprimands when confronted with misbehavior.

Examples of Positive Punishment Interventions

Interventions based on positive punishment take a wide variety of forms. We describe five in this section: reprimands, response blocking, contingent exercise, overcorrection, and contingent electric stimulation.

Reprimands

It may seem strange that, immediately following a discussion of teachers’ overreliance on and misuse of reprimands, our first example of a positive punishment intervention is reprimands. The delivery of verbal reprimands following the occurrence of misbehavior is without doubt the most common form of attempted positive punishment. However, a number of studies have shown that a firm reprimand such as “No!” or “Stop! Don’t do that!” delivered immediately on the occurrence of a behavior can suppress future responding (e.g., Hall et al., 1971, see Figure 1; Jones & Miller, 1974; Sajwaj, Culver, Hall, & Lehr, 1972; Thompson et al., 1999).

In spite of the widespread use of reprimands in an effort to suppress undesired behavior, surprisingly few studies have examined the effectiveness of reprimands as punishers. A series of experiments by Van Houten, Nau, Mackenzie-Keating, Sameoto, and Colavecchia (1982), designed to identify variables that increased the effectiveness of reprimands as punishers for disruptive behavior in the classroom, found that (a) reprimands delivered with eye contact and “a firm grasp of the student’s


shoulders” were more effective than reprimands without those nonverbal components, and (b) reprimands delivered in close proximity to the student were more effective than reprimands delivered from across the room.

The teacher who repeatedly admonishes her students in a mild fashion to “sit down” would be advised instead to state once, strongly, “SIT DOWN!” When the command is issued once, students are more likely to follow the direction. If the command is given repeatedly, students may habituate to the increased frequency, and the reprimand will gradually lose its effect as a punisher. A loud reprimand, however, is not necessarily more effective than a reprimand stated in a normal voice. An interesting study by O’Leary, Kaufman, Kass, and Drabman (1970) found that quiet reprimands that were audible only to the child being reprimanded were more effective in reducing disruptive behavior than loud reprimands that could be heard by many children in the classroom.

If the only way a child receives adult attention is in the form of reprimands, it should not be surprising that reprimands function as reinforcement for that child rather than as punishment. Indeed, Madsen, Becker, Thomas, Koser, and Plager (1968) found that the repeated use of a reprimand while students were out of their seats served to increase, rather than reduce, the behavior. Consistent with research on other punishing stimuli, reprimands are more effective as punishers when motivation for the problem behavior has been minimized and the availability of an alternative behavior has been maximized (Van Houten & Doleys, 1983).

A parent or teacher does not want to be in a pattern of constantly reprimanding. Reprimands should be used thoughtfully and sparingly, in combination with frequent praise and attention contingent on appropriate behavior. O’Leary and colleagues (1970) recommended that

An ideal combination would probably be frequent praise, some soft reprimands, and very occasional loud reprimands. . . . Combined with praise, soft reprimands might be very helpful in reducing disruptive behaviors. In contrast, it appears that loud reprimands lead one into a vicious cycle of more and more reprimands resulting in even more disruptive behavior. (p. 155)

Response Blocking

Response blocking (physically intervening as soon as the person begins to emit the problem behavior to prevent or “block” the completion of the response) has been shown to be effective in reducing the frequency of some problem behaviors, such as chronic hand mouthing, eye poking, and pica (e.g., Lalli, Livezy, & Kates, 1996; Lerman & Iwata, 1996; Reid, Parsons, Phillips, & Green, 1993). In addition to preventing the response from occurring by using the least amount of physical contact and restraint possible, the therapist might issue a verbal reprimand or a prompt to stop engaging in the behavior (e.g., Hagopian & Adelinis, 2001).

Lerman and Iwata (1996) used response blocking in treating the chronic hand mouthing (contact between any part of the hand and the lips or mouth) of Paul, a 32-year-old man with profound mental retardation. Following a baseline condition, in which Paul was seated in a chair with no one interacting with him and no leisure materials available, response blocking on an FR 1 schedule was implemented. A therapist sat behind Paul and blocked his attempts to put his hand in his mouth. “Paul was not prevented from bringing his hand to his mouth; however, the therapist blocked the hand from entering the mouth by placing the palm of her hand about 2 cm in front of Paul’s mouth” (p. 232). Response blocking produced an immediate and rapid decrease in hand mouthing attempts to near-zero levels (see Figure 5).

Response blocking is often implemented as a treatment for SIB or self-stimulatory behavior when functional analysis reveals consistent responding in the absence of socially mediated consequences, which suggests the possibility that the behavior is maintained by automatic reinforcement by sensory stimuli produced by the response. Because response blocking prevents the learner from contacting the sensory stimuli that are normally produced by the response, subsequent decreases in responding could be due to extinction. Lerman and Iwata (1996) presented their study as a potential method for distinguishing whether the suppressive effects of response blocking are due to punishment or extinction mechanisms. They explained their reasoning as follows:

Depending on the mechanism through which behavior is reduced (extinction vs. punishment), different schedules of reinforcement or punishment are in effect when a given proportion of responses is blocked. For example, when every fourth response is blocked (.25), the behavior is exposed to either a fixed-ratio (FR) 1.3 schedule of reinforcement (if blocking functions as extinction) or an FR 4 schedule of punishment (if blocking functions as punishment); when three out of four responses are blocked (.75), the behavior is exposed to either an FR 4 schedule of reinforcement or an FR 1.3 schedule of punishment. Thus, as larger proportions of responses are blocked, the reinforcement schedule becomes leaner and the punishment schedule becomes richer. If response blocking produces extinction, response rates should increase or be maintained as more responses are blocked (i.e., as the reinforcement schedule is thinned), until [the effects of] extinction [i.e., reduced response rate] occurs at some point along the progression. Conversely, if the


Figure 5 Rates of hand mouthing (responses per minute) during baseline and varying schedules of response blocking (proportions blocked: 1.0, .50, .25, .67, .75). From “A Methodology for Distinguishing between Extinction and Punishment Effects Associated with Response Blocking,” by D. C. Lerman and B. A. Iwata, 1996, Journal of Applied Behavior Analysis, 29, p. 232. Copyright 1996 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.

procedure functions as punishment, response rates should decrease as more responses are blocked (i.e., as the punishment schedule becomes richer). (pp. 231–232, words in brackets added)

A condition in which all responses are blocked might function as an extinction schedule (i.e., reinforcement in the form of sensory stimuli is withheld for all responses) or as a continuous (FR 1) schedule of punishment (i.e., all responses are followed by physical contact). As Lerman and Iwata (1996) explained, if only some responses are blocked, the situation may function as an intermittent schedule of reinforcement or as an intermittent schedule of punishment. Therefore, comparing response rates among conditions in which different proportions of responses are blocked should indicate whether the effects are due to extinction or to punishment.
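The schedule arithmetic in the quoted passage can be sketched in a few lines: if a proportion p of responses is blocked, the unblocked responses contact reinforcement on roughly an FR 1/(1 − p) schedule, and the blocked responses contact punishment on roughly an FR 1/p schedule. This is a hypothetical illustration; the function name and the rounding to one decimal place (to match the quoted FR 1.3 values) are our assumptions.

```python
# Hypothetical sketch of the schedule arithmetic in Lerman and Iwata (1996):
# with a blocking proportion p, the behavior contacts an effective
# FR 1/(1-p) schedule of reinforcement and an FR 1/p schedule of punishment.

def effective_schedules(p_blocked):
    """Return (FR reinforcement, FR punishment) for a blocking proportion."""
    if not 0 < p_blocked < 1:
        raise ValueError("proportion must be strictly between 0 and 1")
    fr_reinforcement = 1 / (1 - p_blocked)  # every 1/(1-p)th response goes unblocked
    fr_punishment = 1 / p_blocked           # every 1/pth response is blocked
    return round(fr_reinforcement, 1), round(fr_punishment, 1)

# Blocking every fourth response (.25): FR 1.3 reinforcement or FR 4 punishment.
print(effective_schedules(0.25))  # (1.3, 4.0)
# Blocking three of four responses (.75): FR 4 reinforcement or FR 1.3 punishment.
print(effective_schedules(0.75))  # (4.0, 1.3)
```

The two print statements reproduce the two worked cases from the quotation, making visible why thinning the reinforcement schedule and enriching the punishment schedule are two descriptions of the same manipulation.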

If response blocking functioned as extinction for Paul’s hand mouthing, an initial increase in response rate would be expected when the blocking procedure was implemented for every response; however, no such increase was observed.5 If response blocking functioned as punishment, blocking every response would constitute a continuous schedule of punishment, and a rapid decrease in responding would be expected; that is exactly what the results showed (see data for the first response block [1.0] phase in Figure 5).

On the other hand, if response blocking functioned as extinction for Paul’s hand mouthing, then blocking some but not all responses would place the hand mouthing on an intermittent schedule of reinforcement, and responding would be expected to increase from baseline levels. Blocking an ever-larger proportion of responses would thin the reinforcement schedule further, causing the response rate to rise even higher. Instead, as a greater proportion of responses was blocked, the suppressive effects on Paul’s SIB became more pronounced, a result expected as a punishment schedule becomes denser. Overall, therefore, the results of the experiment indicated that response blocking functioned as punishment for Paul’s hand mouthing.

5When an extinction procedure is first implemented, an increase in responding, called an extinction burst, is sometimes observed before the response rate begins to decline. The principle, procedure, and effects of extinction are detailed in the chapter entitled “Extinction.”

6It might be argued that the suppressive effects of response blocking cannot be due to punishment or to extinction on the grounds that response blocking occurs before the response has been emitted, and punishment and extinction are both response → consequence relations (in the case of extinction, the consequence is the absence of the reinforcement that followed the behavior in the past). As Lalli, Livezy, and Kates (1996) noted, response blocking prevents the response → consequence cycle from occurring. However, if the problem behavior response class is conceptualized to include the occurrence of any portion of a relevant response, then blocking a person’s hand after it begins to move toward contact with his head is a consequence whose suppressive effects can be analyzed in terms of extinction, punishment, or both.

In contrast, a systematic replication of Lerman and Iwata’s (1996) experiment, conducted by Smith, Russo, and Le (1999), found that the frequency of eye poking by a 41-year-old woman treated with response blocking decreased gradually, a response pattern indicative of extinction. The authors concluded that, “whereas blocking may reduce one participant’s behavior via punishment, it may extinguish another participant’s behavior” (p. 369).6

Although response blocking may be viewed as a less restrictive and more humane intervention than delivering aversive stimulation after a response has occurred, it must


be approached with great care. Side effects such as aggression and resistance to the response blocking procedure have occurred in some studies (Hagopian & Adelinis, 2001; Lerman, Kelley, Vorndran, & Van Camp, 2003). Providing prompts and reinforcement for an alternative response can minimize resistance and aggression. For example, the aggressive behavior by a 26-year-old man with moderate mental retardation and bipolar disorder during treatment with response blocking for pica (ingesting paper, pencils, paint chips, and human feces) was reduced by supplementing response blocking with a prompt and redirection to engage in an alternative behavior, in this case moving to an area of the room where popcorn was available (Hagopian & Adelinis, 2001).

Contingent Exercise

Contingent exercise is an intervention in which the per-son is required to perform a response that is not topo-graphically related to the problem behavior. Contingentexercise has been found effective as punishment for var-ious self-stimulatory, stereotypic, disruptive, aggressive,and self-injurious behaviors (e.g., DeCatanzaro & Bald-win, 1978; Kern, Koegel, & Dunlap, 1984; Luce & Hall,

7Increasing the effort or force required to perform a behavior can be an ef-fective tactic for reducing responding (Friman & Poling, 1995). There isno consensus as to whether punishment accounts for the reduced re-sponding. As with response blocking, one perspective from which increasedresponse effort can be conceptualized as a punishment procedure is to con-sider the movement necessary to come into contact with the increased ef-fort requirement as a member of the target behavior response class. In thatcase, the increased effort required to continue the response to completionis (a) a consequence for the response that brought the learner into contactwith it, and (b) aversive stimulation that functions as punishment as thefrequency of future responding decreases.

1981; Luiselli, 1984).7 In perhaps the most frequently cited example of contingent exercise as a punisher, Luce, Delquadri, and Hall (1980) found that the repetition of mild exercise contingent on aggressive behavior by two boys with severe disabilities reduced it to near-zero levels. Figure 6 shows the results for Ben, a 7-year-old who frequently hit other children at school. Each time Ben hit someone, he was required to stand up and sit down 10 times. Initially, Ben had to be physically prompted to stand; an assistant held the child's hand while pulling his upper body forward. Physical prompts were accompanied by verbal prompts of "stand up" and "sit down." Soon, whenever hitting occurred, the nearest adult simply said, "Ben, no hitting. Stand up and sit down 10 times" and the verbal prompts alone were sufficient

[Line graph: number of hits (y-axis, 0–70) across 40 school days, with alternating Baseline and Contingent Exercise phases.]

Figure 6 Number of times a 7-year-old boy hit other children during 6-hour school days during baseline and contingent exercise. Xs represent response measures recorded by a second observer.
From "Contingent Exercise: A Mild but Powerful Procedure for Suppressing Inappropriate Verbal and Aggressive Behavior," by S. C. Luce, J. Delquadri, and R. V. Hall, 1980, Journal of Applied Behavior Analysis, 13, p. 587. Copyright 1980 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.


for Ben to complete the exercise. If a hitting episode occurred during the contingent exercise, the procedure was reinstated.

Overcorrection

Overcorrection is a behavior reduction tactic in which, contingent on each occurrence of the problem behavior, the learner is required to engage in effortful behavior that is directly or logically related to the problem. Originally developed by Foxx and Azrin (1972, 1973; Foxx & Bechtel, 1983) as a method for decreasing disruptive and maladaptive behaviors of adults with mental retardation in institutional settings, overcorrection combines the suppressive effects of punishment and the educative effects of positive practice. Overcorrection includes either or both of two components: restitution and positive practice.

In restitutional overcorrection, contingent on the problem behavior, the learner is required to repair the damage caused by the problem behavior by returning the environment to its original state and then to engage in additional behavior that brings the environment to a condition vastly better than it was prior to the misbehavior. A parent applying restitutional overcorrection with a child who repeatedly tracks mud onto the kitchen floor might require the child to first wipe up the mud and clean his shoes and then to overcorrect the effects of his misbehavior by mopping and waxing a portion of the floor and polishing his shoes.

Azrin and Foxx (1971) used restitutional overcorrection in their toilet training program by requiring a person who had an accident to undress, wash her clothes, hang them up to dry, shower, dress in clean clothing, and then clean up a portion of the lavatory. Azrin and Wesolowski (1975) eliminated food stealing by hospitalized adults with mental retardation by requiring residents not only to return the stolen food, or the portion that remained uneaten, but also to purchase an additional item of that food at the commissary and give it to the victim.

Azrin and Besalel (1999) differentiated a procedure they called simple correction from overcorrection. With simple correction, the learner is required, subsequent to the occurrence of an inappropriate behavior, to restore the environment to its previous state. For example, a simple correction procedure is in effect when a student who cuts to the head of the lunch line is required to go to the back of the line. Requiring the student to wait until all others get in line and are served before reentering the line would constitute a form of overcorrection in this instance. Azrin and Besalel recommended that simple correction be used to reduce behaviors that are not severe, occur infrequently, are not deliberate, and do not severely interfere with or annoy other people.

Correction is not possible if the problem behavior produces an irreversible effect (e.g., a one-of-a-kind dish is broken) or if the corrective behavior is beyond the person's means or skills. In such instances, Azrin and Besalel (1999) recommended that the person be required to correct as much of the damage his behavior caused as possible, be present at all points in the correction, and assist with any parts of the correction that he is able to perform. For example, a child who broke a neighbor's expensive window should clean up the pieces of glass, measure the window, contact the store for a replacement pane, be present while the new window pane is installed, and assist with every step.

In positive practice overcorrection, contingent on an occurrence of the problem behavior, the learner is required to repeatedly perform a correct form of the behavior, or a behavior incompatible with the problem behavior, for a specified duration of time or number of responses. Positive practice overcorrection entails an educative component in that it requires the person to engage in an appropriate alternative behavior. The parent whose son tracks mud into the house could add a positive practice component by requiring him to practice wiping his feet on the outside door mat and entering the house for 2 minutes or 5 consecutive times. Overcorrection that includes restitution and positive practice helps teach what to do in addition to what not to do. The child who breaks an irreplaceable dish could be required to gently and slowly wash a number of dishes, perhaps with an exaggerated carefulness.

Researchers and practitioners have used positive practice overcorrection to reduce the frequency of problem behaviors such as toileting accidents (Azrin & Foxx, 1971), self-stimulation and stereotypic behavior (Azrin, Kaplan, & Foxx, 1973; Foxx & Azrin, 1973), pica (Singh & Winton, 1985), bruxism (Steuart, 1993), sibling aggression (Adams & Kelley, 1992), and classroom disruptions (Azrin & Powers, 1975). Positive practice overcorrection has been used for academic behaviors (Lenz, Singh, & Hewett, 1991), most often to decrease oral reading and spelling errors (e.g., Ollendick, Matson, Esveldt-Dawson, & Shapiro, 1980; Singh & Singh, 1986; Singh, Singh, & Winton, 1984; Stewart & Singh, 1986).

Positive practice overcorrection can also be applied to reduce or eliminate behaviors that do not create permanent response products that can be repaired or restored to their original state. For example, Heward, Dardig, and Rossett (1979) described how parents used positive practice overcorrection to help their teenage daughter stop making a grammatical error in her speech. Eunice frequently used the contraction "don't" instead of "doesn't" with the third person singular (e.g., "He don't want to go."). A positive reinforcement program in which Eunice earned points she could redeem for preferred activities


each time she used "doesn't" correctly had little effect on her speech. Eunice agreed with her parents that she should speak correctly but claimed the behavior was a habit. Eunice and her parents then decided to supplement the reinforcement program with a mild punishment procedure. Each time Eunice or her parents caught her using "don't" incorrectly in her speech, she was required to say the complete sentence she had just spoken 10 times in a row using correct grammar. Eunice wore a wrist counter that reminded her to listen to her speech and to keep track of the number of times she employed the positive practice procedure.

When positive practice effectively suppresses the problem behavior, it is not clear what behavioral mechanisms are responsible for the behavior change. Punishment may result in decreased frequency of responding because the person engages in effortful behavior as a consequence of the problem behavior. A reduction in the frequency of the problem behavior as an outcome of positive practice may also be a function of an increased frequency of an incompatible behavior, the correct behavior that is strengthened in the person's repertoire as the result of the intensive, repeated practice. Azrin and Besalel (1999) suggested that the reason why positive practice is effective varies depending upon whether the problem behavior is "deliberate" or the result of a skill deficit:

Positive practice may be effective because of the inconvenience and effort involved, or because it provides additional learning. If the child's errors are caused by a deliberate action, the extra effort involved in positive practice will discourage future misbehaviors. But if the misbehavior is the result of insufficient learning, the child will stop the misbehavior—or error—because of the intensive practice of the correct behavior. (p. 5)

Although specific procedures for implementing overcorrection vary greatly depending upon the problem behavior and its effects on the environment, the setting, the desired alternative behavior, and the learner's current skills, some general guidelines can be suggested (Azrin & Besalel, 1999; Foxx & Bechtel, 1983; Kazdin, 2001; Miltenberger & Fuqua, 1981):

1. Immediately upon the occurrence of the problem behavior (or the discovery of its effects), in a calm, unemotional tone of voice, tell the learner that he has misbehaved and provide a brief explanation for why the behavior must be corrected. Do not criticize or scold. Overcorrection entails a logically related consequence to reduce future occurrences of the problem behavior; criticism and scolding do not enhance the tactic's effectiveness and may harm the relationship between the learner and practitioner.

2. Provide explicit verbal instructions describing the overcorrection sequence the learner must perform.

3. Implement the overcorrection sequence as soon as possible after the problem behavior has occurred. When circumstances prevent immediately commencing the overcorrection sequence, tell the learner when the overcorrection process will be conducted. Several studies have found that overcorrection conducted at a later time can be effective (Azrin & Powers, 1975; Barton & Osborne, 1978).

4. Monitor the learner throughout the overcorrection activity. Provide the minimal number of response prompts, including gentle physical guidance, needed to ensure that the learner performs the entire overcorrection sequence.

5. Provide the learner with minimal feedback for correct responses. Do not give too much praise and attention to the learner during the overcorrection sequence.

6. Provide praise, attention, and perhaps other forms of reinforcement to the learner each time he "spontaneously" performs the appropriate behavior during typical activities. (Although technically not part of the overcorrection procedure, reinforcing an alternative behavior is a recommended complement to all punishment-based interventions.)

Although "the results of a few minutes of corrective training after the undesired behavior have often led to rapid and long-lasting therapeutic effects" (Kazdin, 2001, p. 220), practitioners should be aware of several potential problems and limitations associated with overcorrection. First, overcorrection is a labor-intensive, time-consuming procedure that requires the full attention of the practitioner implementing it. Implementing overcorrection usually requires the practitioner to monitor the learner directly throughout the overcorrection process. Second, for overcorrection to be effective as punishment, the time that the learner spends with the person monitoring the overcorrection sequence must not be reinforcing. "If it is, it just might be worth waxing the entire kitchen floor if Mom chats with you and provides a milk and cookies break" (Heward et al., 1979, p. 63).

Third, a child who misbehaves frequently may not execute a long list of "cleanup" behaviors just because he was told to do so. Azrin and Besalel (1999) recommended three strategies to minimize the likelihood of refusal to perform the overcorrection sequence: (1) remind the learner what the more severe disciplinary action will be and, if the refusal persists, then impose that discipline; (2) discuss the need for correction before the problem behavior occurs; and (3) establish correction as an expectation and routine habit for any disruptive behavior. If


the child resists too strongly or becomes aggressive, overcorrection may not be a viable treatment. Adult learners must voluntarily make the decision to perform the overcorrection routine.

Contingent Electric Stimulation

Contingent electric stimulation as punishment involves the presentation of a brief electrical stimulus immediately following an occurrence of the problem behavior. Although the use of electric stimulation as treatment is controversial and evokes strong opinions, Duker and Seys (1996) reported that 46 studies have demonstrated that contingent electric stimulation can be a safe and highly effective method for suppressing chronic and life-threatening self-injurious behavior (SIB). One of the most rigorously researched and carefully applied procedures for implementing punishment by electric stimulation for self-inflicted blows to the head or face is the Self-Injurious Behavior Inhibiting System (SIBIS) (Linscheid, Iwata, Ricketts, Williams, & Griffin, 1990; Linscheid, Pejeau, Cohen, & Footo-Lenz, 1994; Linscheid & Reichenbach, 2002). The SIBIS apparatus consists of a sensor module (worn on the head) and a stimulus module (worn on the leg or arm) that contains a radio receiver, a 9 V battery, and circuitry for the generation and timing of the electric stimulus. Above-threshold blows to the face or head trip an impact detector in the sensor module that transmits a radio signal to a receiver in the stimulus module, which in turn produces an audible tone followed by electric stimulation (84 V, 3.5 mA) for 0.08 seconds. "Subjectively, the experience has been described at its extremes as imperceptible (low) and similar to having a rubber band snapped on the arm (high)" (Linscheid et al., 1990, p. 56).

Linscheid and colleagues (1990) evaluated SIBIS as treatment for five people with "longstanding, severe, and previously unmanageable" self-injurious behavior. One of the participants was Donna, a 17-year-old girl with profound mental retardation, no language, and no independent feeding or toileting skills. Donna's parents reported that she began hitting herself with sufficient force to produce lesions on her face and head more than 10 years prior to the study. Numerous treatments to stop Donna from hurting herself had been tried without success, including differential reinforcement of other behaviors (DRO), "gentle teaching," redirection, and response prevention. "For example, in order to prevent head hitting in bed, her parents had to hold her arms each night until she fell asleep. This sometimes required 3 to 4 hours of undivided attention, which the parents felt they were unable to continue" (pp. 66–67). Although Donna was ambulatory, when the study began she was spending most

of the day with her wrists restrained to the arms of a wheelchair to prevent her SIB.

The effects of SIBIS on Donna's SIB were evaluated with a reversal design that included SIBIS-active and SIBIS-inactive conditions. All sessions lasted for 10 minutes or until Donna hit her head 25 times. During SIBIS-inactive sessions, Donna wore the sensor and stimulus modules but the stimulus module was inoperative. During six sessions each of the initial baseline and SIBIS-inactive phases, Donna hit her head at a mean rate of at least once per second (68.1 and 70.2 responses per minute, respectively) (see Figure 7). In the first session in which SIBIS was applied, Donna's SIB decreased to 2.4 head hits per minute. The mean rate of head hitting during all SIBIS-active sessions was 0.5 responses per minute (a range of 0 to 5.6), compared to rates of 50.2 (1.7 to 78.9) and 32.5 (0 to 48.0) responses per minute for all baseline and SIBIS-inactive sessions combined, respectively. SIBIS reduced Donna's head hitting 98.9% from baseline levels. Across all of the SIBIS-active sessions combined, Donna received 32 electric stimulations lasting a combined 2.6 seconds. No sessions during the SIBIS-active condition had to be terminated early because of risk, whereas 100% of the baseline sessions and 64% of the SIBIS-inactive sessions were terminated because Donna hit her head 25 times.
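The magnitude of suppression can be checked with simple arithmetic from the rounded mean rates reported above; a sketch follows (the 98.9% figure in the text was presumably computed from unrounded session data, so the rounded means give a slightly different result):

```python
# Percent reduction in Donna's head hitting, using the rounded mean rates
# reported by Linscheid et al. (1990): 50.2 responses per minute across all
# baseline sessions vs. 0.5 responses per minute across all SIBIS-active sessions.
baseline_rate = 50.2
sibis_active_rate = 0.5

reduction = (1 - sibis_active_rate / baseline_rate) * 100
print(f"Reduction from baseline: {reduction:.1f}%")  # prints 99.0% with rounded means
```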

Similar results were obtained for all five participants: SIBIS treatment produced an immediate and almost complete cessation of SIB. Although legal, ethical, and moral issues and concerns surround the use of electric aversive stimulation, Linscheid and Reichenbach (2002) offered the following perspective: "While the decision to use an aversive treatment must be made in consideration of numerous factors, it is suggested that the speed and degree of suppression of SIB must be among these considerations" (p. 176). Formal data and anecdotal reports indicated an absence of negative side effects and the occurrence of some positive side effects. For example, Donna's parents reported a general improvement in her overall adaptive functioning and that she no longer had to be restrained in bed at night. Donna's teacher reported that:

Since the introduction of SIBIS, it is like we have a totally new girl in the classroom. Donna no longer has to have her hands restrained. She is walking around the classroom without the wrestling helmet or cervical collar. She smiles more frequently and fusses a lot less. She pays more attention to what is going on in the classroom. She reaches out for objects and people more than she did. (p. 68)

[Line graph: head hits per minute (0–100) across approximately 50 sessions for Donna, with alternating Baseline, SIBIS-Inactive, and SIBIS conditions.]

Figure 7 Number of head hits per minute by a 17-year-old with a 10-year history of self-injurious behavior during baseline and SIBIS-inactive and SIBIS-active conditions.
From "Clinical Evaluation of SIBIS: The Self-Injurious Behavior Inhibiting System," by T. R. Linscheid, B. A. Iwata, R. W. Ricketts, D. E. Williams, and J. C. Griffin, 1990, Journal of Applied Behavior Analysis, 23, p. 67. Copyright 1990 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.

Although the maintenance of suppressive effects has not been universally demonstrated (Ricketts, Goza, & Matese, 1993), some have reported the long-term effectiveness of SIBIS (Linscheid & Reichenbach, 2002). For example, Salvy, Mulick, Butter, Bartlett, and Linscheid (2004) reported that the SIB of a 3-year-old female with an 18-month history of chronic and repetitive head banging remained at virtually zero levels at a 1-year post-SIBIS follow-up.

Guidelines for Using Punishment Effectively

Research and clinical applications have demonstrated that punishment can yield rapid, long-lasting suppression of problem behavior. Increasingly, agency policies, human subject review procedures, and historic practices have limited the use of punishment for research and treatment in clinic-based settings (Grace, Kahng, & Fisher, 1994). Punishment, however, may be a treatment of choice when: (a) the problem behavior produces serious physical harm and must be suppressed quickly, (b) reinforcement-based treatments have not reduced the problem behavior to socially acceptable levels, or (c) the reinforcer maintaining

the problem behavior cannot be identified or withheld(Lerman & Vorndran, 2002).

If the decision is made to use a punishment-based treatment, steps should be taken to ensure that the punishing stimulus is as effective as possible. The guidelines that follow will help practitioners apply punishment with optimal effectiveness while minimizing undesirable side effects and problems. We based these guidelines on the assumption that the analyst has conducted a functional behavior assessment to identify the variables that maintain the problem behavior, that the problem behavior has been defined in a manner that minimizes ambiguity concerning its parameters, and that the participant cannot avoid or escape the punisher.

Select Effective and Appropriate Punishers

Conduct Punisher Assessments

As with reinforcer assessment methods, a parallel process can be applied to identify stimuli that are likely to function as punishers. Fisher and colleagues (1994), who differentially applied a combination of empirically derived punishers and reinforcers to reduce the pica behavior of three children referred to an inpatient treatment facility, identified two advantages of conducting punisher assessments. First, the sooner an effective punisher can be identified, the sooner it can be applied to treat the problem behavior. Second, data from punisher assessments might reveal the magnitude or intensity of the punisher necessary for behavioral suppression, enabling the practitioner to deliver the lowest intensity punisher that is still effective.

Punisher assessment mirrors stimulus preference and/or reinforcement assessment, except that instead of measuring engagement or duration of contact with each stimulus, the behavior analyst measures negative verbalizations, avoidance movements, and escape attempts associated with each stimulus. Data from the punisher assessments are then used to develop a hypothesis on the relative effectiveness of each stimulus change as a punisher.

The decision of which among several potentially effective punishers to choose should be based on the relative degree of intrusiveness the punisher creates and the ease with which it can be consistently and safely delivered by the therapists, teachers, or parents who will implement the punishment procedure in the clinic, classroom, or home. Subsequent observation might reveal that a less intrusive, time-consuming, or difficult-to-apply consequence can be used, as in the following experience reported by Thompson and colleagues (1999):

We conducted brief evaluations to identify an effective punishment procedure for each participant. Procedures were chosen for evaluation based on topographies of SIB, an apparent degree of minimal intrusiveness, and the ability of the experimenter to safely and efficiently implement the procedure. During this phase, we used brief AB designs to evaluate several procedures and chose the least restrictive procedure that resulted in a 75% or greater decrease in SIB. For example, we initially evaluated a 15-s manual restraint with Shelly. During this procedure, the therapist delivered a verbal reprimand, held Shelly's hands in her lap for 15 s, and then dried her hands with a cloth. This procedure reduced SIB to the criterion level. Subsequently, however, we observed a comparable decrease in SIB when the therapist simply delivered a reprimand (e.g., "no spitting") and dried Shelly's hands (without holding her hands in her lap). Therefore, we chose to implement the reprimand and hand-drying procedure. (p. 321)
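The selection rule Thompson and colleagues describe, evaluating candidate procedures from least to most restrictive and keeping the first one that produces at least a 75% decrease in SIB, can be sketched as follows. The procedure names and rates below are hypothetical illustrations, not data from the study:

```python
def meets_criterion(baseline_rate, treatment_rate, criterion=0.75):
    """True if the treatment reduced responding by at least `criterion` (default 75%)."""
    return (baseline_rate - treatment_rate) / baseline_rate >= criterion

def select_least_restrictive(baseline_rate, evaluations):
    """`evaluations`: list of (procedure, treatment_rate) pairs, ordered from
    least to most restrictive. Returns the first procedure meeting the criterion."""
    for procedure, rate in evaluations:
        if meets_criterion(baseline_rate, rate):
            return procedure
    return None  # no evaluated procedure met the criterion

# Hypothetical brief-evaluation data (responses per minute):
evals = [
    ("reprimand + hand drying", 1.0),
    ("reprimand + 15-s manual restraint", 0.4),
]
print(select_least_restrictive(8.0, evals))  # prints "reprimand + hand drying"
```

Because the list is ordered by restrictiveness, the less intrusive reprimand-and-hand-drying procedure wins even though the restraint procedure suppresses responding further, mirroring the least-restrictive-alternative logic in the quoted passage.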

Use Punishers of Sufficient Quality and Magnitude

The quality (or effectiveness) of a punisher is relative to a number of past and current variables that affect the participant. For example, Thompson and colleagues found that 15 seconds of physical restraint was a high-quality punisher for Ricky, Donna, and Lynn, but the physical restraint was not influential for Shelly. Although stimuli that reliably evoke escape and avoidance behaviors often function as high-quality punishers, practitioners should recognize that a stimulus change that effectively suppresses some behaviors may not affect other behaviors and that highly motivated problem behaviors may only be suppressed by a particularly high-quality punisher.

Generally, basic and applied researchers have found that the greater the intensity (magnitude, or amount) of a punishing stimulus, the greater the suppression of behavior. This finding is conditional on delivering the punishing stimulus at its optimum level of magnitude initially, rather than gradually increasing the level over time (Azrin & Holz, 1966). For example, Thompson and colleagues (1999) used the previously described punisher assessment procedure to determine an optimum level of magnitude for the punishing stimulus: a magnitude that produced a 75% or greater decrease in self-injurious behavior (SIB) from baseline occurrences for four adult participants. The punishing stimuli meeting their criterion included: "Ricky's hands were held in his lap for 15 s each time he engaged in SIB. Donna and Lynn both received a verbal reprimand and had their hands held across their chests for 15 s following SIB" (p. 321).

Beginning with a punishing stimulus of sufficient magnitude is important because participants may adapt to the punishing stimulus when levels of magnitude are increased gradually. For example, it is possible that 15 seconds of movement restriction would have been ineffective as punishment for Ricky, Donna, and Lynn's SIB if Thompson and colleagues had begun with a punishing stimulus of 3 seconds and gradually increased its magnitude in 3-second intervals.

Use Varied Punishers

The effectiveness of a punishing stimulus can decrease with repeated presentations of that stimulus. Using a variety of punishers may help to reduce habituation effects. In addition, using various punishers may increase the effectiveness of less intrusive punishers. For example, Charlop, Burgio, Iwata, and Ivancic (1988) compared varied punishers to a single presentation of one of the punishers (i.e., a stern "No!", overcorrection, time-out with physical restraint, a loud noise). Three-, 5-, and 6-year-old children


with developmental disabilities served as participants. Their problem behaviors included aggression (Child 1), self-stimulation and destructive behavior (Child 2), and aggression and out-of-seat behavior (Child 3). The varied-punisher condition was slightly more effective than the single-presentation condition and enhanced the sensitivity of the behavior to less intrusive punishing stimuli. Charlop and colleagues concluded, "It appears that by presenting a varied format of commonly used punishers, inappropriate behaviors may further decrease without the use of more intrusive punishment procedures" (p. 94; see Figure 8).

Deliver the Punisher at the Beginning of a Behavioral Sequence

Punishing an inappropriate behavior as soon as it begins is more effective than waiting until the chain of behavior has been completed (Solomon, 1964). Once the sequence of responses that make up the problem behavior is initiated, powerful secondary reinforcers associated with completing each step of the chain may prompt its continuation, thereby counteracting the inhibiting or suppressing effects of the punishment that occurs at the end of the sequence. Therefore, whenever practical, the punishing stimulus should be presented early in the behavioral sequence

rather than later. For example, if violent arm swinging is a reliable precursor to self-injurious eye poking, then punishment (e.g., response blocking, restraint) should be delivered as soon as arm swinging starts.

Punish Each Instance of the Behavior Initially

Punishment is most effective when the punisher follows each response. This is especially important when a punishment intervention is first implemented.

Gradually Shift to an Intermittent Schedule of Punishment

Although punishment is most effective when the punisher immediately follows each occurrence of the problem behavior, practitioners may find a continuous schedule of punishment unacceptable because they lack the resources and time to attend to each occurrence of the behavior (O'Brien & Karsh, 1990). Several studies have found that, after responding has been reduced by a continuous schedule of punishment, an intermittent schedule of punishment may be sufficient to maintain the behavior at a

[Line graph: percent of intervals (0–100) across approximately 50 sessions, with alternating Baseline and Treatment phases; conditions include No Treatment, "No," Varied, Time-Out, and Overcorrection.]

Figure 8 Percentage of intervals of occurrence of self-stimulation and destructive behavior for a 6-year-old girl with autism.
From "Stimulus Variation as a Means of Enhancing Punishment Effects," by M. H. Charlop, L. D. Burgio, B. A. Iwata, and M. T. Ivancic, 1988, Journal of Applied Behavior Analysis, 21, p. 92. Copyright 1988 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.

socially acceptable frequency (e.g., Clark, Rowbury, Baer, & Baer, 1973; Lerman et al., 1997; Romanczyk, 1977).

We recommend two guidelines for using intermittent punishment. First, and this is especially important, a continuous (FR 1) schedule of punishment should be used to diminish the problem behavior to a clinically acceptable level before gradually thinning to an intermittent schedule of punishment. Second, combine intermittent punishment with extinction. It is unlikely that reduced responding will continue under intermittent punishment if the reinforcer that maintains the problem behavior cannot be identified and withheld. If these two guidelines for intermittent punishment are met and the frequency of the problem behavior increases to an unacceptable level, return to a continuous schedule of punishment and then, after recovery of acceptably low rates of responding, gradually shift to an intermittent schedule of punishment that is denser than the one used previously (e.g., VR 2 rather than VR 4).
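The thinning sequence described above, FR 1 first, then progressively leaner variable-ratio (VR) schedules, stepping back to a denser schedule if responding recovers, can be sketched in code. This is a hypothetical simulation, not a procedure from the cited studies, in which a VR n schedule is modeled as punishing each response with probability 1/n:

```python
import random

def vr_schedule(n, seed=0):
    """Yield True when the punisher should be delivered under a variable-ratio
    (VR) n schedule: each response is punished with probability 1/n, so on
    average every nth response is punished."""
    rng = random.Random(seed)
    while True:
        yield rng.random() < 1.0 / n

# FR 1 (continuous punishment) is the special case n = 1: every response is punished.
fr1 = vr_schedule(1)
assert all(next(fr1) for _ in range(10))

# Thinning: after responding is suppressed under FR 1, shift to VR 2, then VR 4.
# If the problem behavior recovers, return to FR 1 and re-thin through the denser
# VR 2 schedule before trying VR 4 again.
vr4 = vr_schedule(4)
punished = sum(next(vr4) for _ in range(1000))
print(f"VR 4: punished {punished} of 1000 responses")  # roughly 250 on average
```

Note that a smaller n means a denser schedule, which is why returning from VR 4 to VR 2 (or FR 1) makes punishment more frequent when responding recovers.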

Use Mediation with a Response-to-Punishment Delay

As with reinforcers, punishers that occur immediately following a response are more effective than are punishers presented after a period of time has elapsed since the response occurred. In the context of reinforcement, Stromer, McComas, and Rehfeldt (2000) reminded us that using continuous and intermittent schedules of reinforcement might be just the first steps of programming consequences for everyday situations, because the consequences that maintain responding in natural environments are often delayed:

Establishing the initial instances of a behavioral repertoire typically requires the use of programmed consequences that occur immediately after the target response occurs. However, the job of the applied behavior analyst also involves the strategic use of delayed reinforcement. (p. 359)

The job of the applied behavior analyst could also involve the strategic programming of contingencies involving a delay-to-punishment interval. As Lerman and Vorndran (2002) noted:

Consequences for problem behavior are frequently delayed in the natural environment. Caregivers and teachers often are unable to monitor behavior closely or to deliver lengthy punishers (e.g., 15-min contingent work) immediately following instances of problem behavior. Punishment also may be delayed when the individual actively resists application of the programmed consequences by struggling with the punishing agent or running away. In some cases, problem behavior occurs primarily in the absence of the punishing agent, necessarily delaying programmed consequences until the behavior is detected. (pp. 443–444)

Generally, applied behavior analysts have avoided programming a delay-to-punishment interval. In their review of the basic and applied punishment literature, Lerman and Vorndran (2002) found just two applied studies that addressed the variables related to the effective use of a delay-to-punishment interval (Rolider & Van Houten, 1985; Van Houten & Rolider, 1988), and noted that these applied behavior analysts used a variety of techniques to mediate the delay between the occurrence of the problem behavior and the punishing consequences.

[They] demonstrated the efficacy of delayed punishment using various mediated consequences with children with emotional and developmental disabilities. One form of mediation involved playing audiotape recordings of the child’s disruptive behavior that were collected earlier in the day. The punishing consequence (physical restraint, verbal reprimands) then was delivered. In some cases, the tape recorder was clearly visible to the child while the recordings were being collected, and a verbal explanation of its role in the delivery of delayed punishment was provided. These factors may have served to bridge the temporal gap between inappropriate behavior and its consequence (e.g., by functioning as discriminative stimuli for punishment). (p. 444)

Supplement Punishment with Complementary Interventions

Applied behavior analysts typically do not use punishment as a single intervention; they supplement punishment with other interventions, primarily differential reinforcement, extinction, and a variety of antecedent interventions. Basic and applied researchers consistently find that the effectiveness of punishment is enhanced when the learner can make other responses for reinforcement. In most circumstances applied behavior analysts should incorporate differential reinforcement of alternative behavior (DRA), differential reinforcement of incompatible behavior (DRI), or differential reinforcement of other behavior (DRO) into a treatment program to supplement punishment. When used as a reductive procedure for problem behavior, differential reinforcement consists of two components: (1) providing reinforcement contingent on the occurrence of a behavior other than the problem behavior, and (2) withholding reinforcement for the problem behavior (i.e., extinction). The study by Thompson and colleagues (1999) presented previously in this chapter (see Figure 4) provides an excellent example of how reinforcement of an alternative behavior can enhance the suppressive effects of punishment and enable “relatively benign punishment procedures” to be effective even for chronic problem behaviors that have been resistant to change.


Punishment by Stimulus Presentation

We recommend that practitioners reinforce alternative behaviors copiously. Additionally, the more reinforcement the learner obtains by emitting appropriate behaviors, the less motivated he will be to emit the problem behavior. In other words, heavy and consistent doses of reinforcement for alternative behaviors function as an abolishing operation that weakens (abates) the frequency of the problem behavior.

There is still another important reason for recommending the reinforcement of alternative behaviors. Applied behavior analysts are in the business of building repertoires by teaching the clients and students they serve new skills and more effective ways of controlling their environments and achieving success. Punishment (with the exception of some overcorrection procedures) eliminates behaviors from a person’s repertoire. Although the person will be better off without those behaviors in her repertoire, punishment only teaches her what not to do; it does not teach her what to do instead.

The effectiveness of antecedent interventions such as functional communication training (FCT), the high-probability (high-p) request sequence, and noncontingent reinforcement (NCR), which diminish the frequency of problem behaviors by decreasing the effectiveness of the reinforcers that maintain the problem behaviors, can be made more effective when combined with punishment. For instance, Fisher and colleagues (1993) found that, although FCT alone did not reduce to clinically significant levels the destructive behaviors of four participants with severe retardation and communication deficits, a combination of FCT and punishment produced the largest and most consistent reductions in problem behaviors.

Be Prepared for Negative Side Effects

It is difficult to predict the side effects that may result from punishment (Reese, 1966). The suppression of one problem behavior by punishment may lead to an increase in other undesirable behaviors. For example, punishment of self-injurious behavior may produce increased levels of noncompliance or aggression. Punishing one problem behavior may lead to a parallel decrease in desirable behaviors. For example, requiring that a student rewrite an ill-conceived paragraph may result in his ceasing to produce any academic work. Although punishment may produce no undesirable side effects, practitioners should be alert to problems such as escape and avoidance, emotional outbursts, and behavioral contrast, and should have a plan for dealing with such events should they occur.

Record, Graph, and Evaluate Data Daily

Data collection in the initial sessions of a punishment-based intervention is especially critical. Unlike some behavior reduction procedures, such as extinction and differential reinforcement of alternative behavior, whose suppressive effects are often gradual, the suppressive effects of punishment are abrupt. In their classic review of the basic research on punishment, Azrin and Holz (1966) wrote:

Virtually all studies of punishment have been in complete agreement that the reduction of responses by punishment is immediate if the punishment is at all effective. When the data have been presented in terms of the number of responses per day, the responses have been drastically reduced or eliminated on the very first day in which punishment was administered. When the data have been presented in terms of moment-to-moment changes, the reduction of responses has resulted within the first few deliveries of punishment or within a few minutes. (p. 411)

The first data points from the punishment conditions in all of the graphs presented in this chapter provide additional empirical evidence for Azrin and Holz’s statement of the immediate effects of punishment. Because of the abrupt effect of punishment, the practitioner should pay particular attention to the data from the first session or two of intervention. If a noticeable reduction of the problem behavior has not occurred within two sessions of a punishment-based intervention, we recommend that the practitioner make adjustments to the intervention.
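The two-session decision rule described above can be sketched in code. Everything in this sketch is a hypothetical illustration: the session counts, the 50% reduction criterion, and the function names are invented for the example and are not part of the original text; an actual criterion for a "noticeable reduction" would be set clinically for the individual client.

```python
# Hypothetical sketch of the decision rule: punishment effects should be
# abrupt, so compare the first two intervention sessions against baseline.
# Data and the 0.5 (50%) criterion are fabricated for illustration only.

def mean(values):
    """Arithmetic mean of a list of per-session response counts."""
    return sum(values) / len(values)

def noticeable_reduction(baseline, intervention, criterion=0.5):
    """Return True if the first two intervention sessions average at or
    below `criterion` (a proportion) of mean baseline responding."""
    first_two = intervention[:2]
    return mean(first_two) <= criterion * mean(baseline)

# Example with fabricated responses-per-session data
baseline_counts = [24, 30, 27, 29]    # mean baseline = 27.5 responses
punishment_counts = [6, 4, 5, 3]      # first two sessions average 5

if noticeable_reduction(baseline_counts, punishment_counts):
    print("Suppression is evident; continue and monitor daily.")
else:
    print("No clear reduction in two sessions; adjust the intervention.")
```

With the fabricated data above, the first two punishment sessions average well below half the baseline level, so the sketch would counsel continuing the intervention while recording data daily.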

Frequent inspection of the data from a punishment perspective reminds those involved of the purpose of the intervention and reveals whether the problem behavior is being reduced or eliminated as intended. When the data indicate that a clinically or socially significant change has occurred and has been maintained, punishment can be shifted to an intermittent schedule or perhaps terminated altogether.

Ethical Considerations Regarding the Use of Punishment

Ethical considerations regarding the use of punishment revolve around three major issues: the client’s right to safe and humane treatment, the professional’s responsibility to use least restrictive procedures, and the client’s right to effective treatment.8

8The chapter entitled “Ethical Considerations for Applied Behavior Analysts” provides a detailed discussion of ethical issues and practices for applied behavior analysts.


Right to Safe and Humane Treatment

Dating from the Hippocratic Oath (Hippocrates, 460 BC–377 BC), the first ethical canon and responsibility for any human services provider is to do no harm. Accordingly, any behavior change program, be it a punishment-based intervention to reduce life-threatening self-injurious behavior or an application of positive reinforcement to teach a new academic skill, must be physically safe for all involved and contain no elements that are degrading or disrespectful to participants.

Treatments are deemed safe when they put neither the caregiver nor the target individual at physical, psychological, or social risk (Favell & McGimsey, 1993). Although there exists no universally accepted definition of what constitutes humane treatment, a reasonable case could be made that humane treatments are (a) designed for therapeutic effectiveness, (b) delivered in a compassionate and caring manner, (c) assessed formatively to determine effectiveness and terminated if effectiveness is not demonstrated, and (d) sensitive and responsive to the overall physical, psychological, and social needs of the person.

Least Restrictive Alternative

A second canon of ethics for human services professionals is to intrude on a client’s life only as necessary to provide effective intervention. The doctrine of the least restrictive alternative holds that less intrusive procedures should be tried and found to be ineffective before more intrusive procedures are implemented. Interventions can be viewed as falling along a continuum of restrictiveness from least to most. The more a treatment procedure affects a person’s life or independence, such as his ability to go about daily activities in his normal environment, the greater its restrictiveness. A completely unrestrictive intervention is a logical fallacy. Any treatment must affect the person’s life in some way to qualify as an intervention. At the other end of the continuum, absolute restrictiveness exists during solitary confinement, where personal independence is unavailable. All behavior change interventions developed and implemented by applied behavior analysts fall within these extremes.

Selecting any punishment-based intervention essentially rules out as ineffective all positive or positive reductive approaches based on their demonstrated inability to improve the behavior. For example, Gaylord-Ross (1980) proposed a decision-making model for reducing aberrant behavior that suggested that practitioners rule out assessment considerations, inappropriate or ineffective schedules of reinforcement, ecological variables, and curriculum modifications before punishment is implemented.

Some authors and professional organizations have advanced the position that all punishment-based procedures are inherently intrusive and should never be used (e.g., Association for Persons with Severe Handicaps, 1987; LaVigna & Donnellan, 1986; Mudford, 1995). Others have advanced the counter position that punishment-based procedures, because of their inherent level of intrusiveness, should be used only as a last resort (Gaylord-Ross, 1980).

Most people would rate interventions based on positive reinforcement as less restrictive than interventions based on negative reinforcement; reinforcement interventions as less restrictive than punishment interventions; and interventions using negative punishment as less restrictive than those using positive punishment. However, the intrusiveness or restrictiveness of an intervention cannot be determined by the principles of behavior on which it is based. Restrictiveness is a relative concept that depends on the procedural details and ultimately rests at the level of the person with whom it is applied. A positive reinforcement intervention that requires deprivation may be more restrictive than a positive punishment procedure in which a buzzer sounds each time an incorrect response is made, and what one person considers intrusive may pose no discomfort for another person. Horner (1990) suggested that most people accept punishment interventions to reduce challenging behavior as long as those interventions: “(a) do not involve the delivery of physical pain, (b) do not produce effects that require medical attention, and (c) are subjectively judged to be within the typical norm of how people in our society should treat each other” (pp. 166–167).

In their review of response-reduction interventions based on the principle of punishment, Friman and Poling (1995) pointed out that punishment-based tactics, such as those that require that the person make an effortful response contingent on the occurrence of target behavior, meet Horner’s (1990) criteria for acceptable response-reduction interventions. They stated: “None of the procedures caused pain or required medical attention, nor did they contrast with societal norms. For example, coaches often require disobedient players to run laps, and drill instructors make misbehaving recruits do push-ups” (p. 585).

Although least restrictive alternative practices assume that the less intrusive procedures are tried and found ineffective before a more restrictive intervention is introduced, practitioners must balance that approach against an effectiveness standard. Gast and Wolery (1987) suggested that, if a choice must be made between a less intrusive but ineffective procedure and a more intrusive but effective procedure, the latter should be chosen. A Task Force on the Right to Effective Treatment appointed by the Association for Behavior Analysis provided the following perspective on the importance of judging the ultimate restrictiveness of treatment options on the basis of their degree of proven effectiveness:

Consistent with the philosophy of least restrictive yet effective treatment, exposure of an individual to restrictive procedures is unacceptable unless it can be shown that such procedures are necessary to produce safe and clinically significant behavior change. It is equally unacceptable to expose an individual to a nonrestrictive intervention (or a series of such interventions) if assessment results or available research indicate that other procedures would be more effective. Indeed, a slow-acting but nonrestrictive procedure could be considered highly restrictive if prolonged treatment increases risk, significantly inhibits or prevents participation in needed training programs, delays entry into a more optimal social or living environment, or leads to adaptation and the eventual use of a more restrictive procedure. Thus, in some cases, a client’s right to effective treatment may dictate the immediate use of quicker-acting, but temporarily more restrictive, procedures.

A procedure’s overall level of restrictiveness is a combined function of its absolute level of restrictiveness, the amount of time required to produce a clinically acceptable outcome, and the consequences associated with delayed intervention. Furthermore, selection of a specific treatment technique is not based on personal conviction. Techniques are not considered either “good” or “bad” according to whether they involve the use of antecedent rather than consequent stimuli or reinforcement rather than punishment. For example, positive reinforcement, as well as punishment, can produce a number of indirect effects, some of which are undesirable.

In summary, decisions related to treatment selection are based on information obtained during assessment about the behavior, the risk it poses, and its controlling variables; on a careful consideration of the available treatment options, including their relative effectiveness, risks, restrictiveness, and potential side effects; and on examination of the overall context in which treatment will be applied. (Van Houten et al., 1988, pp. 113–114)

Right to Effective Treatment

Ethical discussions regarding the use of punishment revolve most often around its possible side effects and how experiencing the punisher may cause unnecessary pain and possible psychological harm for the person. Although each of these concerns deserves careful consideration, the right to effective treatment raises an equally important ethical issue, especially for persons who experience chronic, life-threatening problem behaviors. Some maintain that failing to use a punishment procedure that research has shown to be effective in suppressing self-destructive behavior similar to their client’s is unethical because doing so withholds a potentially effective treatment and risks maintaining a dangerous or uncomfortable state for the person. For example, Baer (1971) stated that, “[Punishment] is a legitimate therapeutic technique that is justifiable and commendable when it relieves persons of the even greater punishments that result from their own habitual behavior” (p. 111).

For some clients, punishment-based intervention may be the only means of reducing the frequency, duration, or magnitude of chronic and dangerous behaviors that have proven resistant to change using positive reinforcement, extinction, or positive reductive approaches such as differential reinforcement of alternative behavior. Such circumstances “may justify, if not necessitate, the use of punishment” (Thompson et al., 1999, p. 317). As Iwata (1988) explained, if all other less-intrusive treatments with a sufficient research base to promise a legitimate chance of success have failed, the use of punishment procedures is the only ethical option:

In the ideal world, treatment failures do not occur. But in the actual world failures do occur, in spite of our best efforts, leaving us with these options: continued occurrence of the behavior problem toward some devastating endpoint, restraint, sedating drugs, or aversive contingencies. I predict that if we apply our skills to maximize the effectiveness of positive reinforcement programs, we will succeed often. After following such a course, my further prediction is that if we reach the point of having to decide among these ultimate default options, the client’s advocate, the parent, or, if necessary, the courts will select or use the option of aversive contingencies. Why? Because under such circumstances it is the only ethical action. (pp. 152–153)

Punishment Policy and Procedural Safeguards

Armed with knowledge from the experimental literature, real-world variables and contingencies with which to contend (i.e., chronic, life-threatening problems), and practices and procedures rooted in ethical codes of conduct, practitioners can consider punishment approaches, when necessary, to provide meaningful programs for the persons in their care. One mechanism for ensuring that best practice approaches are used (Peters & Heron, 1993) is to adopt and use dynamic policies and procedures that provide clear guidelines and safeguards to practitioners.


Figure 9 Suggested components in an agency’s policy and procedures guidelines to help ensure the ethical, safe, and effective use of punishment.

Policy Requirements

• Intervention must conform to all local, state, and federal statutes.

• Intervention must conform to the policies and codes of ethical conduct of relevant professional organizations.

• Intervention should include procedures for strengthening and teaching alternative behaviors.

• Intervention must include plans for generalization and maintenance of behavior change and criteria for eventual termination or reduction of punishment.

• Informed consent must be obtained from the client or a parent or legal advocate before intervention begins.

Procedural Safeguards

• Before intervention begins, all relevant staff must be trained in (a) the technical details of properly administering the punishment procedure, (b) procedures for ensuring the physical safety of the client and staff and the humane treatment of the client, and (c) what to do in the event of negative side effects such as emotional outbursts, escape and avoidance, aggression, and noncompliance.

• Supervision and feedback of staff administering the punishment intervention must be provided and, if necessary, booster training sessions provided.

Evaluation Requirements

• Each occurrence of the problem behavior must be observed and recorded.

• Each delivery of the punisher must be recorded and the client’s reactions noted.

• Periodic review of the data (e.g., daily, weekly) by a team of parents/advocates, staff, and technical consultants must be conducted to ensure that ineffective treatment is not prolonged or effective treatment terminated.

• Social validity data must be obtained from the client, significant others, and staff on (a) treatment acceptability and (b) the real and likely impact of any behavior change on the client’s current circumstances and future prospects.

Agencies can help protect and ensure their clients’ rights to safe, humane, least-restrictive, and effective treatment by developing policy and procedural guidelines that must be followed when any punishment-based intervention is implemented (Favell & McGimsey, 1993; Griffith, 1983; Wood & Braaten, 1983). Figure 9 provides an outline and examples of the kinds of components that might be included in such a document.

Practitioners should also consult their local, state, or national professional association policy statements regarding the use of punishment. For example, the Association for the Advancement of Behavior Therapy (AABT) provides guidelines that address treatment selection, including punishment (Favell et al., 1982). The Association for Behavior Analysis adheres to the American Psychological Association’s Ethics Code (2004), which, in turn, addresses treatment issues. Collectively, policy statements address implementation requirements, procedural guidelines and precautions, and evaluation methods that an agency should use when implementing any punishment-based intervention.

Concluding Perspectives

We conclude this chapter with brief comments on three complementary perspectives regarding the principle of punishment and the development and application of interventions involving punishment by contingent aversive stimulation. We believe that applied behavior analysis would be a stronger, more competent discipline, and its practitioners would be more effective, if (a) punishment’s natural role and contributions to survival and learning are recognized and appreciated, (b) more basic and applied research on punishment is conducted, and (c) treatments featuring positive punishment are viewed as default technologies to be used only when all other methods have failed.


Punishment’s Natural and Necessary Role in Learning Should Be Recognized

Behavior analysts should not shy away from punishment. Positive and negative punishment contingencies occur naturally in everyday life as part of a complex mix of concurrent reinforcement and punishment contingencies, as Baum (1994) illustrated so well in this example:

Life is full of choices between alternatives that offer different mixes of reinforcement and punishment. Going to work entails both getting paid (positive reinforcement) and suffering hassles (positive punishment), whereas calling in sick may forfeit some pay (negative punishment), avoid the hassles (negative reinforcement), allow a vacation (positive reinforcement), and incur some workplace disapproval (positive punishment). Which set of relations wins out depends on which relations are strong enough to dominate, and that depends on both the present circumstances and the person’s history of reinforcement and punishment. (p. 60)

Punishment is a natural part of life. Vollmer (2002) suggested that the scientific study of punishment should continue because unplanned and planned punishment occur frequently and because planned, sophisticated applications of punishment are within the scope of inquiry for applied behavior analysts. Whether punishment is socially mediated, planned or unplanned, or conducted by sophisticated practitioners, we agree with Vollmer that a science of behavior should study punishment.

Scientists interested in the nature of human behavior cannot ignore or otherwise obviate the study of punishment. There should be no controversy. Scientists and practitioners are obligated to understand the nature of punishment if for no other reason than because punishment happens. (p. 469)

More Research on Punishment Is Needed

Although many of the inappropriate and ineffective applications of punishment are the result of misunderstanding, other misapplications no doubt reflect our incomplete knowledge of the principle as a result of inadequate basic and applied research (Lerman & Vorndran, 2002).

Most of our knowledge about and recommendations for applying punishment are derived from basic research conducted more than 40 years ago. Although sound scientific data that describe still-relevant questions have an unlimited shelf life, much more basic research on the mechanisms and variables that produce effective punishment is needed.

Basic laboratory research allows for the control of variables that is difficult or impossible to attain in applied settings. Once mechanisms are revealed in basic research, however, applications to real-world challenges can be devised and adapted with more confidence regarding the degree of potential effectiveness and limitations. Practitioners must recognize when, how, why, and under what conditions punishment techniques produce behavioral suppression for the people with whom they work.

We support Horner’s (2002) call for practical applications of punishment research in field settings. It is important to determine how punishment works in situations in which environmental variables may not be as well controlled and in which the practitioner delivering the punisher might not be a trained professional. Educational and clinical applications of punishment also demand that behavior analysts understand the individual, contextual, and environmental variables that produce effective application. Without a clear and refined knowledge base about these variables and conditions, applied behavior analysis as a science of behavior cannot make a legitimate claim to have achieved a comprehensive analysis of its own basic concepts (Lerman & Vorndran, 2002; Vollmer, 2002).

Interventions Featuring Positive Punishment Should Be Treated as Default Technologies

Iwata (1988) recommended that punishment-based interventions involving the contingent application of aversive stimulation be treated as default technologies. A default technology is one that a practitioner turns to when all other methods have failed. Iwata recommended that behavior analysts not advocate for the use of aversive technologies (because advocacy is not effective, not necessary, and not in the best interests of the field), but that they be involved in research and the development of effective aversive technologies.

We must do the work because, whether or not we like it, default technologies will evolve whenever there is failure and, in the case of aversive stimulation, we are in a unique position to make several contributions. First, we can modify the technology so that it is effective and safe. Second, we can improve it by incorporating contingencies of positive reinforcement. Third, we can regulate it so that application will proceed in a judicious and ethical manner. Last, and surely most important, by studying the conditions under which default technologies arise, as well as the technologies themselves, we might eventually do away with both. Can you think of a better fate for the field of applied behavior analysis? (p. 156)


Summary

Definition and Nature of Punishment

1. Punishment has occurred when a stimulus change immediately follows a response and decreases the future frequency of that type of behavior in similar conditions.

2. Punishment is defined neither by the actions of the person delivering the consequences nor by the nature of those consequences. A decrease in the future frequency of the occurrence of the behavior must be observed before a consequence-based intervention qualifies as punishment.

3. Positive punishment has occurred when the frequency of responding has been decreased by the presentation of a stimulus (or an increase in stimulus intensity) immediately following a behavior.

4. Negative punishment has occurred when the frequency of responding has been decreased by the removal of a stimulus (or a decrease in the stimulus intensity) immediately following a behavior.

5. Because aversive events are associated with positive punishment and with negative reinforcement, the term aversive control is often used to describe interventions involving either or both of these two principles.

6. A discriminative stimulus for punishment, or SDp, is a stimulus condition in the presence of which a response class occurs at a lower frequency than it does in the absence of the SDp as a result of a conditioning history in which responses in the presence of the SDp have been punished and similar responses in the absence of that stimulus have not been punished (or have resulted in a reduced frequency or magnitude of punishment).

7. A punisher is a stimulus change that immediately follows the occurrence of a behavior and reduces the future frequency of that type of behavior.

8. An unconditioned punisher is a stimulus whose presentation functions as punishment without having been paired with any other punishers.

9. A conditioned punisher is a stimulus that has acquired its punishing capabilities by being paired with unconditioned or conditioned punishers.

10. A generalized conditioned punisher will function as punishment under a wide range of motivating operations because of its previous pairing with numerous unconditioned and conditioned punishers.

11. In general, the results of basic and applied research show that punishment is more effective when

• the onset of the punisher occurs as soon as possible after the occurrence of a target response,

• the intensity of the punisher is high,

• each occurrence of the behavior is followed by the punishing consequence,

• reinforcement for the target behavior is reduced, and

• reinforcement is available for alternative behaviors.

12. Punishment sometimes causes undesirable side effects and problems, such as the following:

• Emotional and aggressive reactions to aversive stimulation

• Escape and avoidance behaviors

• Behavioral contrast: reduced responding from punishment in one situation may be accompanied by increased responding in situations in which responses go unpunished

• The modeling of undesirable behavior

• Overuse of punishment caused by the negative reinforcement of the punishing agent’s behavior (i.e., the immediate cessation of problem behavior)

Examples of Positive Punishment Interventions

13. Reprimands: Used sparingly, a firm reprimand such as “No!” can suppress future responding.

14. Response blocking: When the learner begins to emit the problem behavior, the therapist physically intervenes to prevent or “block” the completion of the response.

15. Overcorrection is a punishment-based tactic in which, contingent on each occurrence of the problem behavior, the learner is required to engage in effortful behavior that is directly or logically related to the problem.

16. In restitutional overcorrection, the learner must repair the damage caused by the problem behavior and then bring the environment to a condition vastly better than it was prior to the misbehavior.

17. In positive practice overcorrection, the learner repeatedly performs a correct form of the behavior, or a behavior incompatible with the problem behavior, for a specified time or number of responses.

18. Contingent electric stimulation can be a safe and effective method for suppressing chronic and life-threatening self-injurious behavior.

Guidelines for Using Punishment Effectively

19. To apply punishment with optimal effectiveness while minimizing undesirable side effects, a practitioner should:

• Select effective and appropriate punishers: (a) conduct punisher assessments to identify the least intrusive punisher that can be applied consistently and safely; (b) use punishers of sufficient quality and magnitude; (c) use a variety of punishers to combat habituation and increase the effectiveness of less intrusive punishers.

• If problem behavior consists of a response chain, deliver the punisher as early in the response sequence as possible.

• Punish each occurrence of the behavior.


• Gradually shift to an intermittent schedule of punishment if possible.

• Use mediation with a response-to-punishment delay.

• Supplement punishment with complementary interventions, in particular, differential reinforcement, extinction, and antecedent interventions.

• Watch and be prepared for unwanted side effects.

• Record, graph, and evaluate data daily.

Ethical Considerations Regarding the Use of Punishment

20. The first ethical responsibility for any human services professional or agency is to do no harm. Any intervention must be physically safe for all involved and contain no elements that are degrading or disrespectful to the client.

21. The doctrine of the least restrictive alternative holds that less intrusive procedures (e.g., positive reductive approaches) must be tried first and found to be ineffective before more intrusive procedures are implemented (e.g., a punishment-based intervention).

22. A client’s right to effective treatment raises an important ethical issue. Some maintain that the failure to use a punishment procedure that research has shown to suppress self-destructive behavior similar to the client’s is unethical because it withholds a potentially effective treatment and may maintain a dangerous or uncomfortable state for the person.

23. Agencies and individuals providing applied behavior analysis services can help ensure that applications of punishment-based interventions are safe, humane, ethical, and effective by creating and following a set of policy standards, procedural safeguards, and evaluation requirements.

Concluding Perspectives on Punishment

24. Applied behavior analysts should recognize and appreciate the natural role of punishment and the importance of punishment to learning.

25. Many misapplications of punishment reflect the field’s incomplete knowledge of the principle. More basic and applied research on punishment is needed.

26. Iwata (1988) recommended that punishment-based interventions involving the contingent application of aversive stimulation be treated as default technologies; that is, as interventions to be used only when other methods have failed. He argued that applied behavior analysts should not advocate for the use of aversive technologies, but instead must be involved in conducting research on such interventions to (a) make them effective and safe, (b) improve them by incorporating contingencies of positive reinforcement, (c) regulate their judicious and ethical application, and (d) study the conditions under which default technologies are used so as to eventually make them unnecessary.


Punishment by Removal of a Stimulus

Key Terms

bonus response cost
contingent observation
exclusion time-out
hallway time-out
nonexclusion time-out
partition time-out
planned ignoring
response cost
time-out from positive reinforcement
time-out ribbon

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List©, Third Edition

Content Area 3: Principles, Processes and Concepts

3-5 Define and provide examples of (positive and) negative punishment.

3-6 Define and provide examples of conditioned and unconditioned punishment.

Content Area 9: Behavior Change Procedures

9-3 Use (positive and) negative punishment:

(a) Identify and use punishers.

(b) Use appropriate parameters and schedules of punishment.

(c) State and plan for the possible unwanted effects of the use of punishment.

© 2006 The Behavior Analyst Certification Board, Inc.® (BACB®) all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

From Chapter 15 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


Punishment by the contingent removal of a stimulus is referred to as negative punishment. In negative punishment, an environmental change occurs such that a stimulus is removed subsequent to the performance of a behavior, and the corresponding future frequency of the preceding behavior is reduced. By contrast, in positive punishment, a stimulus is presented, and the corresponding future frequency of that behavior is reduced. Table 1 shows the distinction between positive and negative punishment with respect to an environmental stimulus change.

Negative punishment occurs in two principal ways: time-out from positive reinforcement and response cost (see the shaded area of Table 1). This chapter defines and operationalizes time-out from positive reinforcement and response cost, provides examples of how these punishment procedures are used in applied settings, and offers guidelines to help practitioners implement time-out and response cost effectively.

Definition of Time-Out

Time-out from positive reinforcement, or simply time-out, is defined as the withdrawal of the opportunity to earn positive reinforcement or the loss of access to positive reinforcers for a specified time, contingent on the occurrence of a behavior. The effect of either of these procedures is the same: The future frequency of the target behavior is reduced. Implicit in the definition of time-out are three important aspects: (a) the discrepancy between the “time-in” and the time-out environment, (b) the response-contingent loss of access to reinforcement, and (c) a resultant decrease in the future frequency of the behavior. Contrary to popular thinking, time-out is not exclusively defined as, nor does it require, removing an individual to an isolated or secluded setting. Although such a removal procedure may accurately describe isolation time-out (Costenbader & Reading-Brown, 1995), it is by no means the only way by which time-out can be used. Technically, time-out as a negative punishment procedure removes a reinforcing stimulus for a specified time contingent on the occurrence of a behavior, the effect of which is to decrease the future frequency of the behavior.

Time-out can be viewed from procedural, conceptual, and functional perspectives. Procedurally, it is that period of time when the person is either removed from a reinforcing environment (e.g., a student is removed from the classroom for 5 minutes) or loses access to reinforcers within an environment (e.g., a student is ineligible to earn reinforcers for 5 minutes). Conceptually, the distinction between the time-in and the time-out environment is of paramount importance. The more reinforcing the time-in setting is, the more effective time-out is likely to be as a punisher. Stated otherwise, the greater the difference between the reinforcing value of time-in and the absence of that reinforcing value in the time-out setting, the more effective time-out will be. From a functional perspective, time-out involves the reduced frequency of the future occurrence of the behavior. Without a reduction in the frequency of future behavior, time-out is not in effect, even if the person is procedurally removed from the setting or loses access to reinforcers. For instance, if a teacher removes a student from the classroom (presumably the time-in setting), but upon the student’s return the problem behavior continues, effective time-out cannot be claimed.

Time-Out Procedures for Applied Settings

There are two basic types of time-out: nonexclusion and exclusion. Within each type, several variations allow the practitioner flexibility in deciding a course of action for reducing behavior. As a rule, nonexclusion time-out is the recommended method of first choice because practitioners are ethically bound to employ the most powerful, but least restrictive, alternative when deciding on eligible variations.

Table 1   Distinction Between Positive and Negative Punishment

                           Stimulus change
Future frequency           Stimulus presented                Stimulus removed
Behavior is reduced        Positive punishment               Negative punishment
                           (e.g., “No!,” overcorrection)     (e.g., time-out, response cost)


Nonexclusion Time-Out

Nonexclusion time-out means that the participant is not completely removed physically from the time-in setting. Although the person’s position relative to that setting shifts, he or she remains within the environment. Nonexclusion time-out occurs in any one of four ways: planned ignoring, withdrawal of a specific reinforcer, contingent observation, and the time-out ribbon. Each variation has a common element: Access to reinforcement is lost, but the person remains within the time-in setting. For instance, if a student misbehaves during outdoor recess, he would be required to stand next to the adult playground monitor for a period of time.

Planned Ignoring

Planned ignoring occurs when social reinforcers—usually attention, physical contact, or verbal interaction—are removed for a brief period, contingent on the occurrence of an inappropriate behavior. Planned ignoring assumes that the time-in setting is reinforcing and that all extraneous sources of positive reinforcement can be eliminated.

Operationally, planned ignoring can involve systematically looking away from the individual, remaining quiet, or refraining from any interaction whatsoever for a specific time (Kee, Hill, & Weist, 1999). Planned ignoring has the advantage of being a nonintrusive time-out procedure that can be applied quickly and conveniently.

For example, let us suppose that during a group therapy session for drug rehabilitation a client begins to express her fascination with stealing money to buy narcotics. If at that point the other members of the group break eye contact and do not respond to her verbalizations in any way until her comments are more consistent with the group’s discussion, then planned ignoring is in effect. In this case, because the members of the group participated in the reductive procedure, it would be termed peer-mediated time-out (Kerr & Nelson, 2002).

Withdrawal of a Specific Positive Reinforcer

Bishop and Stumphauzer (1973) demonstrated that the withdrawal of a specific positive reinforcer contingent on an inappropriate behavior decreased the level of that behavior. In their study, the contingent termination of television cartoons successfully reduced the frequency of thumb sucking in three young children. Unknown to the children, a remote on/off switch was attached to the television. Baseline data indicated that each child emitted a high rate of thumb sucking while viewing the cartoons. During this variation of time-out, the television was immediately turned off when thumb sucking occurred and was turned back on when thumb sucking stopped. The procedure was effective in reducing thumb sucking not only in the treatment location (an office), but also during a story period at school.

West and Smith (2002) reported on a novel group application of this time-out variation. They mounted a facsimile traffic signal light to a cafeteria wall and rigged the light with a sensor to detect noise at various threshold levels. For instance, if an acceptable level of conversation was evident, a green light registered on the traffic signal and prerecorded music played (i.e., the time-in condition was in effect). The music remained available as long as the students engaged in an appropriate level of conversation. As the conversational level increased, the traffic light changed from green to yellow, visibly warning the students that any further increase in noise would lead to red light onset and music loss (i.e., time-out). When the sensor registered noise above a certain decibel threshold, the red light came on and the music was switched off automatically for 10 seconds. Under this group procedure, the inappropriate behavior was reduced.

A group time-out contingency has several advantages. First, it can be used in an existing environment; special provisions for removing students are not needed. Second, an electronic device automatically signals when time-in and time-out are in effect. In the West and Smith study, students soon discriminated that continued inappropriate loud conversation resulted in the loss of the reinforcer (e.g., music).
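At its core, the West and Smith (2002) contingency is a simple threshold-driven rule: noise below a ceiling keeps the green light (and music) on, a middle band produces the yellow warning, and noise above the upper threshold triggers the red light and a 10-second loss of music. The sketch below illustrates that logic only; the decibel values and all function names are hypothetical assumptions for illustration, not details reported in the study.

```python
# Illustrative sketch of the traffic-light group contingency logic
# described by West and Smith (2002). The decibel thresholds and
# names below are hypothetical; only the 10 s music loss is from the text.

GREEN_MAX = 60       # hypothetical ceiling for "acceptable" conversation (dB)
YELLOW_MAX = 70      # hypothetical warning band; above this, time-out begins
TIME_OUT_SECONDS = 10  # music is switched off for 10 s after a red-light episode

def signal_state(decibels: float) -> str:
    """Map a noise reading to the traffic-signal color."""
    if decibels <= GREEN_MAX:
        return "green"    # time-in: music plays
    if decibels <= YELLOW_MAX:
        return "yellow"   # warning: music still plays
    return "red"          # time-out: music off for TIME_OUT_SECONDS

def music_on(decibels: float) -> bool:
    """The group reinforcer (music) is available unless the light is red."""
    return signal_state(decibels) != "red"
```

For example, hypothetical readings of 55, 65, and 75 dB would yield green, yellow, and red states, with the music lost only at the red reading.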

Contingent Observation

In contingent observation, the person is repositioned within an existing setting such that observation of ongoing activities remains, but access to reinforcement is lost. A teacher uses contingent observation when, upon the occurrence of an undesirable behavior, the teacher redirects the offending student to sit away from the group, and reinforcement for a specific time is withheld (Twyman, Johnson, Buie, & Nelson, 1994). In short, the student is told to “sit and watch” (White & Bailey, 1990). When the contingent observation period ends, the student rejoins the group and is able to earn reinforcement for appropriate behavior. Figure 1 shows the effects of White and Bailey’s (1990) use of contingent observation in reducing the number of disruptive behaviors in two inclusive physical education classes. When contingent observation was in effect, the number of disruptive behaviors across two classrooms decreased and remained near zero levels.

Time-Out Ribbon

The time-out ribbon is defined as a colored band that is placed on a child’s wrist and becomes discriminative for receiving reinforcement (Alberto, Heflin, & Andrews, 2002). When the ribbon is on the child’s wrist, she is eligible to earn reinforcers. If the child misbehaves, the ribbon is removed, and all forms of social interaction with the child are terminated for a specific period (e.g., 2 minutes).

Figure 1   Number of disruptive behaviors per 10-minute observation period. The numbers above the data points represent the number of times “sit and watch” was implemented during the class.
From “Reducing Disruptive Behaviors of Elementary Physical Education Students with Sit and Watch” by A. G. White and J. S. Bailey, 1990, Journal of Applied Behavior Analysis, 23, p. 357. Copyright 1990 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Foxx and Shapiro (1978) used the time-out ribbon to reduce the disruptive behavior of four elementary students. Contingent on an inappropriate behavior, the ribbon was removed from the student, and all forms of social interaction with the offender were terminated for 3 minutes. The student was permitted to remain in the room, however. If inappropriate behavior still occurred after 3 minutes, the time-out was extended until the misbehavior ceased. Figure 2 shows that when the time-out ribbon plus reinforcement was in effect, the mean percentage of disruptive behavior was markedly reduced for the four students.

Laraway, Snycerski, Michael, and Poling (2003) discussed the removal of the time-out ribbon in the context of motivating operations (MOs). In their view, removing the time-out ribbon functioned as a punisher because of its relationship to the existing establishing operation (EO). In a retrospective analysis of Foxx and Shapiro’s (1978) study, Laraway and colleagues stated, “Thus, the EOs for the programmed reinforcers in time-in also established the punishing effect of the ribbon loss (i.e., functioned as EOs for ribbon loss as a punishing event) and abated misbehaviors that resulted in ribbon loss” (p. 410). With respect to the role of MOs as they relate to the time-out ribbon, Laraway and colleagues went on to state:

In commonsense terms, losing the opportunity to earn a consequence is only important if you currently “want” that consequence. Therefore, MOs that increase the reinforcing effectiveness of particular objects or events also increase the punishing effectiveness of making those objects or events unavailable (i.e., time-out). . . . a single environmental event can have multiple and simultaneous motivating effects. (p. 410)

Figure 2   The mean percentage of time spent in disruptive classroom behavior by four subjects. The horizontal broken lines indicate the mean for each condition. The arrow marks a 1-day probe (Day 39) during which the time-out contingency was suspended. A follow-up observation occurred on Day 63.
From “The Timeout Ribbon: A Nonexclusionary Timeout Procedure” by R. M. Foxx and S. T. Shapiro, 1978, Journal of Applied Behavior Analysis, 11, p. 131. Copyright 1978 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Exclusion Time-Out

The distinguishing feature of exclusion time-out is that the person is removed from the environment for a specified period, contingent on the occurrence of the targeted inappropriate behavior. Exclusion time-out can be conducted in classroom settings in three ways: (a) The student can be removed to a time-out room, (b) the student can be separated from the rest of the group by a partition, or (c) the student can be placed in the hallway.

Time-Out Room

A time-out room is any confined space outside the participant’s normal educational or treatment environment that is devoid of positive reinforcers and in which the person can be safely placed for a temporary period. The time-out room should preferably be located near the time-in setting and should have minimal furnishings (e.g., a chair and a table). It should have adequate light, heat, and ventilation but should not have other potentially reinforcing features available (e.g., pictures on the wall, telephone, breakable objects). The room should be secure, but not locked.

A time-out room has several advantages that make it attractive to practitioners. First, the opportunity to acquire reinforcement during time-out is eliminated or reduced substantially because the time-out environment is physically constructed to minimize such an occurrence. Second, after a few exposures to the time-out room, students learn to discriminate this room from other rooms in the building. The room assumes conditioned aversive properties, thus increasing the probability that the time-in setting will be viewed as more desirable. Finally, the risk of a student hurting other students in the time-in setting is reduced when the offending student is removed to this space.

However, the practitioner must also weigh several disadvantages of using a time-out room. Foremost is the necessity of escorting the student to the time-out room. From the time the individual is informed that time-out is in effect until the time she is actually placed in the setting, resistance can be encountered. Practitioners should anticipate and be fully prepared to deal with emotional outbursts when using this exclusion time-out variation. In addition, unlike the nonexclusion options mentioned previously, removing an individual from the time-in environment prohibits that individual from access to ongoing academic or social instruction. Missed instructional time should be minimized, and in instances in which time-out has been used excessively and in which it has been shown to serve as a negative reinforcer for the teacher, it should be reconsidered altogether (Skiba & Raison, 1990). Further, in the time-out room the person may engage in behaviors that should be stopped but that go undetected (e.g., self-destructive or self-stimulatory behaviors). Finally, practitioners must be sensitive to the public’s perception of the time-out room. Even the most benign time-out room can be viewed with dread by persons who might be misinformed about its purpose or place in an overall behavior management program.

Partition Time-Out

In partition time-out, the person remains within the time-in setting, but his view within the setting is restricted by a partition, wall, cubicle, or similar structure. A teacher would use partition time-out by directing a student, contingent on a misbehavior, to move from his assigned seat to a location behind an in-class cubicle for a specified time period. Although partition time-out has the advantage of keeping the student within the time-in setting—presumably to hear academic content and the teacher praising other students for appropriate behavior—it also can be disadvantageous. That is, the person may still be able to obtain covert reinforcement from other students. If he does receive reinforcement from students, it is unlikely that the disruptive behavior will decrease. Also, public perceptions must be taken into account with this form of exclusion. Even though the student remains in the room, and the partition time-out area might be called by an innocuous name (e.g., quiet space, office, personal space), some parents may view any type of separation from other members of the class as discriminatory.

Hallway Time-Out

Hallway time-out is a particularly popular method with teachers—and perhaps parents—when dealing with disruptive behavior. In this method, the student is directed to leave the classroom and sit in the hallway. Although it shares the advantages of the variations just mentioned, this approach is not highly recommended for two reasons: (a) The student can obtain reinforcement from a multitude of sources (e.g., students in other rooms, individuals walking in the hallway), and (b) there is increased likelihood of escape if the student is combative on the way to time-out. Even with the door of the classroom open, teachers are often too busy with activities in the time-in setting to monitor the student closely in the hallway. This approach might be more beneficial for younger children who follow directions, but it is clearly inappropriate for any student who lacks basic compliance skills.

Desirable Aspects of Time-Out

Ease of Application

Time-out, especially the nonexclusion variations, is relatively easy to apply. Even physically removing a student from the environment can be accomplished with comparative ease if the teacher acts in a businesslike fashion and does not attempt to embarrass the student. Issuing a direction privately (e.g., “Deion, you have disrupted Monique twice; time-out is now in effect”) can help the teacher handle a student who has misbehaved but who does not want to leave the room. If the behavior warrants time-out, the teacher must insist that the student leave; however, that insistence should be communicated at close range so that the student is not placed in a position of challenging the teacher openly to save face with a peer group. Teachers should consult district policy on whether an administrator should be called to remove the student.

Acceptability

Time-out, especially nonexclusion variations, meets an acceptability standard because practitioners regard it as appropriate, fair, and effective. Even so, practitioners should always check with the appropriate administering body to ensure compliance with agency policy before applying time-out for major or minor infractions.

Rapid Suppression of Behavior

When effectively implemented, time-out usually suppresses the target behavior in a moderate-to-rapid fashion. Sometimes only a few applications are needed to achieve an acceptable level of reduction. Other reductive procedures (e.g., extinction, differential reinforcement of low rates) also produce decreases in behavior, but they can be time-consuming. Many times the practitioner does not have the luxury of waiting several days or a week for a behavior to decrease. In such instances, time-out merits strong consideration.

Combined Applications

Time-out can be combined with other procedures, extending its usability in applied settings. When it is combined with differential reinforcement, desirable behavior can be increased and undesirable behavior can be decreased (Byrd, Richards, Hove, & Friman, 2002).

Using Time-Out Effectively

Effective implementation of time-out requires that the practitioner make several decisions prior to, during, and after time-out implementation. Figure 3 shows Powell and Powell’s (1982) time-out implementation checklist, which can aid in the decision-making process. The following sections expand on the major points underlying many of these decisions.

Figure 3 Implementation checklist for time-out.

Step / Task (the form provides blanks for Date Completed and Teacher’s Initials at each step):

1. Try less aversive techniques and document results.
2. Operationally define disruptive behaviors.
3. Record baseline on target behaviors.
4. Consider present levels of reinforcement (strengthen if necessary).
5. Decide on time-out procedure to be used.
6. Decide on time-out area.
7. Decide on length of time-out.
8. Decide on command to be used to place child in time-out.
9. Set specific criteria that will signal discontinuance of time-out.
10. Set dates for formal review of the time-out procedure.
11. Specify back-up procedures for typical time-out problems.
12. Write up the entire procedure.
13. Have the procedure reviewed by peers and supervisors.
14. Secure parental/guardian approval and include the written program in the child’s IEP.
15. Explain procedure to the student and class (if appropriate).
16. Implement the procedure, take data, and review progress daily.
17. Formally review procedure as indicated.
18. Modify the procedure as needed.
19. Record results for future teachers/programs.

From “The Use and Abuse of Using the Timeout Procedure for Disruptive Pupils” by T. H. Powell and I. Q. Powell, 1982, The Pointer, 26(2), p. 21. Reprinted by permission of the Helen Dwight Reid Educational Foundation. Published by Heldref Publications, 1319 Eighteenth St., N.W., Washington DC 20036-1802. Copyright © 1982.


Reinforcing and Enriching the Time-In Environment

The time-in environment must be reinforcing if time-out is to be effective. In making the time-in environment reinforcing, the practitioner should seek ways to reinforce behaviors that are alternative or incompatible with behaviors that lead to time-out (e.g., using differential reinforcement of alternative behavior, differential reinforcement of incompatible behavior). Differential reinforcement will facilitate the development of appropriate behaviors. Additionally, upon return from time-out, reinforcement for appropriate behavior must be delivered as quickly as possible.

Defining Behaviors Leading to Time-Out

Before time-out is implemented, all of the appropriate parties must be informed of the behaviors that will lead to time-out. If a teacher decides to use time-out, she should describe in explicit, observable terms those behaviors that will result in time-out. For instance, merely informing students that disruptive behavior will result in time-out is not sufficient. Providing specific examples and nonexamples of what is meant by disruptive behavior is a better course to follow. Readdick and Chapman (2000) discovered in post-time-out interviews that children are not always aware of the reasons time-out was implemented. Providing explicit examples and nonexamples addresses this problem.

Defining Procedures for the Duration of Time-Out

In most applied settings such as schools and residential and day treatment centers, the initial duration of time-out should be short. A period of 2 to 10 minutes is sufficient, although a short duration may be ineffective initially if an individual has had a history of longer time-out periods. As a rule, time-out periods exceeding 15 minutes are not likely to be effective. Further, longer time-out periods are counterproductive for several reasons. First, the person may develop a tolerance for the longer duration and find ways to obtain reinforcement during time-out. This situation is likely to occur with people with a history of self-stimulatory behavior. The longer the duration of time-out, the more opportunity there is to engage in reinforcing activities (e.g., self-stimulation) and the less effective time-out becomes. Second, longer time-out periods remove the person from the educational, therapeutic, or family time-in environment in which the opportunity to learn and earn reinforcers is available. Third, given the undesirable practical, legal, and ethical aspects of longer time-out periods, a prudent initial course of action is for the practitioner to use relatively short, but consistent, time-out periods. When time-out periods are short, student academic achievement is not affected adversely (Skiba & Raison, 1990).

Defining Exit Criteria

If a person is misbehaving when time-out ends, it should be continued until the inappropriate behavior ceases (Brantner & Doherty, 1983). Thus, the decision to terminate time-out should not be based exclusively on the passage of time; an improved behavioral condition should also be used as the ultimate criterion for ending time-out. Under no conditions should time-out be terminated if any inappropriate behavior is occurring.

If the practitioner anticipates that the inappropriate behavior that led to time-out may occur at the point of scheduled termination, two strategies can be tried. First, the practitioner can inform the person that the scheduled time-out period (e.g., 5 minutes) will not begin until the inappropriate behavior ceases. The second alternative is to simply extend the time until the disruptive behavior stops.
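The exit criteria described in this section amount to a simple conjunction: time-out ends only when the scheduled interval has elapsed and no inappropriate behavior is currently occurring; otherwise the period is extended. A minimal sketch of that decision rule follows; the function name and the 5-minute example are illustrative, not part of the cited studies.

```python
def may_end_time_out(elapsed_seconds: float,
                     scheduled_seconds: float,
                     misbehaving_now: bool) -> bool:
    """Hypothetical decision rule for terminating time-out.

    Time-out ends only when the scheduled time has elapsed AND the
    inappropriate behavior is not occurring at that moment; otherwise
    the period is extended (cf. Brantner & Doherty, 1983).
    """
    return elapsed_seconds >= scheduled_seconds and not misbehaving_now

# A scheduled 5-minute (300 s) time-out:
# - at 300 s with ongoing misbehavior, time-out is extended
# - at 300 s with no misbehavior, time-out may end
```

Note that passage of time alone is never sufficient: both conditions in the conjunction must hold before the person returns to the time-in setting.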

Deciding on Nonexclusion or Exclusion Time-Out

School board policy or institutional constraints may set the parameters for the time-out variations that can be used in applied settings. In addition, physical factors within the building (e.g., a lack of available space) may prohibit exclusionary forms of time-out. In the main, nonexclusion time-out is the preferred method.

Explaining the Time-Out Rules

In addition to posting the behaviors that will lead to time-out, the teacher should also state the rules. Minimally, these rules should focus on the initial duration of time-out and the exit criteria (i.e., rules that determine when time-out is over and what happens if ongoing inappropriate behavior occurs when time-out is over).

Obtaining Permission

One of the most important tasks a practitioner must perform prior to using time-out, especially the exclusionary variations, is to obtain permission. Because of the potential to misuse time-out (e.g., leaving the person in time-out too long, continuing to use time-out when it is not effective), practitioners must obtain administrative approval before employing it. However, interactions happen so fast in most applied settings that obtaining permission on an instance-by-instance basis would be unduly cumbersome. A preferred method is for the practitioner, in cooperation with administrators, to decide beforehand the type of time-out that will be implemented for certain offenses (e.g., nonexclusion versus exclusion), the duration of time-out (e.g., 5 minutes), and the behaviors that will demonstrate reinstatement to the time-in environment. Communicating these procedures and/or policies to parents is advisable.

Applying Time-Out Consistently

Each occurrence of the undesirable behavior should lead to time-out. If a teacher, parent, or therapist is not in a position to deliver the time-out consequence after each occurrence of the target behavior, it may be better to use an alternative reductive technique. Using time-out occasionally may lead to student or client confusion about which behaviors are acceptable and which are not.

Evaluating Effectiveness

Universally, educators and researchers call for the regular evaluation of the use of time-out in applied settings (Reitman & Drabman, 1999). At the very least, data need to be obtained on the inappropriate behavior that initially led to time-out. If time-out was effective, the level of that behavior should be reduced substantially, and that reduction should be noticed by other persons in the environment. For legal and ethical reasons, additional records should be kept documenting the use of time-out; the duration of each time-out episode; and the individual’s behavior before, during, and after time-out (Yell, 1994).

In addition to collecting data on the target behavior, it is sometimes beneficial to collect data on collateral behaviors, unexpected side effects, and the target behavior in other settings. For instance, time-out may produce emotional behavior (e.g., crying, aggressiveness, withdrawal) that might overshadow positive gains and spill over into other settings. Keeping a record of these episodes is helpful. Also, Reitman and Drabman (1999) suggested that for home-based applications of time-out, parents keep a record of the date, time, and duration of time-out as well as a brief anecdotal description of the effects of time-out. When reviewed with skilled professionals, these anecdotal logs provide a session-by-session profile of performance, making adjustments to the time-out procedure more informed. In their view,

The timeout record and discussions centered around it yielded a wealth of information including the type, frequency, context, and rate of misbehavior. . . . Graphing these data provided visual feedback to both the therapists and the clients regarding the extent to which behavior change had occurred and facilitated the setting of new goals (p. 143).

1 We additionally recommend consideration of the Gaylord-Ross decision-making model as an option for when to use punishment procedures, including time-out (see Gaylord-Ross, 1980).

Considering Other Options

Although time-out has been shown to be effective in reducing behavior, it should not be the method of first choice. Practitioners faced with the task of reducing behavior should initially consider extinction or positive reductive procedures (e.g., differential reinforcement of other behavior, differential reinforcement of incompatible behavior).1 Only when these less intrusive procedures have failed should time-out be considered.

Legal and Ethical Time-Out Issues

The use of time-out in applied settings has been much discussed. Although litigation surrounding the use of this procedure has focused primarily on institutionalized populations (e.g., Morales v. Turman, 1973; Wyatt v. Stickney, 1974), the rulings in these cases have had a profound effect on the use of time-out in other settings. The issues before the courts focus on the protection of client rights, whether time-out represents cruel and unusual punishment, and the degree of public acceptability of the time-out procedure.

The upshot of major court rulings has been that a person's right to treatment includes the right to be free from unnecessary and restrictive isolation. However, rulings also include language that permits the use of time-out in a behavior change program as long as the program is closely monitored and supervised and is designed to protect the individual or others from bodily harm.

At least two conclusions have been drawn from the court rulings. First, removing a person to a locked room is considered illegal unless it can be demonstrated that the seclusion is part of an overall treatment plan and unless the program is carefully and closely monitored. Second, the duration of time-out is intended to be brief (i.e., less than 10 minutes). Extensions can be obtained, but only from a duly constituted review committee.

Definition of Response Cost

Teachers who need to reduce inappropriate behavior may find response cost a functional alternative to time-out from positive reinforcement because it is quick, avoids confrontations with students, and offers a treatment option that may be shorter than other reductive procedures (e.g., extinction).

Response cost is a form of punishment in which a specific amount of reinforcement is lost, contingent on an inappropriate behavior, resulting in a decreased probability of the future occurrence of that behavior. As a form of negative punishment, response cost is classified and defined by function. Specifically, if the future frequency of the punished behavior is reduced by the response-contingent withdrawal of a positive reinforcer, then response cost has occurred. However, if the removal of the reinforcer increases the level of the behavior or has no effect on it, response cost has not occurred. This distinction between application and effect is the key aspect of the definition of response cost.

Response cost occurs any time a teacher reduces the number of minutes of recess, reclaims previously earned or awarded stickers, or otherwise "fines" the student for an infraction, with the result being a decrease in the future frequency of the targeted behavior. Each occurrence of the inappropriate behavior results in the loss of a specific amount of positive reinforcement already held by the individual. Response cost usually involves the loss of generalized conditioned reinforcers (e.g., money, tokens), tangibles (e.g., stickers), or activities (e.g., minutes of listening to music) (Keeney, Fisher, Adelinis, & Wilder, 2000; Musser, Bray, Kehle, & Jenson, 2001).

Desirable Aspects of Response Cost

Response cost has several features that make it desirable to use in applied settings: its moderate-to-rapid effects on decreasing behavior, its convenience, and its ability to be combined with other procedures.

Moderate-to-Rapid Decrease in Behavior

Like other forms of punishment, response cost usually produces a moderate-to-rapid decrease in behavior. The practitioner does not have to wait long to determine the suppressive effects of response cost. If the procedure is going to be effective in reducing behavior, a two- to three-session trial period is usually sufficient to note the effect.

Convenience

Response cost is convenient to implement and can be used in a variety of school- and home-based settings (Ashbaugh & Peck, 1998; Musser et al., 2001). For example, if students are expected to follow classroom procedures, rules, or contracts, then any rule infraction will produce a fine. Each occurrence of a misbehavior means that an explicit fine will be levied and that a positive reinforcer will be lost. The fine further signals that future occurrences of the misbehavior will result in the same consequence.

Hall and colleagues (1970) found that the response-contingent removal of slips of paper bearing the student's name had a reductive effect on the number of complaints an emotionally disturbed boy emitted. Prior to a reading and math session, the teacher placed five slips of paper bearing the boy's name on his desk. Each time the boy cried, whined, or complained during reading or math, a slip was removed. Figure 4 shows that when response cost was in effect, disruptive behaviors decreased markedly; when response cost was not in effect, these behaviors increased.

Keeney and colleagues (2000) compared response cost with baseline conditions and noncontingent reinforcement after conducting a functional analysis of the destructive behavior of a 33-year-old woman with severe developmental disabilities. The purpose of the functional analysis was to determine the conditions that maintained the behavior. Figure 5 (top panel) shows that the demand (escape) and attention conditions produced the highest levels of destructive behavior. The bottom panel shows a baseline condition during which compliance with requests was praised and episodes of destructive behavior resulted in a 30-second break from the task at hand (escape). During noncontingent reinforcement, caretaker proximity, positive statements, or music was available to the woman. During response cost, the music or attention was available at the beginning of a session. However, any episode of destructive behavior produced an immediate 30-second loss of music or attention.

Overall, the data show an ascending baseline, with mean percentages averaging approximately 30%. Noncontingent music did not result in a reduction in destructive behavior. However, response cost (withdrawal of music for 30 seconds) produced an immediate and replicable change in destructive behavior. An important key to this study was that the functional analysis provided the evidence of the controlling variable (demand), and a prior reinforcer assessment had identified music as a preferred reinforcer. In effect, the reinforcer preference assessment served as a mechanism to develop the response cost program as a treatment for the escape-maintained destructive behavior.

At home, response cost is convenient to the extent that it can be incorporated into an existing allowance program. For instance, if a child earns an allowance of $7 per week ($1 per day) for cleaning his room and placing his dishes in the sink after eating, noncompliance with one or both of these behaviors would lead to a monetary loss (e.g., 50¢ per infraction). In both home and school settings a response cost chart would provide the child feedback on his status, while making the visual notation of progress convenient for teachers and parents (Bender & Mathes, 1995).

Figure 4 The number of cries, whines, and complaints by Billy during the 30-minute reading and arithmetic periods. From "Modification of Disrupting and Talking-Out Behavior with the Teacher as Observer and Experimenter" by R. V. Hall, R. Fox, D. Williard, L. Goldsmith, M. Emerson, M. Owen, E. Porcia, and R. Davis, 1970, paper presented at the American Educational Research Association Convention, Minneapolis. Reprinted with permission.

Figure 5 Rates of destructive behavior during functional analysis (top panel) and treatment analysis (bottom panel) of noncontingent reinforcement (NCR) and response cost. From "The Effects of Response Cost in the Treatment of Aberrant Behavior Maintained by Negative Reinforcement" by K. M. Keeney, W. W. Fisher, J. D. Adelinis, and D. A. Wilder, 2000, Journal of Applied Behavior Analysis, 33, p. 257. Copyright 2000 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Figure 6 shows a sample response cost chart as it might be used in a classroom or tutorial setting. To implement the response cost procedure, the teacher writes a column of decreasing numbers on the board. Whenever a disruptive behavior occurs, the teacher crosses off the highest remaining number. If the teacher places the numbers 15 through 0 on the board and three disruptive behaviors occur, the students have 12 minutes of free time left that day.

Figure 6 A sample response cost chart showing the total number of minutes of free time remaining after three rule infractions.

Figure 7 Number of 15-minute intervals of disturbed sleep per 24-hour period during baseline and faded bedtime with response cost. From "Treatment of Sleep Problems in a Toddler" by R. Ashbaugh and S. Peck, 1998, Journal of Applied Behavior Analysis, 31, p. 129. Copyright 1998 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.
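The chart arithmetic amounts to a simple countdown. Here is a minimal sketch of that rule; the function name and the clamp-at-zero behavior are our own illustrative choices, not part of the text.

```python
def free_time_remaining(starting_minutes, infractions):
    """Model the response cost chart: the board lists starting_minutes,
    starting_minutes - 1, ..., 0, and each infraction crosses off the
    highest remaining number. The highest number left is the free time
    still available."""
    remaining = starting_minutes - infractions
    return max(remaining, 0)  # the chart bottoms out at 0

print(free_time_remaining(15, 3))  # 12 minutes left, as in the example
```

With 15 at the top and three numbers crossed off (15, 14, 13), the highest remaining number is 12, matching the example in the text.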

Combining with Other Approaches

Response cost can be combined with other behavior change procedures (Long, Miltenberger, Ellingson, & Ott, 1999). For instance, response cost was combined with a faded bedtime procedure to treat sleep problems (Ashbaugh & Peck, 1998; Piazza & Fisher, 1991). In the Ashbaugh and Peck (1998) study, a systematic replication of Piazza and Fisher (1991), two principal phases were examined: baseline and faded bedtime plus response cost. During baseline, the typical family ritual of placing the child in bed was followed. That is, she was placed in bed when she appeared tired, and no attempts were made to awaken her if she slept during the day when she was supposed to be awake or if she was awake at times when she was supposed to be asleep (e.g., the middle of the night). If the child awoke during the night, she typically went to her parents' room and slept with them.

During faded bedtime plus response cost, an average "fall-asleep" time was calculated based on the baseline data; this time was adjusted each night as a function of the previous night's fall-asleep time. For instance, an average sleep time during baseline may have been 10:30 P.M., adjusted to 10:00 P.M. if she fell asleep within 15 minutes of the 10:30 P.M. time the night before. Response cost consisted of having the child get out of her bed for 30 minutes if she was not asleep within 15 minutes of being placed in bed. She was kept awake for the 30 minutes thereafter by playing with toys or talking to her parents. Also, if the child awoke during the night and attempted to sleep in her parents' bed, she was escorted back to her own bed.
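The nightly bedtime adjustment can be sketched as a small rule. Note the hedges: the text gives only the "move earlier if asleep within 15 minutes" direction and a 30-minute step; the symmetric "move later" branch below is our assumption, modeled on typical faded-bedtime protocols, and the function name is ours.

```python
from datetime import datetime, timedelta

def next_bedtime(bedtime, minutes_to_sleep_onset, step_min=30, criterion_min=15):
    """Advance bedtime when the child fell asleep quickly the night
    before; otherwise delay it (assumed symmetric step)."""
    step = timedelta(minutes=step_min)
    if minutes_to_sleep_onset <= criterion_min:
        return bedtime - step  # e.g., 10:30 P.M. becomes 10:00 P.M.
    return bedtime + step  # assumption: delay by the same step

tonight = datetime(2024, 1, 1, 22, 30)  # a hypothetical 10:30 P.M. bedtime
print(next_bedtime(tonight, 10).strftime("%I:%M %p"))  # 10:00 PM
```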

Figure 7 shows that during baseline, 15-minute intervals of disturbed sleep averaged 24 intervals per 24-hour period. However, when faded bedtime plus response cost was implemented, the number of 15-minute intervals of disturbed sleep fell to approximately 3. The child's parents reported a better night's sleep of their own, and gains were maintained a year later.

Punishment by Removal of a Stimulus

384

Response cost has been combined with differential reinforcement of alternative behavior (DRA) to treat the food refusal behavior of a 5-year-old child with developmental delay. Kahng, Tarbox, and Wilke (2001) replicated the Keeney and colleagues (2000) study, except that response cost was combined with DRA. During a baseline condition, trials across various food groups were presented every 30 seconds. If food was accepted, praise was delivered. If food was refused, the behavior was ignored. Based on reinforcer assessments, during DRA and response cost, books and audiotapes were made available at the meal. If food was accepted, praise was delivered. If food was refused, the books and audiotapes were removed for 30 seconds. The books and audiotapes were replaced during the next trial contingent on accepting food (i.e., DRA). Parents and grandparents were also trained in the DRA plus response cost procedures. Results showed a substantial increase in the percentage of intervals of food acceptance and a dramatic decrease in the percentage of intervals of food refusal and disruption.

In sum, the findings of studies directly combining response cost with other procedures have been positive. In any case, whether response cost is used alone or in combination with other procedures, practitioners who provide ample reinforcement during time-in occasions are more likely to be successful. Stated differently, response cost should not be used as the sole approach to modifying behavior because response cost, like any punishment procedure, does not teach new behavior. Combining response cost procedures with behavior-building approaches (e.g., DRA) is a good plan.

Response Cost Methods

Response cost can be implemented in four ways: as a direct fine, as a bonus response cost, combined with positive reinforcement, and within a group arrangement.

Fines

Response cost can be implemented by directly fining the person a specific amount of positive reinforcers. A student who loses 5 minutes of free time for each occurrence of noncompliant behavior is an example. In this case, response cost is applied to stimuli known to be reinforcing and available to the person. In some situations, however, removing unconditioned and conditioned reinforcers (e.g., food, free time) from a person would be considered legally and ethically inappropriate or undesirable. For instance, it would be inadvisable to take structured leisure or free time from a person with developmental disabilities. To avoid any potential problems, practitioners must obtain permission from the local human rights review committee or modify the response cost contingency.

Bonus Response Cost

Practitioners can make additional reinforcers available noncontingently to the participant, specifically for removal with a response cost contingency. For example, if the school district normally allocates 15 minutes of recess each morning for students at the elementary level, in a bonus response cost method students would have an additional 15 minutes of recess available, subject to removal in prespecified amounts if classroom rules are violated. The students retain the regularly scheduled recess, but each infraction or misbehavior reduces their bonus minutes by a certain number (e.g., 3 minutes). So, if a student emitted three misbehaviors, his recess that day would consist of his regularly scheduled time (i.e., 15 minutes) plus 6 remaining bonus minutes. As students improve, the number of available bonus minutes can be gradually reduced.
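The bonus response cost arithmetic above can be sketched as follows. The specific values (15 regular minutes, 15 bonus minutes, a 3-minute fine) come from the example in the text; the function and parameter names are our own.

```python
def recess_minutes(infractions, regular=15, bonus=15, fine=3):
    """Bonus response cost: fines are deducted from the bonus pool only;
    the regularly scheduled recess is never touched."""
    bonus_left = max(bonus - fine * infractions, 0)  # pool cannot go negative
    return regular + bonus_left

print(recess_minutes(3))  # 15 regular + 6 remaining bonus = 21 minutes
```

Note the design point the text emphasizes: even a student who exhausts the entire bonus pool still receives the full regularly scheduled recess.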

Combining with Positive Reinforcement

Response cost can be combined with positive reinforcement. For example, a student might earn tokens for improved academic performance and simultaneously lose tokens for instances of inappropriate behavior. In another instance, a student could receive a point, a star, or a sticker for each of 10 academic assignments completed during the morning but lose a point, a star, or a sticker for each of 6 occurrences of calling out. In this scenario, the student would net a total of 4 points for that day (i.e., 10 points earned for academic performance minus 6 points for calling out equals a net of 4 points).
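The earn/lose ledger in that scenario is straightforward bookkeeping. A minimal sketch, with the helper name and per-event values as illustrative assumptions:

```python
def net_points(earned_events, fined_events, earn_value=1, fine_value=1):
    """Points earned for completed work minus points lost to response
    cost; the net balance is what can later be exchanged for backup
    reinforcers."""
    return earned_events * earn_value - fined_events * fine_value

print(net_points(10, 6))  # 10 earned - 6 lost = 4 points for the day
```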

Musser and colleagues (2001) combined response cost and positive reinforcement to improve the compliant behavior of three students with emotional disturbance and oppositional defiant behavior. Briefly, the procedures involved the teacher making a normal request for compliance (e.g., "Take out your book, please"). If the student complied within 5 seconds, praise was delivered. If a 30-minute period expired and no episodes of noncompliance were recorded, a sticker was delivered to the student. If the student failed to comply with a request within 5 seconds, the teacher waited 5 seconds and issued another directive (e.g., "Take out your book"). The teacher then waited an additional 5 seconds for compliance. If the student complied, praise was issued. If not, one sticker was deducted for noncompliance. Stickers were later exchanged for a "mystery motivator" (e.g., a backup reinforcer from a grab bag). Results for all three students showed marked decreases in noncompliant behavior. Thus, a dual system, which was user friendly and convenient, proved to be effective insofar as the students earned stickers for intervals of compliant behavior but lost stickers for noncompliant behavior (response cost).

This third method of response cost has at least two advantages: (a) if all of the tokens or points are not lost via response cost, the remaining ones can be traded for backup reinforcers, thus adding a reinforcement component that is incorporated into many programs; and (b) reinforcers can be resupplied by performing appropriate behavior, thereby reducing legal and ethical concerns. The following vignette illustrates how a father might implement a bonus response cost procedure with his two sons to reduce their fighting at dinnertime.

Father: Boys, I'd like to talk to you.

Tom and Pete: OK.

Father: We have to find a way to stop the fighting and squabbling at dinnertime. The way you two go at each other upsets everyone in the family, and I can't eat my dinner in peace.

Tom: Well, Pete usually starts it.

Pete: I do not!

Tom: You do, too . . .

Father: (Interrupting) That's enough! This is just what I mean, nit-picking at each other. It's ridiculous and it needs to stop. I've given it some thought, and here's what we're going to do. Since each of you earns an allowance of $5 per week for doing household chores, I was tempted to take it away from you the next time either of you got into a fight. But I've decided against doing that. Instead, I'm going to make an additional $5 available to each of you, but for each sarcastic comment or squabble you get into at the dinner table, you'll lose $1 of that extra money. So if you act properly, you'll get an additional $5. If you have two fights, you'll lose $2 and get only $3. Do you understand?

Pete: How about if Tom starts something and I don't?

Father: Whoever does any fighting or squabbling loses $1. If Tom starts something and you ignore him, only he loses. Tom, the same thing holds true for you. If Pete starts something and you ignore him, only Pete loses. One more thing. Dinnertime starts when you are called to the table; it ends when you are excused from the table. Any other questions?

Tom: When do we begin?

Father: We'll begin tonight.

It is important to note that the father described the contingencies completely to his sons. He told them what would happen if no fights occurred, and he told them what would happen if one sibling attempted to start a fight but the other ignored him. In addition, he clearly defined when dinnertime would begin and end. Each of these explanations was necessary for a complete description of the contingencies. Presumably, the father also praised his sons for their appropriate mealtime behaviors so that those behaviors were reinforced and strengthened.

Combining with Group Consequences

The final way that response cost can be implemented involves group consequences. That is, contingent on the inappropriate behavior of any member of a group, the whole group loses a specific amount of reinforcement. The Good Behavior Game serves as an illustration of how response cost can be applied to groups and presents this method of response cost in more depth.

Using Response Cost Effectively

To use response cost effectively, the behavior targeted for response cost and the amount of the fine need to be explicitly stated. Further, any rules that would apply for refusal to comply with the response cost procedure need to be explained. The definition of each target behavior should include its corresponding point loss. In situations in which multiple behaviors are subject to the response cost contingency, or in which the degree of severity of the behavior determines the response cost, correspondingly greater fines should be associated with more severe behaviors. In effect, the magnitude of the punishment (response cost) should fit the offense.

According to Weiner (1962), the originator of the term response cost, the magnitude of the response cost fine is important. As the magnitude of the fine increases, larger reductions in the rate of the undesirable behavior are likely to occur. However, if the loss of reinforcers is too great and too rapid, the rate of reduction of the inappropriate target behavior may be affected adversely. The person will soon exhaust his or her supply of reinforcers, and a deficit situation will exist. As a general rule, the fine should be large enough to suppress the future occurrence of the behavior, but not so large as to bankrupt the person or cause the system to lose its effectiveness. In short, if a client loses tokens or other reinforcers at too high a rate, he may give up and become passive or aggressive, and the procedure will become ineffective. Furthermore, fines should not be changed arbitrarily. If a 5-minute loss of recess is imposed for each noncompliant behavior in the morning, the teacher should not impose a 15-minute fine on the same noncompliant behavior in the afternoon.

Determining the Immediacy of the Fines

Ideally, the fine should be imposed immediately after the occurrence of each undesirable behavior. The more quickly response cost is applied after the occurrence of the behavior, the more effective the procedure becomes. For example, Ashbaugh and Peck (1998) successfully applied a response cost contingency with a young toddler with sleep disruptive behavior immediately after a failure-to-sleep interval passed without sleep onset.

Response Cost or Bonus Response Cost?

Deciding which variation of response cost will be most effective in reducing behavior is usually an empirical question. However, three considerations can help practitioners. First, the least aversive procedure should be attempted initially. Consistent with the principle of the least restrictive alternative, an effort should be made to ensure that the minimum loss of reinforcers occurs for the minimum amount of time. Bonus response cost may be the less aversive of the two variations because the reinforcers are not deducted directly from the person; instead, they are lost from a pool of potentially available (i.e., bonus) reinforcers.

The second consideration is similar to the first and can be stated in the form of a question: What is the potential for aggressive, emotional outbursts? From a social validity perspective, students (and their parents) would probably find it more agreeable to lose reinforcers from an available reserve than from their earnings. Consequently, a bonus response cost procedure might be less likely to spark aggressive or emotional outbursts, or to be offensive to students or their parents.

A third consideration is the need to reduce the behavior quickly. Combative or noncompliant behavior may be more appropriately suppressed with response cost because the contingency directly reduces the student's available reinforcers. The response-contingent withdrawal of reinforcers serves to reduce the behavior swiftly and markedly.

Ensuring Reinforcement Reserve

Positive reinforcers cannot be removed from a person who does not have any. Prior to using response cost, the practitioner must ensure a sufficient reinforcer reserve. Without such a reserve, the procedure is unlikely to be successful. For example, if a teacher used response-contingent withdrawal of free time for each occurrence of inappropriate behavior in a highly disruptive class, the students could exhaust all of their available free time before the end of the first hour, leaving the teacher wondering what to do for the remainder of the day. Deducting free time from succeeding days would hardly be beneficial.

Two suggestions can reduce the likelihood of having no reinforcers available. First, the ratio of points earned to points lost can be managed. If baseline data indicate that the inappropriate behavior occurs at a high rate, more reinforcers can be programmed for removal. Also, determining the magnitude of the fine, and stating it explicitly beforehand, is helpful. Minor infractions may warrant relatively small fines, whereas major infractions may warrant substantially higher fines. Second, if all reinforcers are lost and another inappropriate behavior occurs, time-out might be applied. Then, when reinforcers have again been earned for appropriate behavior, response cost can be reinstituted.

To establish the initial number of available reinforcers, practitioners collect baseline data on the occurrence of inappropriate behavior during the day or session. The number of reinforcers can then be set above the mean baseline figure to ensure that all reinforcers are not lost when response cost is in effect. Although no empirically verified guidelines are available, a prudent approach is to increase the number of reinforcers 25% above the mean number of occurrences during baseline. For example, if baseline data indicate that the mean number of disruptions per day is 20, the practitioner might establish 25 minutes of free time (the positive reinforcer) as the initial level (i.e., 20 × 1.25). If the practitioner uses points instead of minutes, she could add an additional 10 to 20 points to ensure an adequate buffer.
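The 25%-above-baseline buffer can be sketched as a one-line calculation. The rounding-up choice for non-integer results is our own assumption; the text gives only the whole-number example of 20 × 1.25 = 25.

```python
import math

def initial_reserve(baseline_mean, buffer=0.25):
    """Start with enough reinforcers that a typical day's fines cannot
    exhaust the supply: mean baseline occurrences plus a 25% buffer,
    rounded up (rounding is an assumption)."""
    return math.ceil(baseline_mean * (1 + buffer))

print(initial_reserve(20))  # 25 minutes of free time, as in the example
```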

Recognizing the Potential for Unplanned or Unexpected Outcomes

Two situations may require the implementation of a contingency plan. One occurs when the repeated imposition of response cost serves to reinforce, rather than punish, the undesirable behavior. When this situation arises, the practitioner should stop using response cost and switch to another reductive procedure (e.g., time-out or DRA). The second situation occurs when the person refuses to give up her positive reinforcers. To reduce the likelihood of this event, the practitioner should clarify the consequences of such refusal beforehand and (a) make sure that an adequate supply of backup reinforcers is available, (b) impose an additional penalty fine for not giving up the reinforcer (e.g., the sticker) (Musser et al., 2001), and/or (c) reimburse the person with some fractional portion of the fine for complying with immediate payment.

Avoiding the Overuse of Response Cost

Response cost should be saved for those major undesirable behaviors that call attention to themselves and need to be suppressed quickly. The teacher's or parent's primary attention should always be focused on positive behavior to reinforce; response cost should be a last resort and should be combined with other procedures to build adaptive behavior.

Keeping Records

Each occurrence of response cost and the behavior that occasioned it should be recorded. Minimally, the analyst should record the number of times fines are imposed, the persons to whom fines are issued, and the effects of the fines. Collecting and graphing these data daily allows the behavior analyst to determine the suppressive effect, and thus the efficacy, of the response cost procedure.

Response Cost Considerations

Increased Aggression

Response-contingent withdrawal of positive reinforcers may increase student verbal and physical aggressiveness. The student who loses several tokens, especially within a short time, may verbally or physically assault the teacher. Emotional behaviors that accompany the implementation of response cost should be ignored whenever possible (Walker, 1983). Still, teachers should anticipate this possibility and (a) preempt a decision to use response cost if they suspect that a worse condition will result in the aftermath of an emotional episode, or (b) be prepared to "ride out the storm."

Avoidance

The setting in which response cost occurs or the person who administers it can become a conditioned aversive stimulus. If this situation occurs in school, the student may avoid the school, the classroom, or the teacher by being absent or tardy. A teacher can reduce the likelihood of becoming a conditioned aversive stimulus by contingently delivering positive reinforcement for appropriate behavior.

Collateral Reductions of Desired Behavior

The response-contingent withdrawal of positive rein-forcers for one behavior can affect the frequency of other behaviors as well. If the teacher fines Shashona 1 minute of recess each time she calls out during mathclass, the response cost procedure may reduce not onlyher call-outs but also her math productivity. Shashonamay say to the teacher, “Since I lost my recess time forcalling out, I am not going to do my math.” She couldalso just become passive-aggressive by just sitting in herseat with arms folded and not work. Teachers and otherpractitioners should anticipate such collateral behaviorsand clearly explain the response cost rules, reinforce otherclassmates as models of appropriate behavior, and avoidface-to-face confrontations.

Calling Attention to the Punished Behavior

Response cost calls attention to the undesirable behavior. That is, upon the occurrence of an inappropriate behavior, the student is informed of the reinforcer loss. The teacher's attention, even in the form of notification of reinforcer loss, could serve as a reinforcing consequence; in effect, her attention may increase the frequency of future misbehavior. For instance, a teacher may have difficulty with some students because every mark that the teacher places on the chalkboard indicates that a positive reinforcer has been lost (e.g., minutes of free time); each mark also calls attention to the undesirable behavior and may inadvertently reinforce it. In such a situation, the teacher should change tactics, perhaps combining response cost with time-out. Also, to counteract the possibility of calling attention to inappropriate behavior, practitioners should ensure that the ratio of reinforcement to response cost contingencies favors reinforcement.

Unpredictability

As with other forms of punishment, side effects of response cost can be unpredictable. The effects of response cost seem to be related to a number of variables that are not fully understood and have not been well investigated across participants, settings, or behaviors. These variables include the magnitude of the fine, the previous punishment and reinforcement history of the individual, the frequency with which behaviors are fined, and the availability of alternative responses that are eligible for reinforcement.

Punishment by Removal of a Stimulus

Summary

Definition of Time-Out

1. Time-out from positive reinforcement, or simply time-out, is defined as the withdrawal of the opportunity to earn positive reinforcement or the loss of access to positive reinforcers for a specified time, contingent on the occurrence of a behavior.

2. Time-out is a negative punisher, and it has the effect of reducing the future frequency of the behavior that preceded it.

Time-Out Procedures for Applied Settings

3. There are two basic types of time-out: nonexclusion and exclusion.

4. Within the nonexclusion type, planned ignoring, withdrawal of a specific positive reinforcer, contingent observation, and time-out ribbons are the main methods.

5. Within exclusion time-out, the time-out room, partition time-out, and hallway time-out serve as the principal methods.

6. Time-out is a desirable alternative for reducing behavior because of its ease of application, acceptability, rapid suppression of behavior effects, and ability to be combined with other approaches.

Using Time-Out Effectively

7. The time-in environment must be reinforcing if time-out is to be effective.

8. Effective use of time-out requires that the behaviors leading to, the duration of, and exit criteria for time-out be explicitly stated. Further, practitioners must decide whether to use nonexclusion or exclusion time-out.

9. In most applications, permission is required before time-out can be implemented.

10. Practitioners should be aware of legal and ethical considerations before implementing time-out. As a punishment procedure, it should be used only after positive reductive procedures have failed, and with planned monitoring, supervision, and evaluation considerations in place.

Definition of Response Cost

11. Response cost is a form of punishment in which the loss of a specific amount of reinforcement occurs, contingent on the performance of an inappropriate behavior, and results in the decreased probability of the future occurrence of the behavior.

12. Response cost can be an attractive procedure for practitioners because of its moderate-to-rapid suppression of behavior, its convenience, and its ability to be combined with other procedures.

Response Cost Methods

13. Four methods for implementing response cost are: as a direct fine, as a bonus response cost, combined with positive reinforcement, and within a group arrangement.

Using Response Cost Effectively

14. To use response cost effectively, practitioners should determine the immediacy of the fine, decide whether bonus response cost is a preferred option, ensure reinforcer reserve, recognize the potential for unplanned or unexpected outcomes, avoid overusing response cost, and keep good records on its effects.

Response Cost Considerations

15. Implementing response cost may increase student aggressiveness, produce avoidance responses, affect collateral reductions of desired behaviors, and call attention to the punished behavior. The effects of response cost can also be unpredictable.
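The "ensure reinforcer reserve" guideline in point 14 can be sketched in code. This is a hypothetical illustration, not a procedure from the text: the class name, token amounts, and the rule that a fine never drives the balance below zero are all assumptions made for the example.

```python
# Hypothetical sketch of a token-based response cost procedure that
# preserves a reinforcer reserve: a fine can never withdraw more
# tokens than the learner currently has.

class ResponseCostAccount:
    def __init__(self, starting_tokens):
        self.tokens = starting_tokens

    def reinforce(self, amount=1):
        """Deliver tokens contingent on appropriate behavior."""
        self.tokens += amount

    def fine(self, amount=1):
        """Withdraw tokens contingent on the target inappropriate
        behavior, capped at the current balance."""
        applied = min(amount, self.tokens)  # cannot fine below zero
        self.tokens -= applied
        return applied

account = ResponseCostAccount(starting_tokens=5)
account.fine(2)        # a call-out costs 2 tokens -> 3 remain
account.reinforce(1)   # on-task behavior earns a token -> 4 remain
account.fine(10)       # only the 4 remaining tokens can be withdrawn
print(account.tokens)  # 0
```

Capping the fine at the current balance is one simple way to keep the ratio of reinforcement to response cost from going negative; an alternative is the bonus response cost arrangement, in which fines are taken only from a separately awarded bonus pool.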


Motivating Operations

Key Terms

abative effect
abolishing operation (AO)
behavior-altering effect
conditioned motivating operation (CMO)
discriminative stimulus (SD) related to punishment
establishing operation (EO)
evocative effect
function-altering effect
motivating operation (MO)
recovery from punishment procedure
reflexive conditioned motivating operation (CMO-R)
repertoire-altering effect
reinforcer-abolishing effect
reinforcer-establishing effect
surrogate conditioned motivating operation (CMO-S)
transitive conditioned motivating operation (CMO-T)
unconditioned motivating operation (UMO)
unpairing
value-altering effect

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List©, Third Edition

Content Area 3: Principles, Processes, and Concepts

3-8 Define and provide examples of establishing operations.

© 2006 The Behavior Analyst Certification Board, Inc.,® (BACB®) all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

This chapter was written by Jack Michael.

From Chapter 16 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


In commonsense psychology what people do at any particular moment is at least partly a function of what they want at that moment. From a behavioral perspective based on Skinner's analysis of drive (1938, 1953), wanting something may be interpreted to mean that (a) the occurrence of what is wanted would function as a reinforcer at that moment, and (b) the current frequency of any behavior that has previously been so reinforced will increase. This chapter surveys and classifies variables that have these two motivating effects.

Definition and Characteristics of Motivating Operations

Basic Features

In their treatment of motivation, Keller and Schoenfeld (1950) identified the drive concept in terms of a relation between certain environmental variables, which they called establishing operations, and certain changes in behavior. Although not conforming exactly to their usage, establishing operation (EO) was reintroduced in 1982 (Michael, 1982, 1993) as a term for any environmental variable that (a) alters the effectiveness of some stimulus, object, or event as a reinforcer; and (b) alters the current frequency of all behavior that has been reinforced by that stimulus, object, or event. The term establishing operation (EO) is now commonly used in applied behavior analysis (e.g., Iwata, Smith, & Michael, 2000; McGill, 1999; Michael, 2000; Smith & Iwata, 1997; Vollmer & Iwata, 1991).

The term motivating operation (MO) has recently been suggested as a replacement for the term establishing operation (Laraway, Snycerski, Michael, & Poling, 2001), along with the terms value altering and behavior altering for the two defining effects described previously. This more recent arrangement will be presented in this chapter.1

The value-altering effect is either (a) an increase in the reinforcing effectiveness of some stimulus, object, or event, in which case the MO is an establishing operation (EO); or (b) a decrease in reinforcing effectiveness, in which case the MO is an abolishing operation (AO). The behavior-altering effect is either (a) an increase in the current frequency of behavior that has been reinforced by some stimulus, object, or event, called an evocative effect; or (b) a decrease in the current frequency of behavior that has been reinforced by some stimulus, object, or event, called an abative effect.2 These relations are shown in Figure 1.

1J. R. Kantor's setting factor (1959d, p. 14) included motivating operations as described, but also includes some events that do not fit the specific definition in terms of the two effects. See Smith and Iwata (1997, pp. 346–348) for a treatment of the setting factor as an antecedent influence on behavior.

2The usefulness of the new term, abative, is described in detail in Laraway, Snycerski, Michael, and Poling (2001).

For example, food deprivation is an EO that increases the effectiveness of food as a reinforcer and evokes all behavior that has been reinforced with food. Food ingestion (consuming food) is an AO that decreases the effectiveness of food as a reinforcer, and abates all behavior that has been followed by food reinforcement. These relations are shown in Figure 2.

An increase in painful stimulation is an EO that increases the effectiveness of pain reduction as a reinforcer and evokes all behavior that has been reinforced by pain reduction. A decrease in painful stimulation is an AO that decreases the effectiveness of pain reduction as a reinforcer and abates all behavior that has been followed by pain reduction. These relations are shown in Figure 3.

Most of the statements in this chapter about value-altering and behavior-altering effects refer to relations involving reinforcement rather than punishment. It is reasonable to assume that MOs also alter the effectiveness of stimuli, objects, and events as punishers, with either establishing or abolishing effects; and also alter the current frequency of all behavior that has been punished by those stimuli, objects, or events, either abating or evoking such behavior. At the present time, however, motivation with respect to punishment is just beginning to be dealt with in applied behavior analysis. Later in the chapter a small section considers the role of UMOs for punishment, but it is likely that most of what is said about MOs and reinforcement will be extended to punishment in future treatments of motivation.

Additional Considerations

Direct and Indirect Effects

Behavior-altering effects are actually more complex than has been implied so far. The alteration in frequency can be the result of (a) a direct evocative or abative effect of the MO on response frequency and (b) an indirect effect on the evocative or abative strength of relevant discriminative stimuli (SDs). One would also expect an MO to have a value-altering effect on any relevant conditioned reinforcers, which, in turn, would have a behavior-altering effect on the type of behavior that has been reinforced by those conditioned reinforcers. This relation will be considered later in connection with conditioned motivating operations.


Figure 1 Motivating operations (MOs) and their two defining effects.

Establishing Operation (EO)

• Value-altering effect: An increase in the current effectiveness of some stimulus, object, or event as reinforcement.
• Behavior-altering effect: An increase in the current frequency of all behavior that has been reinforced by that stimulus, object, or event (i.e., an evocative effect).

Abolishing Operation (AO)

• Value-altering effect: A decrease in the current effectiveness of some stimulus, object, or event as reinforcement.
• Behavior-altering effect: A decrease in the current frequency of all behavior that has been reinforced by that stimulus, object, or event (i.e., an abative effect).
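The classification in Figure 1 can be expressed as a small data model. This sketch is illustrative only and is not from the text: the class name and attribute names are assumptions, and the model captures just the two defining effects (value-altering and behavior-altering) and how the EO/AO label follows from the first.

```python
# Hypothetical data model of Figure 1: an MO is an establishing
# operation (EO) or an abolishing operation (AO) according to its
# value-altering effect, and its behavior-altering effect follows:
# an EO is evocative, an AO is abative.

from dataclasses import dataclass

@dataclass
class MotivatingOperation:
    name: str
    increases_reinforcer_effectiveness: bool  # the value-altering effect

    @property
    def mo_type(self) -> str:
        return "EO" if self.increases_reinforcer_effectiveness else "AO"

    @property
    def behavior_altering_effect(self) -> str:
        # An EO evokes relevant behavior; an AO abates it.
        return "evocative" if self.increases_reinforcer_effectiveness else "abative"

food_deprivation = MotivatingOperation("food deprivation", True)
food_ingestion = MotivatingOperation("food ingestion", False)
print(food_deprivation.mo_type, food_deprivation.behavior_altering_effect)  # EO evocative
print(food_ingestion.mo_type, food_ingestion.behavior_altering_effect)      # AO abative
```

The point of the pairing is that the two defining effects always travel together: knowing which way an operation alters reinforcer value tells you which way it alters the current frequency of the relevant behavior.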

Not Just Frequency

In addition to frequency, other aspects of behavior can result from a change in an MO, such as response magnitude (a more or less forceful response), response latency (a shorter or longer time from MO or SD occurrence until the first response), relative frequency (a larger or smaller proportion of responses to the total number of response opportunities), and others. However, because frequency is the best known measure of operant relations, it will be the behavioral alteration referred to in the remainder of this chapter.

A Common Misunderstanding

Sometimes the behavior-altering effect is interpreted as an alteration in frequency due to the organism's encountering a more or less effective form of reinforcement. This implies that the increase or decrease occurs only after the reinforcer has been obtained. The critical observation contradicting this notion is that a strong relation exists between MO level and responding in extinction—when no reinforcers are being received (see Keller & Schoenfeld, 1950, pp. 266–267 and Figure 60). In terms of an organism's general effectiveness, an MO should

Figure 2 Motivating operations related to food.

Food Deprivation as an Establishing Operation (EO)

• Value-altering effect: An increase in the reinforcing effectiveness of food.
• Behavior-altering evocative effect: An increase in the current frequency of all behavior that has been reinforced by food.

Food Ingestion as an Abolishing Operation (AO)

• Value-altering effect: A decrease in the reinforcing effectiveness of food.
• Behavior-altering abative effect: A decrease in the current frequency of all behavior that has been reinforced by food.

Figure 3 Motivating operations related to painful stimulation.

Increase in Painful Stimulation as an Establishing Operation (EO)

• Value-altering effect: An increase in the reinforcing effectiveness of pain reduction.
• Behavior-altering evocative effect: An increase in the current frequency of all behavior that has been reinforced by pain reduction.

Decrease in Painful Stimulation as an Abolishing Operation (AO)

• Value-altering effect: A decrease in the reinforcing effectiveness of pain reduction.
• Behavior-altering abative effect: A decrease in the current frequency of all behavior that has been reinforced by pain reduction.


evoke the relevant behavior even if it is not at first successful. At present, the two effects (value-altering and behavior-altering) will be considered independent in the sense that one does not derive from the other, although they are probably related at the neurological level.

Current versus Future Effects: Behavior-Altering versus Repertoire-Altering Effects

As a result of an environmental history, an organism has an operant repertoire of MO (motivating operation), SD (discriminative stimulus), and R (response) relations. (Also present is a respondent repertoire of stimuli capable of eliciting responses.) MOs and SDs are components of the existing repertoire; they are antecedent variables that have behavior-altering effects. Antecedent events can evoke or abate responses, but their simple occurrence does not alter the organism's operant repertoire of functional relations. These antecedent variables are in contrast to consequence variables, whose main effect is to change the organism's repertoire of functional relations so that the organism behaves differently in the future. Consequence variables include reinforcers, punishers, and the occurrence of a response without its reinforcer (extinction procedure) or without its punisher (recovery from punishment procedure). This is what is meant when MOs and SDs are said to alter the current frequency of all behavior relevant to that MO; but reinforcers, punishers, and response occurrence without consequence alter the future frequency of whatever behavior immediately preceded those consequences.

It is useful to have a different name for these two quite different effects of a behaviorally relevant event; thus repertoire-altering effect (Schlinger & Blakely, 1987) is contrasted with behavior-altering effect. In this chapter the distinction between the two will usually be implied by referring to current frequency or to future frequency.

A Critical Distinction: Motivative versus Discriminative Relations

MOs and SDs are both antecedent variables that alter the current frequency of some particular type of behavior. They are also both operant (rather than respondent) variables in that they control response frequency because of their relation to reinforcing or punishing consequences (rather than to a respondent unconditioned stimulus). At this point, therefore, it may be useful to review the contrasting definitions of these two types of antecedent variables.

An SD is a stimulus that controls a type of behavior because that stimulus has been related to the differential availability of an effective reinforcer for that type of behavior. Differential availability means that the relevant consequence has been available in the presence of, and unavailable in the absence of, the stimulus. Considering food deprivation and pain increase as putative SDs requires that food and pain removal be available in the presence of those conditions. This is somewhat problematic in that both can and often do occur under conditions when the reinforcers are not available. A true SD constitutes at least a probabilistic guarantee that the relevant consequence will follow the response. Organisms may be food deprived or subjected to painful stimulation for considerable periods during which the condition cannot be alleviated.

More serious for the interpretation as an SD is that the unavailability of a reinforcer in the absence of the stimulus implies that the unavailable event would have been effective as a reinforcer if it had been obtained. It is with respect to this requirement that most motivative variables fail to qualify as discriminative stimuli. It is true that food reinforcement is, in a sense, unavailable in the absence of food deprivation, and pain-reduction reinforcement is unavailable in the absence of pain, but this is not the kind of unavailability that occurs in discrimination training and that develops the evocative effect of a true SD (Michael, 1982). In the absence of food deprivation or painful stimulation, no MO is making food or pain removal effective as a reinforcer; therefore, there is no reinforcer unavailability in the sense that is relevant to the discriminative relation.

However, food deprivation and painful stimulation easily qualify as MOs, as conditions that alter the effectiveness of some stimuli, objects, or events as reinforcers, and that simultaneously alter the frequency of the behavior that has been reinforced by those stimuli, objects, or events.

To summarize, a useful contrast can usually be made as follows: Discriminative stimuli are related to the differential availability of a currently effective form of reinforcement for a particular type of behavior; motivative variables (MOs) are related to the differential reinforcing effectiveness of a particular type of environmental event.

Unconditioned Motivating Operations (UMOs)

For all organisms there are events, operations, and stimulus conditions with value-altering motivating effects that are unlearned. Humans are born with the capacity to be more affected by food reinforcement as a result of food deprivation, or more affected by pain reduction reinforcement as a result of pain onset or increase. Thus, food deprivation and painful stimulation are called unconditioned motivating operations (UMOs).3 By contrast, needing to enter a room through a locked door establishes the key to the door as an effective reinforcer, but this value-altering effect is clearly a function of a learning history involving doors and keys. MOs of this kind are called conditioned motivating operations (CMOs) and will be considered in detail later in this chapter.

Note that it is the unlearned aspect of the value-altering effect that results in an MO being classified as unconditioned. The behavior-altering effect of an MO is usually learned. Said another way, we are born with the capacity to be more affected by food reinforcement as a result of food deprivation, but we have to learn most of the behavior that obtains food—asking for some, going to where it is kept, and so forth.

Nine Main UMOs for Humans

Deprivation and Satiation UMOs

Deprivation with respect to food, water, oxygen, activity, and sleep all have resultant reinforcer-establishing and evocative effects. By contrast, food and water ingestion, oxygen intake, engaging in activity, and sleeping have reinforcer-abolishing and abative effects.

UMOs Relevant to Sexual Reinforcement

For many nonhuman mammals, hormonal changes in the female are related to time passage, ambient light conditions, daily average temperature, or other features of the environment that are related phylogenically to successful reproduction. These environmental features, or hormonal changes, or both, can be considered UMOs that cause contact with a male to be an effective reinforcer for the female. They may produce visual changes in some aspect of the female's body and elicit chemical (olfactory) attractants that function as UMOs for the male, establishing contact with a female as a reinforcer and evoking any behavior that has produced such contact. The various hormonal changes may also evoke certain behaviors by the female (e.g., the assumption of a sexually receptive posture) that as stimuli function as UMOs for sexual behavior by the male. Superimposed on this collection of UMOs and unconditioned elicitors is a deprivation effect that may also function as a UMO.

3The terms unconditioned and conditioned are used to modify MOs in the same way they modify respondent-eliciting stimuli and operant reinforcers and punishers. Unconditioned motivating operations, like unconditioned eliciting stimuli for respondent behavior and unconditioned reinforcers and punishers, have effects that are not dependent on a learning history. The effects of conditioned motivating operations, like the effects of conditioned eliciting stimuli and conditioned reinforcers and punishers, are dependent on a learning history.

In the human, learning plays such a strong role in the determination of sexual behavior that the role of unlearned environment–behavior relations has been difficult to determine. The effect of hormonal changes in the female on the female's behavior is unclear, as is the role of chemical attractants on the male's behavior. Other things being equal, both male and female seem to be affected by the passage of time since last sexual activity (deprivation), functioning as a UMO that establishes the effectiveness of sexual stimulation as a reinforcer and that evokes the behavior that has achieved that kind of reinforcement. Working in the opposite direction, sexual orgasm functions as a UMO that abolishes the effectiveness of sexual stimulation as a reinforcer and abates (decreases the frequency of) behavior that has achieved that kind of reinforcement. In addition, tactile stimulation of erogenous regions of the body seems to function as a UMO in making further similar stimulation even more effective as reinforcement and in evoking all behavior that in the past has achieved such further stimulation.

Temperature Changes

Becoming uncomfortably cold is a UMO that establishes becoming warmer as a reinforcer, and evokes any behavior that has had that effect. A return to a normal temperature condition is a UMO that abolishes an increase in warmth as a reinforcer and abates behavior that has had a warming effect. Becoming uncomfortably warm is a UMO that establishes a decrease in the temperature condition as an effective reinforcer and evokes any behavior that has resulted in a body-cooling effect. A return to a normal temperature condition abolishes being cooler as a reinforcer and abates body-cooling behavior.

These temperature-related UMOs could be combined by specifying becoming uncomfortable with respect to temperature as a UMO that establishes a change for the better as a reinforcer, and that evokes any behavior that has achieved such an effect. Returning to a normal condition would then have appropriate UMO abolishing and abative effects. Still, it seems better at this point to conceptualize the situation as involving different UMOs. Also, these UMOs could be grouped with the pain UMO and included under the broad category of aversive stimulation, but it will be clearer at this point to consider them separately.

Painful Stimulation

An increase in painful stimulation establishes pain reduction as a reinforcer and evokes the behavior (called escape behavior) that has achieved such reduction. A decrease in painful stimulation abolishes the effectiveness of pain reduction as a reinforcer and abates the behavior that has been reinforced by pain reduction.


Table 1 Nine Unconditioned Motivating Operations (UMOs) and Their Reinforcer-Establishing and Evocative Effects

• Food deprivation. Reinforcer-establishing effect: increases the effectiveness of food ingestion as a reinforcer. Evocative effect: increases the current frequency of all behavior previously reinforced with food.
• Water deprivation. Reinforcer-establishing effect: increases the effectiveness of water ingestion as a reinforcer. Evocative effect: increases the current frequency of all behavior previously reinforced with water.
• Sleep deprivation. Reinforcer-establishing effect: increases the effectiveness of sleep as a reinforcer. Evocative effect: increases the current frequency of all behavior previously reinforced with being able to sleep.
• Activity deprivation. Reinforcer-establishing effect: increases the effectiveness of activity as a reinforcer. Evocative effect: increases the current frequency of all behavior previously reinforced with activity.
• Oxygen deprivation.* Reinforcer-establishing effect: increases the effectiveness of breathing as a reinforcer. Evocative effect: increases the current frequency of all behavior previously reinforced with being able to breathe.
• Sex deprivation. Reinforcer-establishing effect: increases the effectiveness of sexual stimulation as a reinforcer. Evocative effect: increases the current frequency of all behavior previously reinforced with sexual stimulation.
• Becoming too warm. Reinforcer-establishing effect: increases the effectiveness of a temperature decrease as a reinforcer. Evocative effect: increases the current frequency of all behavior previously reinforced with becoming cooler.
• Becoming too cold. Reinforcer-establishing effect: increases the effectiveness of a temperature increase as a reinforcer. Evocative effect: increases the current frequency of all behavior previously reinforced with becoming warmer.
• Increase in painful stimulus. Reinforcer-establishing effect: increases the effectiveness of a decrease in pain as a reinforcer. Evocative effect: increases the current frequency of all behavior previously reinforced with a decrease in painful stimulation.

*It is not actually oxygen deprivation that functions as a UMO, but rather the buildup of carbon dioxide in the blood as a result of not being able to excrete carbon dioxide because of not being able to breathe or because of breathing in air that is as rich in carbon dioxide as the exhaled air.

4For a discussion of Skinner's approach to emotional predispositions in the general context of MOs, see Michael (1993, p. 197).

In addition to establishing pain reduction as a reinforcer and evoking the behavior that has produced pain reduction, painful stimulation in the presence of another organism evokes aggressive behavior toward that organism. In some organisms, including humans, some of this aggression may be the elicitative result of the pain functioning as a respondent unconditioned stimulus (US) (Ulrich & Azrin, 1962). However, a case can be made for the painful stimulation also functioning as a UMO that makes events such as signs of damage to another organism effective as reinforcers and evokes the behavior that has been reinforced by the production of such signs. Skinner made such a case (1953, pp. 162–170) in his analysis of anger, and extended the analysis to the emotions of love and fear.4

Review of UMO Effects

Table 1 summarizes the reinforcer-establishing and evocative effects of the nine UMOs for humans. Similarly, Table 2 shows the reinforcer-abolishing and abative effects.

The Cognitive Misinterpretation

The behavior-altering effects of UMOs on human behavior are generally understood to some degree. Increased current frequency of behavior that has made us warmer as a result of becoming too cold is a part of our everyday experience, as is the cessation of this behavior when normal temperature returns. That water deprivation should evoke behavior that has obtained water, and that this behavior should cease when water has been obtained, seems only reasonable. However, the variables responsible for these effects are often misconstrued.

The cognitive interpretation of the behavior-altering effects of UMOs on the behavior of a verbally sophisticated individual is in terms of that individual's understanding (being able to verbally describe) the situation and then behaving appropriately as a result of that understanding. On the contrary, the behavioral interpretation that reinforcement automatically adds the reinforced behavior to the repertoire that will be evoked and abated by the relevant UMO is not always well appreciated. From the behavioral perspective, the person, verbally sophisticated or not, does not have to "understand" anything for an MO to have value-altering and behavior-altering effects.


Table 2 UMOs that Decrease Reinforcer Effectiveness and Abate Relevant Behavior

• Food ingestion (after food deprivation). Reinforcer-abolishing effect: decreases the effectiveness of food as a reinforcer. Abative effect: decreases the current frequency of all behavior previously reinforced with food.
• Water ingestion (after water deprivation). Reinforcer-abolishing effect: decreases the effectiveness of water as a reinforcer. Abative effect: decreases the current frequency of all behavior previously reinforced with water.
• Sleeping (after sleep deprivation). Reinforcer-abolishing effect: decreases the effectiveness of sleep as a reinforcer. Abative effect: decreases the current frequency of all behavior previously reinforced with sleep.
• Being active (after activity deprivation). Reinforcer-abolishing effect: decreases the effectiveness of activity as a reinforcer. Abative effect: decreases the current frequency of all behavior previously reinforced with activity.
• Breathing (after not being able to breathe). Reinforcer-abolishing effect: decreases the effectiveness of breathing as a reinforcer. Abative effect: decreases the current frequency of all behavior previously reinforced with being able to breathe.
• Orgasm or sex stimulation (after sex deprivation). Reinforcer-abolishing effect: decreases the effectiveness of sexual stimulation as a reinforcer. Abative effect: decreases the current frequency of all behavior previously reinforced with sexual stimulation.
• Becoming cooler (after being too warm). Reinforcer-abolishing effect: decreases the effectiveness of a temperature decrease as a reinforcer. Abative effect: decreases the current frequency of all behavior previously reinforced with becoming cooler.
• Becoming warmer (after being too cold). Reinforcer-abolishing effect: decreases the effectiveness of a temperature increase as a reinforcer. Abative effect: decreases the current frequency of all behavior previously reinforced with becoming warmer.
• Painful stimulation decrease (while in pain). Reinforcer-abolishing effect: decreases the effectiveness of a pain decrease as a reinforcer. Abative effect: decreases the current frequency of all behavior previously reinforced with a decrease in pain.
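Tables 1 and 2 describe paired operations: for each reinforcer, a deprivation (or aversive-onset) EO and a corresponding satiation (or aversive-offset) AO. The pairing can be sketched as a lookup; the dictionary and function names below are hypothetical, chosen only to mirror the table rows.

```python
# Hypothetical pairing of the UMOs from Tables 1 and 2: each
# establishing operation (EO) has a corresponding abolishing
# operation (AO) that reverses its value-altering effect on the
# same reinforcer.

UMO_PAIRS = {
    "food":     {"eo": "food deprivation",     "ao": "food ingestion"},
    "water":    {"eo": "water deprivation",    "ao": "water ingestion"},
    "sleep":    {"eo": "sleep deprivation",    "ao": "sleeping"},
    "activity": {"eo": "activity deprivation", "ao": "being active"},
    "oxygen":   {"eo": "oxygen deprivation",   "ao": "breathing"},
    "sex":      {"eo": "sex deprivation",      "ao": "orgasm or sex stimulation"},
    "cooling":  {"eo": "becoming too warm",    "ao": "becoming cooler"},
    "warming":  {"eo": "becoming too cold",    "ao": "becoming warmer"},
    "pain reduction": {"eo": "increase in painful stimulation",
                       "ao": "decrease in painful stimulation"},
}

def value_altering_effect(reinforcer, operation):
    """Classify an operation's value-altering effect on its reinforcer."""
    pair = UMO_PAIRS[reinforcer]
    if operation == pair["eo"]:
        return "establishes"  # reinforcer-establishing effect (Table 1)
    if operation == pair["ao"]:
        return "abolishes"    # reinforcer-abolishing effect (Table 2)
    raise ValueError("operation is not a listed UMO for this reinforcer")

print(value_altering_effect("food", "food deprivation"))  # establishes
print(value_altering_effect("food", "food ingestion"))    # abolishes
```

The symmetry is the point: the behavior-altering effect tracks the value-altering one, so the EO column of Table 1 evokes the relevant behavior while the matching AO row of Table 2 abates it.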

The cognitive misinterpretation of the behavior-altering effect encourages two forms of practical ineffectiveness. First, insufficient effort may be made to teach appropriate behavior to individuals with limited verbal repertoires, on the grounds that they will not be able to understand the relevant environment–behavior relations. Second, there may be insufficient preparation for an increase in whatever behavior preceded the relevant reinforcement (often some form of inappropriate behavior such as yelling and crying), again because it is not thought that the individual would understand the relation between his behavior and any consequence.

Relevance of the MO to a Generality of Effects

In the applied area, the reinforcer-establishing effect of MOs seems to be increasingly understood and used. Edibles may be temporarily withheld so that they will be more effective as reinforcers for behavior that is being taught; similarly with music, toys, and attention from an adult. However, it is not so widely recognized that the behavior being taught with these reinforcers will not occur in future circumstances, even if it was well learned and is a part of the learner's repertoire, unless the relevant MO is in effect. This issue is considered in the case of punishment where the role of the MO in the occurrence of behavior in future circumstances is more complex and even more likely to be overlooked.

The importance of making the stimulus conditions during instruction similar to those present in the settings and situations in which generalization of the learned behavior is desired is generally understood. However, the fact that the relevant MO must also be in effect for responding to be generalized and maintained seems more easily overlooked.

Weakening the Effects of UMOs

For practical reasons it may be necessary to weaken the effects of an MO. Both reinforcer-establishing and evocative effects of UMOs can be temporarily weakened by the relevant reinforcer-abolishing and abative operations. For example, food ingestion will have an abative effect on undesirable behavior being evoked by food deprivation, such as food stealing, but the behavior will return when deprivation is again in effect. In general it is not possible to permanently weaken the value-altering effects of UMOs. Water deprivation will always make water more effective as a reinforcer, and pain increase will always make pain decrease more effective as a reinforcer. But the behavior-altering effects are clearly based on a history of reinforcement, and such histories can be reversed by an extinction procedure—let the evoked response occur without its reinforcement. (And the abative effects of a punishment history can be reversed by allowing the response to occur without punishment—the recovery from punishment procedure.) With respect to the UMOs, however, the relevant reinforcer must be obtainable in some acceptable way as the undesirable behavior is being extinguished. People cannot be expected to do without the various unconditioned reinforcers controlled by UMOs.

UMOs for Punishment

An environmental variable that alters the punishing effectiveness of a stimulus, object, or event, and alters the frequency of the behavior that has been so punished, is an MO for punishment. If the value-altering effect does not depend on a learning history, then such a variable would qualify as a UMO.

Value-Altering Effect

An increase in painful stimulation functions as punishment as long as the current level of painful stimulation is not so high that an increase cannot occur. So the UMO must consist of a current pain level that is still capable of increase, or a change to such a level from one that is so high that further increase is not possible. This means that in general a pain increase will almost always function as an unconditioned punisher.5 This is also true for other kinds of stimulation that function as unconditioned punishers—some sounds, odors, tastes, and so on.

Most of the punishers that affect humans, however, are effective because of a learning history; that is, they are conditioned rather than unconditioned punishers. If the learning history consisted of pairing conditioned punishers with unconditioned punishers, then the UMOs for those unconditioned punishers are CMOs for the conditioned punishers. (This UMO–CMO relation is described in more detail in the following section on CMOs.) If they are punishers because of a historical relation to a reduced availability of reinforcers, then the MOs for those reinforcers are the MOs for the conditioned punishers. Removing food as a punisher, or more commonly a change to a stimulus in the presence of which food has been less available, will only function as a punisher if food is currently effective as a reinforcer. Thus, the MO for food removal as a punisher is food deprivation.

Social disapproval (expressed by a frown, a head shake, or a specific vocal response such as "No" or "Bad") constitutes a stimulus condition in the presence of which the typical reinforcers provided by the disapproving person have been withheld. However, such a stimulus condition will function as punishment only if the MOs for those withheld reinforcers are currently in effect. The punishment procedure called time-out from reinforcement is similar and will only function as punishment if the reinforcers that are made unavailable—from which the person is timed out—are truly effective reinforcers at the time the person is being punished. Response cost—taking away objects such as tokens that can be exchanged for various reinforcers, or imposing a monetary fine or a fine with respect to some kind of point or score bank—is a more complex type of procedure involving a time delay between when the punishment operation occurs and when the decrease in reinforcers takes effect. Still, unless the events that have been subjected to delayed removal (the things that the tokens, points, or money can be exchanged for) are effective as reinforcers at the time the response cost procedure occurs, punishment will not have occurred.

5Painful stimulation can be a conditioned punisher if it had been historically associated with some other punisher, such as when pain has been evidence that something more serious is wrong. On the other hand, painful stimulation can be a conditioned reinforcer if historically associated with some reinforcer in addition to its own termination, such as when muscle pain is related to having had an effective exercise workout, or when painful stimulation has been paired with some form of sexual reinforcement.

Behavior-Altering Effect

In general, observing a punishment effect is more complex than observing a reinforcement effect because one must also consider the status of the variable responsible for the occurrence of the punished behavior. This applies to the effects of MOs as well. The behavior-altering evocative effect of an MO for reinforcement consists of an increase in the current frequency of any behavior that had been so reinforced. For example, food deprivation evokes (increases the current frequency of) all behavior that had been reinforced with food. The behavior-altering effect of an MO for punishment is a decrease in the current frequency of all behavior that had been so punished. The onset of the MO will have an abative effect with respect to the type of behavior that had been punished. However, the observation of such an abative effect would not be possible unless the previously punished behavior was already occurring at a frequency sufficient that its decrease could be observed when the MO for punishment occurred. That is, the observation of the abative effect of an MO for punishment requires the evocative effect of an MO for reinforcement with respect to the punished behavior. In the absence of the MO for reinforcement, there would be no behavior to be abated by the MO for punishment, even though it had such an abative effect.

Suppose that a time-out procedure had been used to punish some behavior that was disruptive to the therapy situation. Only if the MOs relevant to the reinforcers available in the situation were in effect would one expect the time-out to function as punishment. And then only if those MOs were in effect would one expect to see the abative effect of the punishment procedure on the disruptive behavior. But only if the MO for the disruptive behavior were also in effect would there be any disruptive behavior to be abated. These complex behavioral relations have not received much attention in the conceptual, experimental, or applied literatures, but seem to follow naturally from existing knowledge of reinforcement, punishment, and motivating operations. Behavior analysts should be aware of the possible participation of these behavioral relations in any situation involving punishment.6

A Complication: Multiple Effects of the Same Variable

Any behaviorally important event typically has more than one effect, and it is important both conceptually and practically that the various effects all be recognized and not confused with each other (Skinner, 1953, pp. 204–224). Multiple effects are apparent in the animal laboratory demonstration of a simple operant chain. A food-deprived rat is taught to pull a cord hanging from the ceiling of the chamber that turns on an auditory stimulus such as a buzzer. In the presence of the buzzer sound the rat is then taught to make a lever press that delivers a food pellet. The onset of the buzzer will now have two obvious operant effects: (a) It is an SD that evokes the lever-press response, and (b) it is a conditioned reinforcer that increases the future frequency of the cord-pulling response. The first is a behavior-altering evocative effect, and the second is a repertoire-altering reinforcement effect. These effects are in the same direction, an increase in current frequency and an increase in future frequency, although not necessarily for the same type of response.7
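The two simultaneous effects of the buzzer onset can be sketched as a toy bookkeeping model. This is purely illustrative and not from the chapter; the class, numbers, and probabilities are invented to show that one event changes both a current-behavior probability (evocative, SD effect) and a future-frequency value (repertoire-altering, conditioned-reinforcer effect):

```python
# Toy model (invented for illustration): the two operant effects of buzzer
# onset in the cord-pull -> buzzer -> lever-press -> food chain.

class Rat:
    def __init__(self):
        # Repertoire: relative future frequency (strength) of each response.
        self.strength = {"cord_pull": 1.0, "lever_press": 1.0}
        self.buzzer_on = False

    def buzzer_onset(self):
        """Buzzer onset has two effects at once."""
        # (b) Repertoire-altering: as a conditioned reinforcer, it strengthens
        # the response that produced it (cord pulling) for the future.
        self.strength["cord_pull"] += 0.5
        # (a) Behavior-altering: as an SD, it evokes lever pressing now.
        self.buzzer_on = True

    def current_prob_lever_press(self):
        # Lever pressing is much more likely while the buzzer is on.
        return 0.9 if self.buzzer_on else 0.05

rat = Rat()
before = rat.strength["cord_pull"]
rat.buzzer_onset()
assert rat.strength["cord_pull"] > before      # future frequency increased
assert rat.current_prob_lever_press() == 0.9   # current frequency increased
```

Note that, as the text says, both effects run in the same direction (increases), but they bear on different responses: the evocative effect on the lever press, the reinforcement effect on the cord pull.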

Similarly, a stimulus that functions as a discriminative stimulus (SD) related to punishment will have abative effects on the current frequency of some type of response, and will function as a conditioned punisher that decreases the future frequency of the type of response that preceded its onset. Again, the effects are in the same direction, but in this case both are decreases.

6As an aside, it is sometimes argued that behavioral principles are too simple for the analysis of the complexities of human behavior, and therefore some nonbehavioral—usually cognitive—approach is required. It may well be true that the behavioral repertoire of the people who make this argument is too simple for the task. However, the principles themselves are not so simple, as can be seen from the preceding effort to understand the effects of punishment MOs, or from footnote 5 with respect to the learned functions of painful stimulation.

7The buzzer sound will also function as a respondent conditioned eliciting stimulus for smooth muscle and gland responses typically elicited by food in the mouth, and will condition such responses to any other stimulus that is present at that time—an instance of higher order conditioning—but this chapter focuses on operant relations.

Environmental events that function as UMOs, like those that function as SDs, will (in their function as UMOs) typically have behavior-altering effects on the current frequency of a type of behavior, and (as consequences) have repertoire-altering effects on the future frequency of whatever behavior immediately preceded the onset of the event. An increase in painful stimulation will, as an MO, increase the current frequency of all behavior that has alleviated pain, and as a behavioral consequence decrease the future frequency of whatever behavior preceded that instance of pain increase. In this case of multiple control, however, the effects will be in opposite directions.

In general, events that have a UMO evocative effect will also function as punishment for the response immediately preceding the onset of the event. This statement must be qualified somewhat for events that have such gradual onsets (like food deprivation) that they cannot easily function as response consequences. Table 3 shows these multiple effects for the UMOs that establish the effectiveness of certain events as reinforcers. Events that have a UMO abative effect with respect to current frequency generally do have onsets sufficiently sudden to function as behavioral consequences (e.g., food ingestion) and will be reinforcers for the immediately preceding behavior.

Applied Implications

Many behavioral interventions involve a manipulation chosen because of (a) its MO value-altering or behavior-altering effect, or (b) its repertoire-altering effect as a reinforcer or a punisher. But irrespective of the purpose of the manipulation, it is important to be aware that an effect in the opposite direction will also occur, which may or may not be a problem. The fact that reinforcement is also a form of satiation will not be a problem if the reinforcer magnitude can be quite small. The fact that a satiation operation will also reinforce the behavior preceding it will not be a problem if that behavior is not undesirable. The fact that a deprivation operation with respect to an event that will be used productively as a reinforcer could also function as a punisher for the behavior that preceded the deprivation operation will not be a problem if the deprivation onset is very slow, or if the behavior that is punished is not a valuable part of the person's repertoire.

Table 3 shows that any UMO used to make some event more effective as reinforcement or to evoke the type of behavior that has been reinforced by that event will also function as a punisher for whatever behavior immediately preceded the manipulation. Restricting breathing ability, making the environment too cold or too warm, or increasing painful stimulation are not likely to be used deliberately to control behavior in applied settings, but such changes could occur for other reasons (not under the behavior analyst's control). For this reason it is important to be aware of the two different kinds of effects they could have.

Table 3  Contrasting the Behavior-Altering and Repertoire-Altering Effects of Environmental Events as UMOs and as Punishers

Deprivation of food, water, sleep, activity, or sex
  Evocative effect on current behavior as a UMO: Increases the current frequency of all behavior that has been reinforced with food, water, sleep, activity, or sex.
  Repertoire-altering effect on future behavior as punishment: Should be punishment, but onset is too gradual to function as a behavioral consequence.

Oxygen deprivation
  Evocative effect on current behavior as a UMO: Increases the current frequency of all behavior that has been reinforced with being able to breathe.
  Repertoire-altering effect on future behavior as punishment: The sudden inability to breathe decreases the future frequency of the type of behavior that preceded that instance of being unable to breathe.

Becoming too cold
  Evocative effect on current behavior as a UMO: Increases the current frequency of all behavior that has been reinforced by becoming warmer.
  Repertoire-altering effect on future behavior as punishment: Decreases the future frequency of the type of behavior that preceded that instance of becoming too cold.

Becoming too warm
  Evocative effect on current behavior as a UMO: Increases the current frequency of all behavior that has been reinforced by becoming cooler.
  Repertoire-altering effect on future behavior as punishment: Decreases the future frequency of the type of behavior that preceded that instance of becoming too warm.

Increase in painful stimulation
  Evocative effect on current behavior as a UMO: Increases the current frequency of all behavior that has been reinforced with pain reduction.
  Repertoire-altering effect on future behavior as punishment: Decreases the future frequency of the type of behavior that preceded that instance of pain increase.

Similar opposite effects can be expected from manipulations that worsen8 the person's situation in any way, even though the worsening is related to a learning history. Such worsening will establish improvement as a reinforcer and will evoke any behavior that has been so reinforced. The nature of social attention as a reinforcer is unclear as to provenance (Michael, 2000, p. 404), but it would be safe to assume that any manipulation designed to increase the effectiveness of attention as a reinforcer (attention deprivation, for example) would also function as a punisher for the behavior that preceded the manipulation.

Conversely, any operation (e.g., a time-out procedure) designed as punishment to decrease the future frequency of the behavior preceding the manipulation will also function as an MO in evoking any behavior that has escaped the condition produced by the manipulation.

8Worsening here refers to any stimulus change that would function as punishment for behavior that preceded it. The term punishment is not quite appropriate in describing this CMO because reference is not being made to a decrease in future frequency of any behavior. Similarly, improvement is used to refer to a change that would function as reinforcement for behavior that preceded it, but when no reference is being made to an increase in the future frequency of any behavior. Although worsening and improving are useful terms in this context, they are not presented here as technical terms.

The behavior analyst should also recognize that any reinforcer-abolishing operation designed to make some event less effective as reinforcement (e.g., a satiation procedure) or to abate the type of behavior that has achieved that type of reinforcement will also function as reinforcement for the behavior immediately preceding the operation. Food ingestion is an abolishing operation for food as reinforcement and abates any food-reinforced behavior, but food ingestion also functions as reinforcement for the behavior immediately preceding the food ingestion. Presenting a high level of noncontingent attention will have a reinforcer-abolishing and an abative effect, but will function as a reinforcer for whatever behavior precedes the operation. Obversely, any operation designed to function as reinforcement will also have MO reinforcer-abolishing and behavior-abating effects.

Aversive Stimuli

Environmental events with combined MO evocative effects, repertoire-altering punishment effects, and respondent evocative effects with respect to certain smooth muscle and gland responses (heart rate increase, adrenal secretion, etc.) are often referred to as aversive stimuli, where the specific behavioral function [MO, unconditioned punisher (SP), unconditioned stimulus (US)] is not specified.


It is not clear at present just how close the correlation among these several functions is, nor is it clear that the advantages of an omnibus term of this sort outweigh the disadvantage of its lack of specificity. It is clear that some use of the term aversive stimulus is simply a behavioral translation of commonsense expressions for "unpleasant feelings," "unpleasant states of mind," and so on—a form of usage that is possibly fostered by the term's lack of specificity. For these reasons, aversive stimulus has not been used in this chapter to refer to MO or to repertoire-altering variables.

Conditioned Motivating Operations (CMOs)

Motivating variables that alter the reinforcing effectiveness of other stimuli, objects, or events, but only as a result of the organism's learning history, are called conditioned motivating operations (CMOs). As with UMOs, CMOs also alter the momentary frequency of all behavior that has been reinforced by those other events. In commonsense terms, some environmental variables, as a result of our experiences, make us want something different from what we wanted prior to encountering those variables, and induce us to try to obtain what we now want.

There seem to be at least three kinds of CMOs, all of which were motivationally neutral stimuli prior to their relation to another MO or to a form of reinforcement or punishment. Depending on their relation to the behaviorally significant event or condition, the three kinds of conditioned motivating operations are classified as surrogate, reflexive, or transitive. The surrogate CMO (CMO-S) accomplishes what the MO it was paired with accomplishes (is a surrogate for that MO), the reflexive CMO (CMO-R) alters a relation to itself (makes its own removal effective as reinforcement), and the transitive CMO (CMO-T) makes something else effective as reinforcement (rather than altering itself).

Surrogate CMO (CMO-S): A Stimulus That Has Been Paired with Another MO

CMO-S Description

The respondent conditioned stimulus (CS), operant conditioned reinforcer (Sr), and operant conditioned punisher (Sp) are each stimuli that acquired a form of behavioral effectiveness by being paired with a behaviorally effective stimulus. It is possible that stimuli that are paired with a UMO9 will become capable of the same value-altering and behavior-altering effects as that UMO. With respect to its MO characteristics, such a stimulus will be called a surrogate CMO, or a CMO-S.

9The neutral stimulus could be correlated with a CMO rather than a UMO, with the same transfer of effects.

This relation would be illustrated if stimuli that had been temporally related to decreases in temperature would have MO effects similar to the temperature decreases themselves. That is, in the presence of such stimuli, a temperature increase would be a more effective reinforcer, and behavior that had produced such an increase would occur at a higher frequency than would be appropriate for the actual temperature. Research in this area is discussed at length in Michael, 1993 (pp. 199–202), but will not be reviewed here. Suffice it to say that the evidence for such an effect is not strong. Also, the existence of this type of learned motivating operation is somewhat problematic from an evolutionary perspective (Mineka, 1975). Behaving as though an MO were in effect when it was not would seem to be opposed to the organism's best interest for survival. Trying to get warmer than is necessary for the existing temperature condition would not seem healthy, and such behavior might displace more important behavior. However, evolution does not always work perfectly.

With sexual motivation, MOs for aggressive behavior, and the other emotional MOs, the issue has not been addressed in terms specific to the CMO because its distinction from CS, Sr, and Sp has not been previously emphasized. The surrogate CMO is only just beginning to be considered within applied behavior analysis (see McGill, 1999, p. 396), but its effects could be quite prevalent. From a practical perspective, it may be helpful to consider the possibility of this type of CMO when trying to understand the origin of some puzzling or especially irrational behavior.

Weakening the Effects of Surrogate CMOs

Any relationship developed by a pairing procedure can generally be weakened by two kinds of unpairing: presenting the previously neutral stimulus without the previously effective stimulus, or presenting the effective stimulus as often in the absence as in the presence of the previously neutral one. For example, if the CMO-S, let's say a visual stimulus, that had often been paired with extreme cold now occurs frequently in normal temperature, its value-altering and behavior-altering effects would be weakened. Similarly, if extreme cold now occurred just as often in the absence of the CMO-S as in its presence, the effectiveness of the CMO-S would be reduced.


As mentioned earlier, the CMO-S is just beginning to be dealt with in applied behavior analysis (e.g., see McGill, 1999, p. 396), but its possible relation to problem behavior means that the behavior analyst should know how to weaken such CMOs.

Reflexive CMO (CMO-R): A Stimulus That Has Systematically Preceded Some Form of Worsening or Improvement

CMO-R Description

In the traditional "discriminated avoidance procedure,"10 an intertrial interval is followed by the onset of an initially neutral warning stimulus, which is in turn followed by the onset of painful stimulation—usually electric shock. Some arbitrary response (one that is not part of the animal's phylogenic pain–escape repertoire), such as lever pressing, terminates the painful stimulation (the animal escapes the pain) and restarts the intertrial interval. The same response, if it occurs during the warning stimulus, terminates that stimulus and the shock does not occur on that trial. The response at this phase of the procedure is said to have avoided the pain and is called an avoidance response. As a result of exposure to this procedure, many organisms learn to emit the relevant response during most of the warning stimulus occurrences, and thus receive very few of the shocks.
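The course of this procedure can be sketched as a toy simulation. This is an invented illustration, not from the chapter: the learning rule simply increases the probability of responding during the warning stimulus with exposure, standing in for the reinforcement produced by warning-stimulus termination.

```python
import random

# Toy simulation (invented for illustration) of discriminated avoidance:
# a warning stimulus precedes shock; a response during the warning
# terminates it and cancels the shock; otherwise the shock occurs and
# an escape response terminates it.

def run_trial(p_respond_during_warning):
    """Return 'avoided' or 'escaped' for one trial."""
    if random.random() < p_respond_during_warning:
        return "avoided"   # response during warning: warning off, no shock
    return "escaped"       # shock occurs; escape response terminates it

def run_session(n_trials, p_early=0.1, learn=0.09):
    """Warning-stimulus termination strengthens responding to the warning,
    so the probability of an avoidance response grows across trials."""
    p = p_early
    outcomes = []
    for _ in range(n_trials):
        outcomes.append(run_trial(p))
        p = min(1.0, p + learn)
    return outcomes

random.seed(0)
session = run_session(30)
# Late in training, trials end as avoidances and few shocks are received.
print(session[-5:])  # prints ['avoided', 'avoided', 'avoided', 'avoided', 'avoided']
```

The point the simulation makes is the one in the text: a well-trained organism responds during most warning-stimulus occurrences and so rarely contacts the shock itself.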

Recall the analysis of the role of the shock as an MO for the escape response, the reinforcement for which is shock termination. The warning stimulus has a similar function, except that its capacity to establish its own termination as an effective form of reinforcement is of ontogenic provenance—as a result of the individual's own history involving the relation of the warning stimulus to the onset of the painful stimulus. In other words, the warning stimulus evokes the so-called avoidance response as a CMO, just as the painful stimulation evokes the escape response as a UMO. In neither case is the relevant stimulus related to the availability of the response consequence, but rather to its reinforcing effectiveness.

In more general terms, any stimulus that systematically precedes the onset of painful stimulation becomes a CMO-R, in that its own offset will function as a reinforcer, and its occurrence will evoke any behavior that has been followed by such reinforcement. This set of functional relations is not limited to painful stimulation as a form of worsening (or even to worsening in general, as will be seen later). It is well known that organisms can learn to terminate stimuli that warn of stimulus changes other than the onset of pain—stimuli that warn of a lowered frequency of food presentation, increased effort, a higher response ratio requirement, longer delays to food, and so forth. Such events have in common some form of worsening, and stimuli related to such events are often called conditioned aversive stimuli without specifying any particular behavioral function.

10The term discriminated arose so that this type of procedure could be distinguished from an avoidance procedure with no programmed exteroceptive stimulus except for the shock itself (also sometimes called avoidance without a warning stimulus).

It may be useful to repeat the argument against such stimuli being considered discriminative stimuli (SDs). A discriminative stimulus is related to the current availability of a type of consequence for a given type of behavior. Availability has two components: (a) An effective consequence (one whose MO is currently in effect) must have followed the response in the presence of the stimulus; and (b) the response must have occurred without the consequence (which would have been effective as a reinforcer if it had been obtained) in the absence of the stimulus. The relation between the warning stimulus and consequence availability does not meet the second component. In the absence of the warning stimulus, there is no effective consequence that could have failed to follow the response in an analog to the extinction responding that occurs in the absence of an SD. The fact that the avoidance response does not turn off the absent warning stimulus is in no sense extinction responding, but rather is behaviorally neutral, like the unavailability of food reinforcement for a food-satiated organism.

Now consider a stimulus that is positively correlated with some form of improvement. Its CMO-R effects occur if the stimulus establishes its own offset as an effective punisher and abates any behavior that has been so punished. The relation is quite plausible, although there seems to have been little directly relevant research.

Human Examples of the CMO-R

The CMO-R plays an important role in identifying a negative aspect of many everyday interactions that might seem free from any deliberate aversiveness by one person toward another. Typically such interactions are interpreted as a sequence of discriminative stimuli, a sequence of opportunities for each participant to provide some form of positive reinforcement to the other person.

Reactions to Mands. Imagine that a stranger asks you where a particular building on campus is located, or asks for the time. (Questions are usually mands for verbal action.) The appropriate response is to give the information quickly or to say that you don't know. Typically the person who asked will smile and thank you for the information. Also, your answer may be reinforced by the knowledge that a fellow human being has been helped. In a sense the question is an opportunity to obtain these reinforcers that were not available before the question. However, the question also begins a brief period that can be considered a warning stimulus, and if a response is not made soon, a form of social worsening will occur. The asker may repeat the question, stating it more clearly or more loudly, and will certainly think you are strange if you do not respond quickly. You too would consider it socially inappropriate for you to provide no answer. Even when no clear threat for nonresponding is implied by the person who asked, our social history under such conditions implies a form of worsening for continued inappropriate behavior. Many such situations probably involve a mixture of the positive and negative components, but in those cases in which answering the question is an inconvenience (e.g., the listener is in a hurry), the asker's thanks is not a strong reinforcer, nor is helping one's fellow human being. The reflexive CMO is probably the main controlling variable.

Complication with Respect to Stimulus Termination

In the typical laboratory avoidance procedure, the response terminates the warning stimulus. In extending this type of analysis to the human situation, it must be recognized that the warning stimulus is not simply the event that initiated the interaction. In the previous example, the reflexive CMO is not the vocal request itself, which is too brief to be actually terminated. It is, instead, the more complex stimulus situation consisting of having been asked and not having made a response during the time when such a response would be appropriate. The termination of that stimulus situation is the reinforcement for the response. Some social interactions with a stimulus—a facial expression, an aggressive posture—are more like the warning stimulus of the animal laboratory that can be terminated by the avoidance response, but most involve the more complex stimulus condition consisting of the request and the following brief period described earlier.

Thanks and You’re Welcome. When a person doessomething for another person that is a kindness of somesort, it is customary to thank the person. What evokesthe thanking response, and what is its reinforcement? Itis clearly evoked by the person’s performing the favoror the kindness. Should performing the favor be consid-ered purely an SD, in the presence of which one can say“thanks” and receive the reinforcement consisting ofthe other person saying “you’re welcome”? In manycases, ordinary courteous remarks may involve a CMO-R component. Consider the following scenario. PersonA has his arms full carrying something out of the build-

11This procedure is often incorrectly referred to as extinguishing the avoid-ance response, but a true avoidance extinction procedure requires responseoccurrence without termination of the warning stimulus.

ing to his car. As he approaches the outer door, PersonB opens the door and holds it open while Person A goesout. Person A then usually smiles and says “Thanks”.The CMO-R component can be illustrated by suppos-ing that Person A just walks out without acknowledgingthe favor. In such circumstances it would not be un-usual for Person B to call out sarcastically, “You’rewelcome!” Someone’s doing a favor for someone elseis a warning stimulus (a CMO-R) that has systemati-cally preceded some form of disapproval if the favor isnot acknowledged in some way.

In applied behavior analysis, the CMO-R is often part of procedures for training or teaching individuals with defective social and verbal repertoires. In language training programs, for example, learners are typically asked questions or given verbal instructions that clearly function as reflexive CMOs, which will be followed by further intense social interaction if they do not respond to them appropriately. Such questions and instructions may well be functioning primarily as reflexive CMOs, rather than SDs related to the possibility of receiving praise or other positive reinforcers. Although it may not be possible to completely eliminate this type of aversiveness, it is important to understand its nature and origin.

Weakening the Effects of Reflexive CMOs

Extinction and two forms of unpairing can weaken the effects of CMO-Rs. Extinction consists of the occurrence of a response without its reinforcement. The reinforcement for the response evoked by the CMO-R is the termination of the warning stimulus. When the response occurs repeatedly without terminating the warning stimulus, and the ultimate worsening occurs when the relevant time period elapses, the response will be weakened as with any extinction procedure.

Two forms of unpairing will also weaken the reflexive CMO relation. One involves the nonoccurrence of the ultimate form of worsening when the warning stimulus is not terminated.11 This type of unpairing weakens the CMO relation by weakening the reinforcement that consists of warning-stimulus termination. Warning-stimulus termination is only reinforcing to the extent that the warning-stimulus-off condition is an improvement over the warning-stimulus-on condition. When warning-stimulus-on is not followed by the ultimate worsening, it becomes no worse than warning-stimulus-off, and the reinforcement for the avoidance response decreases.

The other type of unpairing occurs when the response continues to terminate the warning stimulus, but the ultimate worsening occurs anyway when it would have occurred if the warning stimulus had not been terminated. In this case the warning-stimulus-off condition becomes just as bad as the warning-stimulus-on condition, and again the reinforcement for the avoidance response decreases.
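The logic of extinction and the two unpairings can be condensed into a tiny model. This sketch is invented for illustration, not from the chapter: it treats the reinforcing value of warning-stimulus termination as the difference between how much worsening follows warning-on versus warning-off.

```python
# Toy model (invented for illustration): warning-stimulus termination is
# reinforcing only to the extent that warning-off is an improvement over
# warning-on.

def termination_value(p_worsening_on, p_worsening_off):
    """Reinforcing value of terminating the warning stimulus, as the
    difference in probability of the ultimate worsening."""
    return p_worsening_on - p_worsening_off

# Intact CMO-R: worsening reliably follows the warning, never its absence.
assert termination_value(1.0, 0.0) == 1.0

# First unpairing: the ultimate worsening no longer follows the warning.
# Warning-on becomes no worse than warning-off; reinforcement vanishes.
assert termination_value(0.0, 0.0) == 0.0

# Second unpairing: worsening occurs whether or not the warning was
# terminated. Warning-off becomes just as bad as warning-on; again the
# reinforcement for the avoidance response vanishes.
assert termination_value(1.0, 1.0) == 0.0
```

Either way of making the two conditions equally good (or equally bad) removes the improvement that warning-stimulus termination provides, which is why both unpairings weaken the avoidance response.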

In the usual academic demand situation with some developmentally disabled individuals, typical problem behavior (e.g., tantruming, self-injury, aggressive behavior) is sometimes evoked by the early phases of the demand sequence and reinforced by terminating the early phase and not progressing to the later and possibly more demanding phases. Assuming that the ultimate phases of the demand sequence must occur because of the importance of the relevant repertoire being taught, and assuming that they cannot be made less aversive, then extinction of the problem behavior is the only procedure that is of practical value. This would consist of continuing the demand sequence irrespective of the occurrence of problem behavior. Neither of the unpairing procedures would be effective. The first would not result in any training, and the other would result in the problem behavior occurring as soon as the later phases of training began.

But of course, one should not assume that the ultimate phases of the demand cannot be made less aversive. Increasing instructional effectiveness will result in less failure, more frequent reinforcement, and other general improvements in the demand situation to the point at which it may function as an opportunity for praise, edibles, and so forth, rather than a demand.

Transitive CMO (CMO-T): A Stimulus That Alters the Value of Another Stimulus

CMO-T Description

When an environmental variable is related to the relation between another stimulus and some form of improvement, the presence of that variable functions as a transitive CMO, or CMO-T, to establish the second condition's reinforcing effectiveness and to evoke the behavior that has been followed by that reinforcer. All variables that function as UMOs also function as transitive CMOs for the stimuli that are conditioned reinforcers because of their relation to the relevant unconditioned reinforcer. Consider the simple operant chain described earlier: A food-deprived rat pulls a cord that turns on a buzzer sound. In the presence of the buzzer sound the rat emits some other response that causes the delivery of a food pellet. Food deprivation makes food effective as an unconditioned reinforcer, a relation that requires no learning history. Food deprivation also makes the buzzer sound effective as a conditioned reinforcer, which clearly does require a learning history. Thus, food deprivation is a UMO with respect to the reinforcing effectiveness of food, but a CMO-T with respect to the reinforcing effectiveness of the buzzer sound. In the human situation food deprivation establishes not only food as a reinforcer, but also all of the stimuli that have been related to obtaining food—an attentive server in a restaurant, a menu, the utensils with which one transports the food to the mouth, and so forth.

12This scenario was first described in Michael, 1982. At that time the CMO-T was called an establishing stimulus, or SE.
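The two-link chain just described can be sketched as a toy simulation. Everything here (the class name, the boolean return convention) is illustrative rather than from the text; the point is only that a single condition, food deprivation, gates the reinforcing effectiveness of both the buzzer (as a CMO-T) and the food itself (as a UMO).

```python
# Toy model of the two-link operant chain: cord pull -> buzzer on;
# a response during the buzzer -> food pellet. Food deprivation (the UMO)
# makes food reinforcing and, as a CMO-T, makes buzzer onset reinforcing
# too. All names and conventions here are illustrative, not from the text.

class ChainChamber:
    def __init__(self, food_deprived: bool):
        self.food_deprived = food_deprived
        self.buzzer_on = False
        self.pellets_delivered = 0

    def pull_cord(self) -> bool:
        """Turn the buzzer on. Buzzer onset functions as a conditioned
        reinforcer only while the food-deprivation CMO-T is in effect."""
        self.buzzer_on = True
        return self.food_deprived   # True -> cord pulling was reinforced

    def press_lever(self) -> bool:
        """In the presence of the buzzer, the response produces food."""
        if self.buzzer_on:
            self.buzzer_on = False
            self.pellets_delivered += 1
            return self.food_deprived   # food reinforces only if deprived
        return False

chamber = ChainChamber(food_deprived=True)
assert chamber.pull_cord()    # buzzer onset reinforces cord pulling
assert chamber.press_lever()  # the pellet reinforces the lever press

sated = ChainChamber(food_deprived=False)
assert not sated.pull_cord()     # buzzer still comes on, but is not reinforcing
assert not sated.press_lever()   # pellet still delivered, but not reinforcing
```

The same flag gating both links is the heart of the passage: remove the deprivation and neither the buzzer nor the food functions as a reinforcer, even though both events still occur.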

Understanding transitive CMOs that result from UMOs in this way requires no special knowledge beyond what is needed to understand the effects of UMOs. Also, the evocative effect of such transitive CMOs is not easily confused with the evocative effect of an SD. If one can see food deprivation as an MO (rather than an SD) with respect to the behavior that has been reinforced with food, then its function as an MO (rather than an SD) with respect to behavior that has been reinforced with the various food-related conditioned reinforcers is an easy extension.

The reinforcing effectiveness of many (probably most) conditioned reinforcers is not only altered by relevant UMOs as described previously, but also dependent on other stimulus conditions because of an additional learning history. This notion underlies the fact that conditioned reinforcing effectiveness is often said to be dependent on a "context." When the context is not appropriate, the stimuli may be available but are not accessed because they are not effective as reinforcers in that context. A change to an appropriate context will evoke behavior that has been followed by those stimuli, which are now effective as conditioned reinforcers. The occurrence of the behavior is not related to the availability, but rather to the value, of its consequence. For example, flashlights are usually available in home settings, but are not accessed until a power failure makes them valuable. In this sense the power failure (the sudden darkness) evokes the behavior that has obtained the flashlight in the past (rummaging around in a particular drawer). The motivative nature of this CMO-T relation is not widely recognized, and the evocative variable (the sudden darkness) is usually interpreted as an SD.

Human CMO-T Examples

Consider a workman disassembling a piece of equipment, with his assistant handing him tools as he requests them.12 The workman encounters a slotted screw that must be removed and requests a screwdriver. The sight of the screw evoked the request, the reinforcement for which is receiving the tool. Prior to an analysis in terms of the CMO-T, the sight of the screw would have been considered an SD for the request, but such screws have not been differentially related to the availability of reinforcement for requests for tools. In the typical workman's history, assistants have generally provided requested tools irrespective of the stimulus conditions in which the request occurred. The sight of the screw is more accurately interpreted as a CMO-T for the request, not as an SD.

The fact that several SDs are involved in this complex situation makes the analysis more difficult. The screw is an SD for unscrewing movements (with a screwdriver in hand). The verbal request, although evoked by the sight of the slotted screw as a CMO-T, is dependent on the presence of the assistant as an SD. The offered screwdriver is an SD for reaching movements. The critical issue, however, is the role of the screw in evoking the request, and this is a motivating rather than a discriminative relation.

Another common human example involves a stimulus related to some form of danger that evokes some relevant protective behavior. A night security guard is patrolling an area and hears a suspicious sound. He pushes a button on his phone that signals another security guard, who then activates his own phone and asks if help is needed (which reinforces the first guard's call). The suspicious sound is not an SD in the presence of which the second security guard's response is more available, but rather a CMO-T in the presence of which it is more valuable. SDs are involved, however. A phone's ringing is an SD in the presence of which one has activated the phone, said something into the receiver, and been reinforced by hearing a response from another person. Answering phones that are not ringing has typically not been so reinforced. (Note, incidentally, that the effect of the danger signal is not to evoke behavior that produces its own termination as with the CMO-R, but rather behavior that produces some other event, in this case, the sound of the security guard's colleague offering to help.)

For an animal analog of the CMO-T,13 consider a food-deprived monkey in a chamber with a retractable lever and a chain hanging from the ceiling. Pulling the chain causes the lever to come into the chamber for 5 seconds. If a light (on the wall of the chamber) is on, a lever press delivers a food pellet, but if the light is off, the lever press has no effect. The light comes on and goes off on a random time basis, unrelated to the monkey's behavior. Note that the chain pull will cause the lever to come into the chamber for 5 seconds irrespective of the condition of the light. A well-trained monkey's chain-pulling behavior would be infrequent when the light is off, but evoked by the light onset. The light onset would be the CMO-T in this situation, like the slotted screw or the suspicious sound in the previous examples.

13This animal example was first described in Michael, 1982. The language has been changed to be more in line with current terminology.
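A minimal sketch of the monkey arrangement, with hypothetical names and booleans standing in for the contingencies: the chain pull's consequence (the lever coming in) is equally available whether the light is on or off, so the light alters the value of that consequence rather than its availability.

```python
# Toy model of the monkey chamber described above (illustrative, not an
# experimental protocol): a chain pull brings the retractable lever in
# regardless of the light, but a lever press delivers food only while the
# light is on. The light is therefore a CMO-T, not an SD, for chain pulling.

def chain_pull_outcome(light_on: bool) -> dict:
    """Model one chain pull under a given light condition."""
    return {
        "lever_available": True,      # lever comes in irrespective of the light
        "press_pays_off": light_on,   # food delivery depends on the light
    }

# The lever is equally available in both conditions...
assert chain_pull_outcome(True)["lever_available"] == \
       chain_pull_outcome(False)["lever_available"]
# ...but the light alters the *value* of obtaining it:
assert chain_pull_outcome(True)["press_pays_off"]
assert not chain_pull_outcome(False)["press_pays_off"]
```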

Until recently most behavior analysts interpreted transitive CMOs as SDs. The distinction hinges on the relation between reinforcer availability and the presence or absence of the stimulus. If the reinforcer is more available in the presence than in the absence of the stimulus, the stimulus is an SD; if it is just as available in the absence as in the presence of the stimulus, the stimulus is a CMO-T. Screwdrivers have typically been just as available in the absence as in the presence of screws. The response by the security guard's colleague has been just as available in the absence as in the presence of a suspicious noise. The retractable lever was just as available to the monkey (for pulling the chain) in the absence as in the presence of the light.
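The availability test in this paragraph reduces to a single comparison, which can be written as a small decision helper. This is an illustrative sketch, not a standard procedure; the function name and the numeric "availability" inputs are assumptions.

```python
# A minimal decision helper for the SD vs. CMO-T distinction described
# above (illustrative names and inputs): compare how available the
# response's reinforcer has been in the presence vs. the absence of the
# antecedent stimulus.

def classify_antecedent(avail_with_stimulus: float,
                        avail_without_stimulus: float) -> str:
    """Return 'SD' when reinforcement has been differentially available
    in the stimulus's presence; 'CMO-T' when it has been equally available
    and the stimulus instead alters the reinforcer's value."""
    if avail_with_stimulus > avail_without_stimulus:
        return "SD"
    return "CMO-T"

# Screwdrivers are as obtainable without a slotted screw in view as with one:
assert classify_antecedent(1.0, 1.0) == "CMO-T"
# Answering a phone pays off only when it is ringing:
assert classify_antecedent(1.0, 0.0) == "SD"
```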

Weakening the Effects of Transitive CMOs

The evocative effect of the CMO-T can be temporarily weakened by weakening the MO related to the ultimate outcome of the sequence of behaviors. Consider the example of the workman requesting a screwdriver and also the example of the monkey pulling the chain to cause the lever to come into the chamber. Temporary weakening of the CMO relation could be accomplished by eliminating the reason for doing the work—the workman is told that the equipment does not have to be disassembled, and the monkey is fed a large amount of food prior to being placed in the experimental chamber. Of course, the next time the workman saw a screw that had to be unscrewed, he would again ask his assistant for a screwdriver. When the monkey was again food deprived in the chamber and the light came on, the chain-pulling response would occur.

More permanent weakening can be accomplished by an extinction procedure and by two kinds of unpairing. To extinguish the request that is evoked by the slotted screw, something in the environment would have to change so that such requests are no longer honored (e.g., assistants now believe that workmen should get their tools themselves). In the monkey example, the chain pull would no longer cause the lever to come into the chamber. One type of unpairing would be illustrated if screwdrivers no longer worked to unscrew slotted screws because, say, all such screws were now welded. The workman can still obtain a screwdriver by asking, but the value of the screwdriver is ultimately lost because such a tool wouldn't work anymore. In the monkey scenario, the monkey can still cause the lever to come into the chamber by pulling the chain, but pressing the lever does not deliver food. A second kind of unpairing would be illustrated if construction practices changed so that slotted screws could now be easily unscrewed by hand as well as with a screwdriver, or if the lever press delivers food when the light is off as well as when it is on.

Importance of the CMO-T for Language Training

It is increasingly being recognized that mand training is an essential aspect of language programs for individuals with severely defective verbal repertoires. For such individuals, manding does not spontaneously arise from tact and receptive language training. The learner has to want something, make an appropriate verbal response, and be reinforced by receiving what was wanted. With this procedure the response comes under the control of the relevant MO. UMOs can be taken advantage of to teach mands for unconditioned reinforcers, but this is a relatively small repertoire. Use of a CMO-T, however, is a way to make the learner want anything that can be a means to another end. Any stimulus, object, or event can be a basis for a mand simply by arranging an environment in which that stimulus can function as a conditioned reinforcer. Thus, if a pencil mark on a piece of paper is required for an opportunity to play with a favored toy, a mand for a pencil and for a piece of paper can be taught.

Practical Implications of the CMO-T in General

A CMO-T is a stimulus onset that evokes behavior because of its relation to the value of a consequence rather than to the availability of a consequence. This distinction must be relevant in subtle ways to the effective understanding and manipulation of behavioral variables for a variety of practical purposes. Two forms of behavioral control, the SD and the CMO-T, which are so different in origin, would be expected to differ in other important ways. This issue is an example of terminological refinement, not a discovery of any new empirical relations. The value of this refinement, should it have value, will be found in the improved theoretical and practical effectiveness of those whose verbal behavior has been affected by it.

General Implications of Motivating Operations for Behavior Analysis

Behavior analysis makes extensive use of the three-term contingency relation involving stimulus, response, and consequence. However, the reinforcing or punishing effectiveness of the consequence in developing control by the stimulus depends on an MO, and the future effectiveness of the stimulus in evoking the response depends on the presence of the same MO in that future condition. In other words, the three-term contingency cannot be fully understood, or most effectively used for practical purposes, without a thorough understanding of motivating operations.
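The point can be sketched as a minimal data structure (all names are illustrative): the familiar three terms predict behavior only when the relevant MO is also specified, making the analysis effectively four-term.

```python
from dataclasses import dataclass

# A minimal data sketch (illustrative, not a formal model): the familiar
# three-term contingency only predicts responding when the relevant
# motivating operation is also in effect.

@dataclass
class Contingency:
    mo: str           # motivating operation, e.g. "food deprivation"
    sd: str           # discriminative stimulus, e.g. "buzzer on"
    response: str     # e.g. "lever press"
    consequence: str  # e.g. "food pellet"

    def evocative(self, mo_present: bool, sd_present: bool) -> bool:
        """The SD evokes the response only while its MO is in effect."""
        return mo_present and sd_present

c = Contingency("food deprivation", "buzzer on", "lever press", "food pellet")
assert c.evocative(mo_present=True, sd_present=True)
assert not c.evocative(mo_present=False, sd_present=True)  # sated: SD loses control
```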

Summary


Definition and Characteristics of Motivating Operations

1. A motivating operation (MO) (a) alters the effectiveness of some stimulus as a reinforcer, the value-altering effect; and (b) alters the current frequency of all behavior that has been reinforced by that stimulus, the behavior-altering effect.

2. The value-altering effect is either (a) an increase in the reinforcing effectiveness of some stimulus, in which case the MO is an establishing operation (EO); or (b) a decrease in reinforcing effectiveness, in which case the MO is an abolishing operation (AO).

3. The behavior-altering effect is either (a) an increase in the current frequency of behavior that has been reinforced by some stimulus, called an evocative effect; or (b) a decrease in the current frequency of behavior that has been reinforced by some stimulus, called an abative effect.

4. The alteration in frequency can be (a) the direct evocative or abative effect of the MO on response frequency and/or (b) the indirect effect on the evocative or abative strength of relevant discriminative stimuli (SDs).

5. In addition to frequency, other aspects of behavior such as response magnitude, latency, and relative frequency can be altered by an MO.

6. It is not correct to interpret the behavior-altering effect of an MO as due to the organism's encountering a more or less effective form of reinforcement; a strong relation exists between MO level and responding when no reinforcers are being received.

7. Behavior-altering versus function-altering effects: MOs and SDs are antecedent variables that have behavior-altering effects. Reinforcers, punishers, or the occurrence of a response without its reinforcer (extinction procedure) or without its punisher (recovery from punishment procedure) are consequences that change the organism's repertoire so that it behaves differently in the future. SDs and MOs alter the current frequency of behavior, but reinforcers, punishers, and response occurrence without consequence alter the future frequency of behavior.

A Critical Distinction: Motivative versus Discriminative Relations

8. An SD controls a type of behavior because it has been related to the differential availability of an effective reinforcer for that type of behavior. This means that the relevant consequence has been available in the presence of, and unavailable in the absence of, the stimulus. Most variables that qualify as motivating operations fail to meet this second SD requirement because in the absence of the variable, there is no MO for the relevant reinforcer, and thus no reinforcer unavailability.

9. A useful contrast: SDs are related to the differential availability of a currently effective form of reinforcement for a particular type of behavior; MOs are related to the differential reinforcing effectiveness of a particular type of environmental event.

Unconditioned Motivating Operations (UMOs)

10. The main UMOs for humans are those related to deprivation and satiation with respect to food, water, oxygen, activity, and sleep; and those related to sexual reinforcement, comfortable temperature conditions, and painful stimulation. For each variable, there are two MOs, one an establishing operation (EO) and one an abolishing operation (AO). Also, each variable has an evocative effect and an abative effect. Thus food deprivation is an EO and has evocative effects on relevant behavior, and food ingestion is an AO and has abative effects on relevant behavior.

11. The cognitive interpretation of behavior-altering effects is that the person understands (i.e., can verbally describe) the situation and then behaves appropriately as a result of that understanding. But in fact, reinforcement automatically adds the reinforced behavior to the repertoire that will be evoked and abated by the relevant UMO; the person does not have to "understand" anything for an MO to have its effects. Two different kinds of ineffectiveness can result from this misinterpretation. There may be insufficient effort to train individuals who have very limited verbal repertoires, and inadequate preparation for an increase in problem behavior that preceded reinforcement.

12. The role of stimulus conditions in the generality of training effects is well known, but unless the MO for the reinforcers that were used in training is also in effect, the trained behavior will not occur in the new conditions.

13. Temporary weakening of EO effects can be accomplished by the relevant AO and abative operations. For example, undesirable behavior based on food deprivation can be abated by food ingestion, but the behavior will come back when deprivation is again in effect. More permanent weakening of behavior-altering effects can be accomplished by an extinction procedure (i.e., letting undesired behavior evoked by the MO occur without reinforcement).

14. A variable that alters the punishing effectiveness of a stimulus, object, or event, and alters the frequency of the behavior that has been so punished, is an MO for punishment. If the value-altering effect does not depend on a learning history, then the variable is a UMO. An increase in pain will function as punishment as long as the current level is not so high that an increase cannot occur. If a stimulus is a punisher because of its relation to a reduced availability of a reinforcer, then the MO for that reinforcer is the MO for the punisher. Thus the MO for food removal as a punisher is food deprivation.

15. Social disapproval, time-out from reinforcement, and response cost are stimulus conditions that usually function as punishment because they are related to a reduction in the availability of some kinds of reinforcers. The MOs for those forms of punishment are the MOs for the reinforcers that are being made less available.

16. Multiple effects: Environmental events that function as UMOs will typically have behavior-altering effects on the current frequency of a type of behavior, and (as consequences) function-altering effects with respect to the future frequency of whatever behavior immediately preceded the onset of the event.

17. A behavioral intervention is often chosen because of (a) its MO behavior-altering effect, or (b) its repertoire-altering effect (as a reinforcer or a punisher). But irrespective of the purpose of the intervention, an effect in the opposite direction from the targeted one will also occur, and should be planned for.

Conditioned Motivating Operations (CMOs)

18. Motivating variables that alter the reinforcing effectiveness of other stimuli, objects, or events, but only as a result of the organism's learning history, are called conditioned motivating operations (CMOs). As with the UMOs, they also alter the momentary frequency of all behavior that has been reinforced (or punished) by those other events.

19. The surrogate CMO (CMO-S) is a stimulus that acquires its MO effectiveness by being paired with another MO, and has the same value-altering and behavior-altering effects as the MO with which it was paired.

20. A stimulus that acquires MO effectiveness by preceding some form of worsening or improvement is called a reflexive CMO (CMO-R). It is exemplified by the warning stimulus in a typical escape–avoidance procedure, which establishes its own offset as reinforcement and evokes all behavior that has accomplished that offset.

21. In evoking the avoidance response, the CMO-R has usually been interpreted as an SD. The CMO-R fails to qualify as an SD, however, because in its absence there is no MO for a reinforcer that could be unavailable and thus no reinforcer unavailability. It clearly qualifies as an MO and, because its MO characteristics depend on its learning history, as a CMO.

22. The CMO-R identifies a negative aspect of many everyday interactions that otherwise would be interpreted as a sequence of opportunities for positive reinforcement. One example is a request for information, which initiates a brief period during which a response must be made to terminate a period of increasing social awkwardness.

23. The CMO-R is often an unrecognized component of procedures used in teaching effective social and verbal behavior. Learners are asked questions or given instructions, which are followed by further intense social interaction if they do not respond to them appropriately. The question or instruction may be functioning more as a warning stimulus, as a CMO-R, than as an SD related to an opportunity for receiving praise or other positive reinforcers.

24. The CMO-R can be weakened by extinction if the response occurs without terminating the warning stimulus (e.g., continuing the demand sequence irrespective of the occurrence of problem behavior), or by two kinds of unpairing (the ultimate worsening fails to occur or occurs irrespective of the avoidance response).

25. An environmental variable that establishes (or abolishes) the reinforcing effectiveness of another stimulus and evokes (or abates) the behavior that has been reinforced by that other stimulus is a transitive CMO, or CMO-T.

26. Variables that function as UMOs also function as transitive CMOs for the stimuli that are conditioned reinforcers because of their relation to the unconditioned reinforcer. Food deprivation (as a UMO) establishes as a reinforcer not only food but also (as a CMO-T) all of the stimuli that have been related to obtaining food (e.g., the utensils with which one transports food to the mouth).

27. The reinforcing effectiveness of many conditioned reinforcers is not only altered by relevant UMOs, but also may be dependent on other stimulus conditions because of an additional learning history. Those stimulus conditions, as CMO-Ts, then also evoke behavior that has obtained the conditioned reinforcers. The occurrence of the behavior is not related to the availability, but rather to the value, of its consequence.

28. A model for the CMO-T is a feature of the environment that must be manipulated with a tool—evoking behavior that obtains that tool—for example, requesting it of another person. Another human example is a stimulus related to some form of danger evoking the relevant protective behavior.

29. The evocative effect of the CMO-T can be temporarily weakened by weakening the MO related to the ultimate outcome of the sequence of behaviors (e.g., the job related to the request for a tool is no longer necessary). More permanent weakening can be accomplished by an extinction procedure (e.g., requests for tools are no longer honored) and by two kinds of unpairing (e.g., the tool no longer accomplishes the task, or the task can be accomplished without the tool).

30. The CMO-T is especially valuable in language programs for teaching mands. It is a way to make the learner want anything that can be a means to another end, and then to reinforce an appropriate mand with the object that is wanted.

General Implications of Motivating Operations for Behavior Analysis

31. The three-term contingency of stimulus, response, and consequence is an essential component of behavior analysis, but this relation cannot be fully understood, or most effectively used, without a thorough understanding of motivating operations.


Stimulus Control

Key Terms

antecedent stimulus class
arbitrary stimulus class
concept formation
discriminative stimulus (SD)
feature stimulus class
matching-to-sample
reflexivity
stimulus control
stimulus delta (SΔ)
stimulus discrimination training
stimulus equivalence
stimulus generalization
stimulus generalization gradient
symmetry
transitivity

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List,© Third Edition

Content Area 3: Principles, Processes and Concepts

3-2 Define and provide examples of stimulus and stimulus class.

3-7 Define and provide examples of stimulus control.

3-8 Define and provide examples of establishing operations.

3-12 Define and provide examples of generalization and discrimination.

Content Area 9: Behavior Change Procedures

9-7 Use discrimination training procedures.

9-8 Use prompt and prompt fading.

9-21 Use stimulus equivalence procedures.

© 2006 The Behavior Analyst Certification Board, Inc.,® (BACB®) all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

From Chapter 17 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


Reinforcement of an operant response increases the frequency of future responding and influences the stimuli that immediately precede the response. The stimuli preceding the response (i.e., antecedent stimuli) acquire an evocative effect on the relevant behavior. In a typical laboratory demonstration of operant conditioning, a rat is placed in an experimental chamber and given the opportunity to press a lever. Contingent on a lever press the rat receives a food pellet. Reinforcement of the lever press increases the frequency of lever pressing.

Researchers can make this simple demonstration more complex by manipulating other variables. For example, occasionally a buzzer might sound, and the rat receives a food pellet only when the lever is pressed in the presence of the buzzer. The buzzer sound preceding the lever press is called a discriminative stimulus (SD, pronounced "ess-dee"). With some experience, the rat will make more lever presses in the presence of the buzzer sound (SD) than in its absence, a condition called stimulus delta (SΔ, pronounced "ess-delta"). Behavior that occurs more often in the presence of an SD than in its absence is under stimulus control. Technically, stimulus control occurs when the rate, latency, duration, or amplitude of a response is altered in the presence of an antecedent stimulus (Dinsmoor, 1995a, b). A stimulus acquires control only when responses emitted in the presence of that stimulus produce reinforcement more often than responses in the absence of that stimulus.
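The discrimination arrangement just described can be sketched as a toy simulation. All probabilities and learning increments here are invented for illustration: presses are reinforced only during the buzzer (SD), never in its absence (SΔ), and the two response probabilities drift apart accordingly.

```python
import random

# Toy simulation of discrimination training (illustrative numbers, not
# a model from the text): lever presses are reinforced in the presence
# of the buzzer (SD) and never in its absence (S-delta). Reinforced
# responding becomes more probable; unreinforced responding extinguishes,
# so the rat ends up pressing mostly when the buzzer sounds.

random.seed(0)
p_press = {"SD": 0.5, "S_delta": 0.5}   # initial response probabilities

for _ in range(2000):
    condition = random.choice(["SD", "S_delta"])
    if random.random() < p_press[condition]:      # the rat presses
        if condition == "SD":                     # press is reinforced
            p_press[condition] = min(1.0, p_press[condition] + 0.01)
        else:                                     # press goes unreinforced
            p_press[condition] = max(0.0, p_press[condition] - 0.01)

# Differential reinforcement yields differential responding:
assert p_press["SD"] > 0.8
assert p_press["S_delta"] < 0.2
```

The final asymmetry between the two probabilities is the toy analog of stimulus control: responding altered by the antecedent condition because reinforcement was differentially available in it.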

Stimulus control should not be viewed as just an interesting procedure for laboratory demonstrations. Stimulus control plays a fundamental role in everyday complex behaviors (e.g., language systems, conceptual behavior, problem solving), education, and treatment (Shahan & Chase, 2002; Stromer, 2000). People do not answer the telephone in the absence of a ring. A person driving a car stops the car more often in the presence of a red traffic light than in its absence. People who use both Spanish and English will likely use Spanish, not English, to communicate with a Spanish-speaking audience.

Behaviors considered inappropriate in one context are accepted as appropriate when emitted in another context. For example, teachers will accept loud talking as appropriate on the playground, but not in the classroom. Arriving 15 to 20 minutes late is appropriate for a party, but not for a job interview. Some behaviors that parents, teachers, and society in general may call inappropriate are not behavior problems per se. The problem is emitting behaviors at a time or in a place or circumstance that is deemed inappropriate by others. This represents a problem of stimulus control, and it is a major concern for applied behavior analysts. This chapter addresses the factors related to the development of stimulus control.

Antecedent Stimuli

Stimulus control of an operant response appears similar to the control of respondent behavior by a conditioned stimulus. The SD and the conditioned stimulus are antecedent stimuli that evoke the occurrence of behavior. Applied behavior analysts, however, need to distinguish between the function of an SD for operant behavior and a conditioned stimulus for respondent conditioning; this is a crucial distinction for understanding environmental control of operant behavior. In a typical laboratory demonstration of respondent conditioning, the experimenter presents food to a dog. The food functions as an unconditioned stimulus to elicit the unconditioned response—salivation. The experimenter then introduces a buzzer sound (a neutral stimulus). The buzzer sound does not elicit salivation. Then after several occurrences of pairing the buzzer sound with food delivery, the buzzer sound becomes a conditioned stimulus that will elicit salivation (a conditioned response) in the absence of food (an unconditioned stimulus).

Laboratory experiments on operant and respondent conditioning have demonstrated consistently that antecedent stimuli can acquire control over a behavior. A buzzer sounds, and a rat presses a bar. A buzzer sounds, and a dog salivates. Despite the similarities, the bar press is an operant behavior, salivation is a respondent behavior, and the manner in which the SD and the conditioned stimulus acquire their controlling functions is very different. An SD acquires its controlling function through association with stimulus changes that occur immediately following behavior. Conversely, a conditioned stimulus acquires its controlling function through association with other antecedent stimuli that elicit the behavior (i.e., an unconditioned stimulus or conditioned stimulus).

The environment contains many forms of energy that a person can perceive. Evolutionary adaptation to the environment has provided organisms with anatomical structures (i.e., organ receptors) that detect these forms of energy. For example, the eye detects electromagnetic radiation; the ear, air-pressure vibrations; the tongue and nose, chemical energies; skin receptors, mechanical pressure and thermal changes (Michael, 1993).

Applied behavior analysts use the physical properties of a stimulus to investigate its effect on behavior. The physical energy, however, must relate to the sensory capabilities of the organism. For instance, ultraviolet radiation is physical energy, but ultraviolet radiation does not function as a stimulus for humans because that energy does not result in any relationship with operant behavior. Ultraviolet radiation would function as a stimulus for humans in the presence of a special device that detects the radiation. A dog whistle functions as a stimulus for dogs but not for humans. Dogs can hear the air pressure vibrations of the whistle, but humans cannot. Any form of physical energy capable of detection by the organism can function as a discriminative stimulus.

Discriminative and Motivational Functions of Stimuli

Discriminative stimuli and establishing operations share two important similarities: (a) both events occur before the behavior of interest, and (b) both events have evocative functions. To evoke behavior means to occasion behavior by calling it up or producing it. Distinguishing the nature of the antecedent control is often difficult. Was the behavior evoked by an SD, an establishing operation (EO), or both?

In some situations, an antecedent stimulus change alters the frequency of a response and appears to have an SD effect. For example, in a typical shock–escape procedure an animal is placed in an experimental chamber. Shock is administered until a response removes the shock for a designated period of time. Then the shock is reintroduced until it is again terminated with a response, and so on. An experienced animal removes the shock immediately. In such a situation some would say that the shock serves as an SD. The shock, an antecedent stimulus, evokes a response that is negatively reinforced (the shock is removed). In this situation, however, shock does not function as an SD. A response in the presence of an SD must produce more frequent reinforcement than it does in its absence. Even though the animal receives reinforcement by removing the shock, the absence of shock does not constitute a state of lower frequency reinforcement. Before the response can be reinforced, the shock must be on. Shock in this example is functioning as an EO because it changes what functions as reinforcement rather than the availability of reinforcement (Michael, 2000). Often the apparent SD effect does not have a history of effective differential reinforcement correlated with the altered frequency of response. These situations are probably related to motivating operations (MOs) rather than stimulus control.

The following scenario puts Michael's laboratory example into an applied context: A teacher requests a response from a student. The student emits aggressive behaviors immediately following the request. The aggressive behaviors remove the request. Later, the teacher again requests a response, and the cycle of aggressive behaviors and removal of the request continues. As in the laboratory example, some would say that the requests serve as an SD. The request, an antecedent stimulus, evokes aggression that is negatively reinforced (the request is removed). In this applied example, the request is an EO that evokes the aggressive behaviors rather than an SD. It makes no sense to talk about an SD evoking the aggressive behavior in the absence of the request (McGill, 1999), just as it makes no sense to talk about an SD evoking a response in the absence of the shock. The organism does not "want" to escape in the absence of the shock/demand (EO) situation. An antecedent stimulus functions as an SD only when in its presence a specific response or set of responses produces reinforcement, and the same response does not produce reinforcement in the absence of that stimulus (Michael, 2000).

The laboratory and applied examples can be changed to show the difference between the evocative functions of MOs and stimulus control. The experimental conditions could change so that a buzzer sounds at different periods of time throughout the session, the shock would be removed only when a response is made while the buzzer is sounding, and a response would not produce reinforcement in the absence of the buzzer (i.e., the shock would not be removed). Under these conditions the buzzer would function as an SD, and stimulus control would be demonstrated. The applied conditions could change so that two teachers work with the student. One teacher allows the student's aggressive behavior to remove requests. The other teacher does not remove the requests. The different procedures produce effective negative reinforcement in the presence of one teacher but not in the presence of the other. The teacher who allows the removal of the request would become an SD that evokes aggressive behaviors from the student. In these modified examples, the characteristics of the antecedent control would be different in the presence and absence of the buzzer and the other teacher, and the buzzer and the other teacher would be correlated with an increased frequency of reinforcement.

An understanding of the SD and MO evocative functions will improve the technological descriptions of antecedent control and the understanding of behavior change (Laraway, Snycerski, Michael, & Poling, 2001). Ultimately, these understandings will produce greater effectiveness in education and treatment.

Stimulus Control

Stimulus Generalization

When an antecedent stimulus has a history of evoking a response that has been reinforced in its presence, there is a general tendency for similar stimuli to also evoke that response. This evocative function occurs with stimuli that share similar physical properties with the controlling antecedent stimulus. This tendency is called stimulus generalization. Conversely, stimulus discrimination occurs when different stimuli do not evoke the response. Different degrees of stimulus control produce the defining characteristics of stimulus generalization and discrimination. Stimulus generalization and discrimination are relative relations. Stimulus generalization reflects a loose degree of stimulus control, whereas discrimination has a relatively tight degree of control. In a simple everyday situation, stimulus generalization can be observed when a young child who has learned to say "daddy" in the presence of her father says "daddy" in the presence of a neighbor, a clerk in a store, or Uncle Joe. Further conditioning will sharpen the degree of stimulus control to one specific stimulus, the child's father.

Stimulus generalization occurs with new stimuli that share similar physical dimensions with the controlling antecedent stimulus. For instance, if the response has a history of producing reinforcement in the presence of a blue stimulus, stimulus generalization is more likely with a lighter or darker color of blue than with a red or yellow stimulus. Also, stimulus generalization is likely when the new stimuli have other elements (e.g., size, shape) in common with the controlling stimulus. A student whose behavior has produced reinforcement for making a response to a circle is more likely to make the same response to an oval shape than to a triangular shape.

A stimulus generalization gradient graphically depicts the degree of stimulus generalization and discrimination by showing the extent to which responses reinforced in one stimulus condition are emitted in the presence of untrained stimuli. When the slope of the gradient is relatively flat, little stimulus control is evident. However, an increasing slope of the gradient shows more stimulus control.
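The calculation behind a gradient can be sketched in a few lines of Python. The response counts below are invented for illustration (loosely patterned on a wavelength procedure), not data from any study:

```python
# A minimal sketch of how a stimulus generalization gradient is computed:
# responses emitted at each test stimulus during extinction testing,
# expressed as a percentage of total responses. A sharp peak at the
# trained value suggests tight stimulus control; a flat profile, loose
# control. All numbers here are hypothetical.

def generalization_gradient(responses_by_stimulus):
    """Map each test stimulus to its percentage of total responses."""
    total = sum(responses_by_stimulus.values())
    return {stim: 100 * count / total
            for stim, count in responses_by_stimulus.items()}

# Hypothetical extinction-test data: wavelength (nm) -> response count,
# with training at 550.
test_data = {490: 5, 510: 20, 530: 70, 550: 120, 570: 65, 590: 15, 610: 5}

gradient = generalization_gradient(test_data)
peak = max(gradient, key=gradient.get)  # stimulus controlling the most responding
```

The peak of the computed gradient falls on the trained stimulus, and the steepness of the drop-off on either side is the quantitative sense in which discrimination is "tighter" or "looser."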

Behavior analysts have used several procedures to produce stimulus generalization gradients. The classic technique of Guttman and Kalish (1956) provides a representative example. Their technique is important because many prior researchers had obtained stimulus generalization gradients by conditioning groups of subjects on the same stimulus value and then testing them individually, each with a different stimulus value. Obviously, this type of technique cannot demonstrate the degree of stimulus control for individual subjects. Guttman and Kalish provided a method of acquiring gradients for each subject and laid the foundation for greater understanding of the principles governing stimulus control.

Guttman and Kalish reinforced pigeons on a VI 1-minute schedule for pecking a disk illuminated with a light source appearing yellow-green to humans (i.e., a wavelength of 550 mμ). After the disk peck had stabilized, the pigeons were tested under extinction conditions on the original stimulus and a randomized series of 11 different wavelengths never presented during training as the test for stimulus generalization.

Stimulus generalization occurs with responses to a new stimulus after a response has been conditioned in the presence of another similar stimulus. If responding produces reinforcement during testing for stimulus generalization, it is unclear whether responses to a new stimulus after the first response represent generalization or are a function of the reinforcement schedule. Guttman and Kalish avoided this confound by testing for generalization during extinction.

Lalli, Mace, Livezey, and Kates (1998) reported an excellent applied example of assessing and displaying a stimulus generalization gradient. They used a stimulus generalization gradient in an assessment of the relation between the physical proximity of an adult and the self-injurious behavior (SIB) of a 10-year-old girl with severe mental retardation. The results presented in Figure 1 show that the percentage of total SIB across sessions became progressively lower as the distance between the therapist and the girl increased.

Figure 1 Stimulus generalization gradient of self-injurious behavior: the percentage of total responses across sessions at a given distance during generalization tests. The x-axis shows distance in meters; <0.5, 1.5, 3.0, 4.5, 6.0, 7.5, and 9.0 refer to the distance between the therapist and the participant. From "Assessment of Stimulus Generalization Gradients in the Treatment of Self-Injurious Behavior" by J. S. Lalli, F. C. Mace, K. Livezey, and K. Kates (1998), Journal of Applied Behavior Analysis, 31, p. 481. Copyright 1998 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.

Development of Stimulus Control

Stimulus Discrimination Training

The conventional procedure for stimulus discrimination training requires one behavior and two antecedent stimulus conditions. Responses are reinforced in the presence of one stimulus condition, the SD, but responses are not reinforced in the presence of the other stimulus, the SΔ. When a teacher applies this training procedure appropriately and consistently, responding in the presence of the SD will come to exceed responding in the presence of the SΔ. Often, over time, the participant will learn not to respond in the presence of the SΔ.

Applied behavior analysts often describe the conventional procedures for discrimination training with differential reinforcement as alternating conditions of reinforcement and extinction, meaning that a response produces reinforcement in the SD condition but not in the SΔ condition. To clarify and stress an important point, however, the SΔ is used not only to show a condition of zero reinforcement (extinction) but also to denote a condition that provides a lesser amount or quality of reinforcement than the SD condition (Michael, 1993).
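As a rough sketch, the SD/SΔ contingency could be modeled as follows; the function name and magnitudes are hypothetical, and the optional nonzero SΔ value reflects Michael's point that the SΔ may signal lesser, not necessarily zero, reinforcement:

```python
# A hypothetical model of the conventional discrimination training
# contingency: one response class, two antecedent conditions. A response
# in the SD condition produces reinforcement; the same response in the
# S-delta condition produces none (extinction) or a lesser amount.

def consequence(condition, responded, sd_magnitude=1.0, s_delta_magnitude=0.0):
    """Return the amount of reinforcement a response produces."""
    if not responded:
        return 0.0  # no response, no programmed consequence
    return sd_magnitude if condition == "SD" else s_delta_magnitude

# Strict reinforcement/extinction alternation:
full = consequence("SD", True)          # reinforced in the SD condition
none = consequence("S-delta", True)     # extinction in the S-delta condition

# S-delta as a lesser amount of reinforcement rather than extinction:
lesser = consequence("S-delta", True, s_delta_magnitude=0.25)
```

The two SΔ variants make the definitional point explicit: discrimination develops whenever responding in the SD condition produces differentially *more* reinforcement, whether the SΔ value is zero or merely smaller.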

Maglieri, DeLeon, Rodriguez-Catter, and Sevin (2000) used discrimination training as part of an intervention to decrease the food stealing of a 14-year-old girl with Prader-Willi syndrome, a serious medical condition usually correlated with obesity and food stealing. During discrimination training, a teacher showed the girl two containers of cookies. One container had a warning label, the SΔ, and the other container had no warning label, the SD. The teacher told the girl that she could eat cookies only from the container without the warning label. In the presence of the two containers, the teacher asked the girl, "Which cookies can you eat?" If the girl answered that she could eat the cookies from the container without the warning, the teacher let her eat one cookie from that container. This discrimination training procedure decreased stealing food from the containers marked with the warning label.

Concept Formation

The preceding section on discrimination training describes how an antecedent stimulus can acquire control over a response, meaning that the behavior occurs more frequently in the presence of the stimulus than in its absence. A discrimination training procedure might be used to teach a preschool student to name the primary colors. For instance, to teach the color red, the teacher could use a red object such as a red ball as the SD condition and a nonred object such as a yellow ball as the SΔ condition. The teacher could position both balls randomly in front of the student, direct the student to name and point to the red ball, and reinforce correct responses, but not incorrect responses. After a few trials the red ball would acquire stimulus control over the student's response, and the student would reliably differentiate the red ball from the yellow ball. This simple discrimination training, however, may not sufficiently meet the instructional objective of identifying the color red. The teacher may want the student to learn not only to discriminate between red balls and balls of different color, but also the concept of redness.

Terms such as concept formation or concept acquisition imply for many people some hypothetical construct of a mental process. Yet acquiring a concept is clearly dependent on responses in the presence of antecedent stimuli and the consequences following those responses. Concept formation is a behavioral outcome of stimulus generalization and discrimination (Keller & Schoenfeld, 1950/1995). Concept formation is a complex example of stimulus control that requires both stimulus generalization within a class of stimuli and discrimination between classes of stimuli. An antecedent stimulus class is a set of stimuli that share a common relationship. All of the stimuli in an antecedent stimulus class will evoke the same operant response class, or elicit the same respondent behavior. This evocative or eliciting function is the only common property among the stimuli in the class (Cuvo, 2000). For example, consider a stimulus class for the concept red. A red object is called red because of a particular conditioning history. This conditioning history of differential reinforcement will evoke the response red to light waves of different wavelengths, from light red to dark red. These different shades of the color red that evoke the response red share a conditioning history and are included in the same stimulus class. Colors that do not evoke the response red are not members of that stimulus class. Therefore the concept of redness requires stimulus generalization from the trained stimuli to many other stimuli within the stimulus class. If the preschool student described earlier had acquired the concept of redness, he would be able to identify the red ball and, without specific training or reinforcement, choose a red balloon, a red toy car, a red pencil, and so on.

In addition to stimulus generalization, a concept requires discrimination between members and nonmembers of the stimulus class. For example, the concept of redness requires discriminating between red and other colors and irrelevant stimulus dimensions such as shape or size. The concept begins with discrimination between the red ball and the yellow ball but results in discriminating a red dress from a blue dress, a red toy car from a white toy car, and a red pencil from a black pencil.

Discrimination training is fundamental to teaching conceptual behavior. Antecedent stimuli representative of a group of stimuli sharing a common relationship (i.e., the stimulus class) and antecedent stimuli from other stimulus classes must be presented. Before a concept can be acquired, the teacher must present exemplars of what the concept is (i.e., the SD condition) and what the concept is not (i.e., the SΔ condition). This approach holds true for all conceptual development, even for highly abstract concepts (e.g., honesty, patriotism, justice, freedom, sharing). It is also possible to acquire a concept through vicarious discrimination training and differential reinforcement. A verbal definition of a concept, with examples and nonexamples of the concept, may be sufficient for concept formation without additional direct training.

Authors of children’s literature often teach conceptsvicariously, such as good and bad, honest and dishonest,courageous and cowardly. For example, consider the storyof an owner of a mom-and-pop grocery store who wantedto hire a young person to work in the store. The job in-cluded sweeping the floor, bagging groceries, and keep-ing the shelves neat. The owner wanted an honest personto work for him, so he decided to test all applicants tosee whether they were honest. The first young personwho applied was given the opportunity to try the job be-fore the owner made the commitment to hire him. Butbefore the applicant came to work, the owner hid a dol-lar bill where he knew the young person would find it. Atthe end of the test period the owner asked the applicanthow he had liked working in the store, whether he wantedthe job, and whether anything surprising or unusual hadhappened to him. The applicant replied that he wantedthe job and that nothing surprising had happened to him.The grocer told the first applicant that he wanted to con-sider others who had applied also. The second applicantworked a test period with the same results as the first per-son. He did not get the job. The third young person towork for the grocer was sweeping the floor, found thedollar bill, and took it immediately to the grocer. Thethird applicant said she turned in the dollar bill in case oneof the customers or the grocer had dropped it. The gro-cer asked the applicant whether she liked the job andwanted to work for him. The young person replied thatshe did. The grocer told the applicant that she had the jobbecause she was an honest person. The grocer also lether keep the dollar bill.

The previous children's story presents exemplars of honest and dishonest behavior. The honest behavior was rewarded (i.e., the honest person obtained the job), and the dishonest behavior was not rewarded (i.e., the first two applicants did not get the job). This story might vicariously teach a certain concept of honesty.

Stimuli comprising a class may function within feature stimulus classes and arbitrary stimulus classes (McIlvane, Dube, Green, & Serna, 1993). Stimuli in a feature stimulus class share common physical forms (e.g., topographical structures) or common relative relations (e.g., spatial arrangements). Feature stimulus classes include an infinite number of stimuli and comprise a large portion of our conceptual behavior. For instance, a concept of dog is based on a feature stimulus class. The common physical forms of all dogs will be members of that stimulus class. A young child, through differential reinforcement, will learn to differentiate dogs from horses, cats, cows, and so on. Physical forms provide common relations for many feature stimulus classes, such as stimulus classes that evoke the responses book, table, house, tree, cup, cat, rug, onion, and car. A relative relation exists among stimuli in other feature classes. Examples of these feature stimulus classes based on relative relations are found with concepts such as bigger than, hotter than, higher than, on top of, and to the left of.

Stimuli comprising an arbitrary stimulus class evoke the same response, but they do not share a common stimulus feature (i.e., they do not resemble each other in physical form, nor do they share a relative relation). Arbitrary stimulus classes comprise a limited number of stimuli. For example, a teacher could form an arbitrary stimulus class using the stimuli 50%, 1/2, divided evenly, and .5 (see Figure 2). Following training, each of these stimuli of different physical forms will evoke the same response, a half. Green bean, asparagus, potato, and corn could be developed into an arbitrary stimulus class to evoke the response vegetable. Students learn to associate vowels with the arbitrary class of the letters A, E, I, O, U, and sometimes Y.

The development of concepts and complex verbal relations plays an important role in parenting, caregiving, education, and treatment. Applied behavior analysts need to consider different instructional procedures when teaching concepts and complex verbal relations that produce feature stimulus classes and arbitrary stimulus classes. A common instructional procedure used with feature stimulus classes is to differentially reinforce responses to is (SD) and is not (SΔ) examples of the concept.

Figure 2 Antecedent stimuli with different physical forms (50%, 1/2, divided evenly, and .5) that evoke the same response, a half, which produces reinforcement. An example of an arbitrary stimulus class.


Broad generalization is common with feature stimulus classes; depending on the functioning level of the participants, a few trained examples may be sufficient to develop the concept. Stimulus generalization, however, is not a characteristic of arbitrary stimulus classes. Applied behavior analysts have developed arbitrary stimulus classes by using matching-to-sample procedures to create stimulus equivalence among arbitrary stimuli.

Stimulus Equivalence

In a historically influential experiment, Sidman (1971) demonstrated the development of an equivalence stimulus class among arbitrary stimuli. A boy with severe retardation served as the participant. Before the experiment, the boy could

1. match pictures to their spoken names and

2. name the pictures.

Also before the experiment, the boy could not

3. match written words to the spoken names,

4. match written words to the pictures,

5. match pictures to the written words, or

6. say the written words.

Sidman discovered that after the boy was taught to match written words to spoken names (#3), he could, without additional instruction, match written words to the pictures (#4), match pictures to the written words (#5), and say the written words (#6). In other words, as a result of learning one new stimulus–stimulus relation (#3), the other three stimulus–stimulus relations (#4, #5, and #6) emerged without additional training or reinforcement. When Sidman linked two or more sets of stimulus–stimulus relations, these other noninstructed or unreinforced stimulus–stimulus relations emerged. Without instruction, the boy's receptive and expressive language expanded beyond what existed before the start of the experiment. This is the big prize: something to shoot for in curriculum design and training programs.

Following Sidman's research (see Sidman, 1994), stimulus equivalence has become a major area of basic and applied research in many areas of complex verbal relations, such as reading (Kennedy, Itkonen, & Lindquist, 1994), language arts (Lane & Critchfield, 1998), and mathematics (Lynch & Cuvo, 1995). For example, Rose, De Souza, and Hanna (1996) taught seven nonreading children to read 51 training words. The children matched written words to spoken words, copied words, and named the words. All of the children learned to read the 51 training words, and five of the seven children read generalization words. Data from Rose and colleagues demonstrate the potential power of stimulus equivalence as a process for teaching reading.

Research on stimulus equivalence has contributed to the understanding of stimulus–stimulus relations among complex human behaviors. Sidman (1971) and others who researched stimulus equivalence during the 1970s (e.g., Sidman & Cresson, 1973; Spradlin, Cotter, & Baxley, 1973) gave future applied behavior analysts a powerful method for teaching stimulus–stimulus relations (i.e., conditional stimulus control).

Defining Stimulus Equivalence

Equivalence describes the emergence of accurate responding to untrained and nonreinforced stimulus–stimulus relations following the reinforcement of responses to some stimulus–stimulus relations. Behavior analysts define stimulus equivalence by testing for reflexivity, symmetry, and transitivity among stimulus–stimulus relations. A positive demonstration of all three behavioral tests (i.e., reflexivity, symmetry, and transitivity) is necessary to meet the definition of an equivalence relation among a set of arbitrary stimuli. Sidman and Tailby (1982) based this definition on the mathematical statement:

a. If A = B, and

b. B = C, then

c. A = C

Reflexivity occurs when, in the absence of training and reinforcement, a response will select a stimulus that is matched to itself (e.g., A = A). For example, a participant is shown a picture of a bicycle and three choice pictures of a car, an airplane, and a bicycle. Reflexivity, also called generalized identity matching, has occurred if the participant without instruction selects the bicycle from the three choice pictures.

Lane and Critchfield (1998) taught generalized identity matching of printed letters and a spoken word (vowel or consonant) to two adolescent females with moderate mental retardation. The comparison stimuli of A and D, and of O and V, accompanied the sample spoken word vowel. Generalized identity matching occurred when participants presented with the sample spoken word vowel selected comparison O from the O and V stimuli and rejected the nonmatching comparison V, and selected comparison A from the A and D stimuli and rejected the nonmatching comparison D.

Symmetry occurs with the reversibility of the sample stimulus and comparison stimulus (e.g., if A = B, then B = A). For example, the learner is taught, when presented with the spoken word car (sample stimulus A), to select a comparison picture of a car (comparison B). When presented with the picture of a car (sample stimulus B), without additional training or reinforcement, the learner selects the comparison spoken word car (comparison A).

Transitivity, the final and critical test for stimulus equivalence, is a derived (i.e., untrained) stimulus–stimulus relation (e.g., A = C, C = A) that emerges as a product of training two other stimulus–stimulus relations (e.g., A = B and B = C). For example, transitivity would be demonstrated if, after training the two stimulus–stimulus relations shown in 1 and 2 below, the relation shown in 3 emerges without additional instruction or reinforcement:

1. If A (e.g., the spoken word bicycle) = B (e.g., the picture of a bicycle) (see Figure 3), and

2. B (the picture of a bicycle) = C (e.g., the written word bicycle) (see Figure 4), then

3. C (the written word bicycle) = A (the spoken name bicycle) (see Figure 5).
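The three properties can be illustrated with a short Python sketch that computes the reflexive, symmetric, transitive closure of a set of trained relations. The pair notation is an illustrative abstraction of the A, B, C example above, not a training procedure from the text:

```python
# A sketch of the untrained relations an equivalence class predicts.
# Given the trained relations A=B and B=C, reflexivity, symmetry, and
# transitivity yield A=A, B=A, C=B, A=C, C=A, and so on, mirroring the
# emergent performances Sidman reported.

def equivalence_closure(trained_pairs):
    """Return the reflexive, symmetric, transitive closure of trained pairs."""
    stimuli = {s for pair in trained_pairs for s in pair}
    relations = set(trained_pairs)
    relations |= {(s, s) for s in stimuli}              # reflexivity
    changed = True
    while changed:                                      # repeat until stable
        changed = False
        for (a, b) in list(relations):
            if (b, a) not in relations:                 # symmetry
                relations.add((b, a)); changed = True
            for (c, d) in list(relations):
                if b == c and (a, d) not in relations:  # transitivity
                    relations.add((a, d)); changed = True
    return relations

# Trained: spoken word = picture (A=B), picture = written word (B=C).
derived = equivalence_closure({("A", "B"), ("B", "C")})
```

Only two relations are trained, yet the closure contains all nine pairings of A, B, and C, including the critical derived relations A = C and C = A.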

Matching-to-Sample

Basic and applied researchers have used the matching-to-sample procedure to develop and test for stimulus equivalence. Dinsmoor (1995b) reported that Skinner introduced the experimental procedure called matching-to-sample, and he described Skinner's procedure in the following way. A pigeon was presented with three horizontal keys to peck. The middle key was illuminated with a color to start the trial. A peck on the illuminated key turned it off and lighted two side keys. One of the side keys was the same color as the sample color illuminated on the middle key. A peck on the side key with the same color as the sample key produced reinforcement. An error response was not reinforced.

Figure 3 Example of an A = B relation (spoken name and picture): the spoken word bicycle and a picture of a bicycle.

Figure 4 Example of a B = C relation (picture and written word): a picture of a bicycle and the written words bicycle, airplane, and car.

Figure 5 Example of a transitive relation, C = A (written word and spoken name), that emerges as a result of training the A = B (spoken name and picture) and B = C (picture and written word) relations: the written word bicycle and the spoken names car, airplane, and bicycle.

The three-term contingency is the basic unit of analysis in the development of complex stimulus-control repertoires. Skinner's matching-to-sample procedure included the three-term contingency:

SD (side key with color) → Response (key peck) → Reinforcement (grain)

This basic contingency, however, is incomplete because of constraints from the environmental context. The contextual events operating on the three-term contingency become conditional discriminations (Sidman, 1994). The sample stimulus in Skinner's procedure is the conditional stimulus. The three-term contingency is effective only when it matches the sample stimulus. Other three-term contingencies (nonmatches) are ineffective. Reinforcement is conditional on the context of discriminative stimuli other than the SD; that is, the effectiveness of the three-term contingency comes under contextual control. Conditional discriminations operate at the level of a four-term contingency:

Contextual stimulus (conditional sample: key color) → SD (side key with the color matching the sample) → Response (key peck) → Reinforcement (grain)

A side key with a color not matching the conditional sample does not occasion reinforced responding.

To start a matching-to-sample trial, the participant will make a response (called the observing response) to present the sample stimulus (i.e., the conditional sample). The comparison stimuli (i.e., the discriminative events) are presented usually, but not always, after removing the sample stimulus, and provide for one effective three-term contingency and other noneffective three-term contingencies. A certain comparison will match the conditional sample. A response that selects the matched comparison and rejects the nonmatched comparisons will produce reinforcement. Figure 6 presents an example of the observing response, the conditional sample, the discriminative events, and the comparison match. Responses that select the nonmatching comparison stimuli are not reinforced. During conditional discrimination training, the same selection must be correct with one conditional stimulus, but incorrect with one or more other sample stimuli. Most matching-to-sample applications use a correction procedure with error responses. One correction procedure has the learner respond to the same sample and comparison stimuli until a correct response has been reinforced. Error correction procedures and the random positioning of the comparison stimuli control for position responding.
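The trial structure just described can be sketched in code. All names here are hypothetical, and an identity-matching simplification stands in for the general conditional discrimination:

```python
# A hypothetical sketch of one matching-to-sample trial under the
# four-term contingency: the conditional sample is presented, the
# comparison that matches the sample functions as the SD and the
# nonmatching comparisons as S-deltas, selecting the match is
# reinforced, and an error re-presents the same stimuli (a simple
# correction procedure). Loops until the learner responds correctly.

import random

def matching_to_sample_trial(sample, comparisons, select):
    """Run one trial; re-present the same stimuli until a correct response."""
    random.shuffle(comparisons)      # random positions control for position responding
    while True:                      # correction procedure on errors
        choice = select(sample, comparisons)
        if choice == sample:         # identity match: this choice is the SD
            return "reinforcement"
        # error: no reinforcement; same sample and comparisons re-presented

# A learner that already matches to sample:
perfect_learner = lambda sample, comps: next(c for c in comps if c == sample)
outcome = matching_to_sample_trial("red", ["red", "green", "blue"], perfect_learner)
```

Replacing the identity test with a lookup table of arbitrary sample-to-comparison pairings would turn the same skeleton into the arbitrary conditional discrimination used in equivalence training.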

Figure 6 An example of the observing response, the conditional sample, the discriminative events, and the comparison match during a matching-to-sample trial. In the figure, the sample stimulus is the numeral 2; the comparison matching the sample functions as the SD, and the nonmatching comparisons function as SΔs.

Factors Affecting the Development of Stimulus Control

Applied behavior analysts establish stimulus control with frequent differential reinforcement of behavior in the presence and in the absence of the SD condition. Effective differential reinforcement requires the consistent use of consequences that function as reinforcers. Additional factors such as preattending skills, stimulus salience, masking, and overshadowing also will affect the development of stimulus control.

Preattending Skills

The development of stimulus control requires certain prerequisite skills. For academic or social skills the student should engage in orienting behaviors appropriate to the SDs in the instructional setting. Such preattending skills include looking at the instructional materials, looking at the teacher whenever a response is modeled, listening to oral instructions, and sitting quietly for short periods of time. Teachers should use direct behavioral interventions to specifically teach preattending skills to learners who have not developed these skills. Learners must emit behaviors that orient the sensory receptors to the appropriate SD for the development of stimulus control.

Stimulus Salience

The salience of the stimulus influences attention to the stimulus and ultimately the development of stimulus control (Dinsmoor, 1995b). Salience refers to the prominence of the stimulus in the person's environment. For example, Conners and colleagues (2000) included salient cues (e.g., specific room color, specific therapist) in the context of multielement functional analyses. Their results suggested that salient cues facilitated the efficiency of functional analyses by producing faster and clearer outcomes in the presence of the salient cues than in their absence.

Some stimuli have more salience than others depending on the sensory capabilities of the individual, the past history of reinforcement, and the context of the environment. For instance, a student may not attend to words written on the blackboard because of poor vision, or to the teacher's oral directions because of poor hearing, or to the curriculum materials because of past failures to learn, or to teacher directions because of focusing attention on a toy in the student's desk.

Masking and Overshadowing

Masking and overshadowing are methods for decreasing the salience of stimuli (Dinsmoor, 1995b). In masking, even though one stimulus has acquired stimulus control over behavior, a competing stimulus can block the evocative function of that stimulus. For example, a student may know the answers to a teacher's questions, but will not respond in the presence of the peer group; in this example the competition comes from different contingencies of reinforcement, not just an antecedent stimulus that makes it harder to "attend to" the relevant SD. With overshadowing, the presence of one stimulus condition interferes with the acquisition of stimulus control by another stimulus. Some stimuli are more salient than others. For example, looking out of the window to watch the cheerleaders' practice may distract some students from attending to the instructional stimuli presented during the algebra lesson.

Applied behavior analysts need to recognize that masking and overshadowing can hinder the development of stimulus control, and apply procedures that reduce these effects. Examples of reducing the influence of masking and overshadowing include (a) rearranging the physical environment (e.g., lowering the window shade, removing distractions, changing seating assignments), (b) making instructional stimuli appropriately intense (e.g., providing a rapid pace of instruction, many opportunities to respond, an appropriate level of difficulty, and opportunities to set goals), and (c) consistently reinforcing behavior in the presence of the instructionally relevant stimuli.

Using Prompts to Develop Stimulus Control

Prompts are supplementary antecedent stimuli used to occasion a correct response in the presence of an SD that will eventually control the behavior. Applied behavior analysts give response and stimulus prompts before or during the performance of a behavior. Response prompts operate directly on the response. Stimulus prompts operate directly on the antecedent task stimuli to cue a correct response in conjunction with the critical SD.

Response Prompts

The three major forms of response prompts are verbal instructions, modeling, and physical guidance.

Verbal Instructions

Applied behavior analysts use functionally appropriate verbal instructions as supplementary response prompts. Verbal response prompts occur frequently in almost all training contexts in the forms of vocal verbal instruction (e.g., oral, telling) and nonvocal verbal instruction (e.g., written words, manual signs, pictures).

Teachers often use vocal verbal instructional prompts. Suppose a teacher asked a student to read the sentence, "Plants need soil, air, and water to grow." The student reads, "Plants need . . . Plants need . . . Plants need . . ." The teacher could use any number of verbal prompts to occasion the next word. She might say, "The next word is soil. Point to soil and say soil." Or she might use a rhyming word for soil. For another example, Adkins and Mathews (1997) taught in-home caregivers to use vocal verbal response prompts to improve voiding procedures for two adults with urinary incontinence and cognitive impairments. The in-home caregiver checked for dryness each hour or every 2 hours between 6:00 A.M. and 9:00 P.M. When the adult was dry at the regularly scheduled check, the caregiver praised dryness, asked the adult to use the toilet, and provided assistance as needed. This simple response prompt procedure, which was introduced following a baseline condition, produced in one of the adults a mean 22% reduction in grams of urine collected per day in wet diapers during the 2-hour prompted voiding condition, and a mean 69% reduction during the 1-hour prompt condition. The second adult received only the 1-hour prompted voiding condition, which resulted in a mean 55% reduction in urine collected per day.

Krantz and McClannahan (1998) and Sarokoff, Taylor, and Poulson (2001) used nonvocal verbal instructional response prompts in the form of embedded scripts to improve the spontaneous social exchanges of children with autism. Examples of the embedded scripts in the children's photographic activity schedules included look, watch, and let's eat our snack. In another example of nonvocal verbal instructional response prompts, Wong, Seroka, and Ogisi (2000) developed a checklist with 54 steps to prompt self-assessment of blood glucose level by a diabetic woman with memory impairments. This participant followed the checklist sequence and checked off each step as she completed it.

Modeling

Applied behavior analysts can demonstrate or model the desired behavior as a response prompt. Modeling can effectively prompt behaviors, especially for learners who have already learned some of the component behaviors required for the imitation. Modeling is an easy, practical, and successful way for a coach to show a player an appropriate form for shooting a basketball through a hoop when the player already can hold the ball, raise it over his head, and push the ball away from his body. Few teachers would use modeling to teach a child with severe disabilities to tie her shoes if she could not hold the laces in her hands. In addition, attending skills are important. The learner must observe the model to enable imitation of the performance. Finally, modeling as a response prompt should be used only with students who have already developed imitative skills. The use of models to assist in the development of appropriate academic and social behavior has been demonstrated repeatedly.

Physical Guidance

Physical guidance is a response prompt applied most often with young children, learners with severe disabilities, and older adults experiencing physical limitations. Using physical guidance, the teacher physically guides part of the student's movements, or guides the student through the entire movement of the response.

Hanley, Iwata, Thompson, and Lindberg (2000) reported the use of physical guidance to help participants with profound mental retardation manipulate leisure items. Conaghan, Singh, Moe, Landrum, and Ellis (1992) used physical guidance to prompt adults with mental retardation and hearing impairments to use manual signs. When a participant made an error in sign production, the teacher physically guided the person's hands to prompt a correct response. In another example, a personal trainer worked with three older adults with severe disabilities, osteoporosis, and arthritis. The trainer physically guided the participants' arm movements whenever they did not begin independent pushes with a dumbbell or stopped the pushes before reaching their exercise criterion (K. Cooper & Browder, 1997).

Physical guidance is an effective response prompt, but it is more intrusive than verbal instruction and modeling. It requires direct physical involvement between the teacher and the student, making precise assessment of student progress difficult. Physical response prompts provide little opportunity for the student to emit the behavior without the direct assistance of the teacher. Another possible problem is some learners' resistance to physical touch. Some learners, however, will require the use of physical guidance.

Stimulus Prompts

Applied behavior analysts have frequently used movement, position, and redundancy of antecedent stimuli as stimulus prompts. For example, movement cues can help a learner discriminate between a penny and a dime by pointing to, tapping, touching, or looking at the coin to be identified. In the coin discrimination task, the teacher could use a position cue and place the correct coin closer to the student. Redundancy cues occur when one or more stimulus or response dimensions (e.g., color, size, shape) are paired with the correct choice. For instance, a teacher might use a color mediation procedure of associating a numeral with a color, then linking the name of the color to an answer for an arithmetic fact (Van Houten & Rolider, 1990).

Transfer of Stimulus Control

Applied behavior analysts should provide response and stimulus prompts as supplementary antecedent stimuli only during the acquisition phase of instruction. With the reliable occurrence of behavior, applied behavior analysts need to transfer stimulus control from the response and stimulus prompts to the naturally existing stimulus. Applied behavior analysts transfer stimulus control by gradually fading stimuli in or out, that is, by gradually presenting or removing antecedent stimuli. Eventually, the natural stimulus, a partially changed stimulus, or a new stimulus will evoke the response. Fading response prompts and stimulus prompts is the procedure used to transfer stimulus control from the prompts to the natural stimulus, and also to minimize the number of error responses occurring in the presence of the natural stimulus.

Terrace's (1963a, b) influential research on the transfer of stimulus control using fading and superimposition of stimuli provides a classic example. In these studies Terrace taught pigeons to make red–green and vertical–horizontal discriminations with a minimum of errors. His use of techniques for gradually transferring stimulus control was called errorless learning. To teach a red–green discrimination, Terrace presented the SΔ (red light) at the beginning of discrimination training, before the SD (green light) had stimulus control over the pigeon's responses. The red light was initially introduced at low illumination and for brief time intervals. During successive presentations of the stimuli, Terrace gradually increased the intensity of the red light and the duration of time it was illuminated until it differed from the green light only in hue. With this procedure Terrace taught pigeons to discriminate red from green with only a minimum number of errors (responses to the SΔ).

Terrace further demonstrated that stimulus control acquired with red and green lights could be transferred to vertical and horizontal lines with a minimum number of errors (i.e., responses in the presence of the SΔ). His procedure consisted of first superimposing a white vertical line on the green light (SD) and a white horizontal line on the red light (SΔ). Then the pigeons were given several presentations of the two compound stimuli. Finally, the amplitude of the red and green lights was reduced gradually until only the vertical and horizontal lines remained as stimulus conditions. The pigeons showed almost perfect transfer of stimulus control from the red–green lights to the vertical–horizontal lines. That is, they emitted responses in the presence of the vertical line (SD) and seldom responded in the presence of the horizontal line (SΔ).

Following Terrace's work, other pioneering researchers (e.g., Moore & Goldiamond, 1964) produced landmark studies showing that the transfer of stimulus control with few incorrect responses was possible with human learners. This work provided the foundation for developing effective procedures to transfer stimulus control from response prompts to natural stimuli in applied contexts.

Transferring Stimulus Control from Response Prompts to Naturally Existing Stimuli

Wolery and Gast (1984) described four procedures for transferring stimulus control from response prompts to natural stimuli: most-to-least prompts, graduated guidance, least-to-most prompts, and time delay.

Most-to-Least Prompts

The applied behavior analyst can use most-to-least response prompts to transfer stimulus control from response prompts to the natural stimulus whenever the participant does not respond to the natural stimulus or makes an incorrect response. To apply most-to-least response prompts, the analyst physically guides the participant through the entire performance sequence, then gradually reduces the amount of physical assistance provided as training progresses from trial to trial and session to session. Customarily, most-to-least prompting moves from physical guidance to visual prompts to verbal instructions, and finally to the natural stimulus without prompts.
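The most-to-least progression can be viewed as a schedule that steps assistance down one level at a time as the learner succeeds. The sketch below is illustrative only and is not part of the original text; the level names follow the paragraph above, but the three-consecutive-correct advancement criterion is an assumed value.

```python
# Illustrative most-to-least prompt schedule. The hierarchy follows the text;
# the advancement criterion (3 consecutive correct responses) is an assumption.
PROMPT_LEVELS = [
    "physical guidance",
    "visual prompt",
    "verbal instruction",
    "natural stimulus (no prompt)",
]

def next_level(current: int, consecutive_correct: int, criterion: int = 3) -> int:
    """Move to a less intrusive prompt level after `criterion` consecutive
    correct responses; otherwise remain at the current level."""
    if consecutive_correct >= criterion and current < len(PROMPT_LEVELS) - 1:
        return current + 1
    return current
```

For example, `next_level(0, 3)` steps from physical guidance to a visual prompt, while `next_level(0, 2)` keeps full guidance in place.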


Graduated Guidance

The applied behavior analyst provides physical guidance as needed, but with graduated guidance she immediately starts to fade out the physical prompts to transfer stimulus control. Graduated guidance begins with the applied behavior analyst following the participant's movements closely with her hands, but not touching the participant. The analyst then increases the distance of her hands from the participant by gradually changing the location of the physical prompt. For example, if the applied behavior analyst used physical guidance for a participant's hand movement in zippering a coat, she might move the prompt from the hand to the wrist, to the elbow, to the shoulder, and then to no physical contact. Graduated guidance provides the opportunity for an immediate physical prompt as needed.

Least-to-Most Prompts

When transferring stimulus control from response prompts using least-to-most prompts, the applied behavior analyst gives the participant an opportunity to perform the response with the least amount of assistance on each trial. The participant receives greater degrees of assistance with each successive trial without a correct response. The procedure for least-to-most prompting requires the participant to make a correct response within a set time limit (e.g., 3 seconds) from the presentation of the natural SD. If the response does not occur within the specified time, the applied behavior analyst again presents the natural SD and a response prompt of least assistance, such as a verbal response prompt. If after the same specified time limit (e.g., another 3 seconds) the participant does not make a correct response, the analyst gives the natural SD and another response prompt, such as a gesture. The participant receives partial or full physical guidance if the lesser prompting does not evoke a correct response. Applied behavior analysts using the least-to-most response prompt procedure present the natural SD and the same time limit during each training trial. For example, Heckaman, Alber, Hooper, and Heward (1998) used instructions, nonspecific verbal prompts, modeling, and physical prompting in a least-to-most 5-second response prompt hierarchy to reduce the disruptive behavior of four students with autism.
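One way to make the within-trial logic concrete is to simulate the escalation loop. In this sketch, which is not from the text, the hierarchy and the 3-second interval follow the paragraph above; modeling the learner as "responds once prompting reaches some minimum level" is an assumption made for illustration.

```python
# Illustrative least-to-most trial: present the natural SD alone, then add
# increasingly intrusive prompts until a correct response occurs. The learner
# is modeled (an assumption) by the least prompt level that evokes a response.
HIERARCHY = ["natural SD only", "verbal prompt", "gesture",
             "partial physical guidance", "full physical guidance"]

def run_trial(needed_level: int, interval_s: float = 3.0) -> str:
    """Return the prompt level at which the simulated correct response occurs.
    In a live session, each step re-presents the natural SD with the listed
    prompt and waits `interval_s` seconds for a correct response."""
    for level, prompt in enumerate(HIERARCHY):
        if level >= needed_level:   # simulated correct response at this level
            return prompt
    return HIERARCHY[-1]            # full guidance completes the trial
```

`run_trial(0)` ends at the natural SD alone; `run_trial(2)` escalates through a verbal prompt to a gesture before the response occurs.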

Time Delay

To produce a transfer of stimulus control, most-to-least prompts, graduated guidance, and least-to-most prompts occur as consequences of gradual changes in the form, position, or intensity of a response evoked by the natural stimulus. Conversely, as an antecedent response prompt, the time delay procedures use only variations in the time intervals between presentation of the natural stimulus and presentation of the response prompt. Constant time delay and progressive time delay transfer stimulus control from a prompt to the natural stimulus by delaying the presentation of the prompt following the presentation of the natural stimulus.

The constant time delay procedure first presents several trials using a 0-second delay, that is, the simultaneous presentation of the natural stimulus and the response prompt. Usually, but not always (Schuster, Griffen, & Wolery, 1992), the trials that follow the simultaneous prompt condition apply a fixed time delay (e.g., 3 seconds) between the presentation of the natural stimulus and the presentation of the response prompt (Caldwell, Wolery, Werts, & Caldwell, 1996).

The progressive time delay procedure, like constant time delay, starts with a 0-second delay between the presentation of the natural stimulus and the response prompt. Usually, a teacher will use several 0-second trials before extending the time delay. The number of 0-second trials will depend on the task difficulty and the functioning level of the participant. Following the simultaneous presentations, the teacher will gradually and systematically extend the time delay, often in 1-second intervals. The time delay can be extended after a specific number of presentations, after each session, after a specific number of sessions, or after meeting a performance criterion.

Heckaman and colleagues (1998) used the following progressive time delay procedure: They began with 0-second trials, simultaneously presenting the controlling response prompt (e.g., physical prompt, model) and the task instruction. The simultaneous presentations continued until the participant met the criterion of nine correct responses. The first time delay was set at 0.5 seconds. After meeting the 0.5-second criterion, the researchers increased the delay in 1-second increments up to 5 seconds. Correct responses occurring before the response prompt or within 3 seconds after the prompt received a positive feedback statement (e.g., "That's right"). Error responses received a negative feedback statement (e.g., "No, that's not right") and the controlling prompt. Also, the trial following an error response moved back to the previous delay level. The controlling prompt was also presented with disruptive behavior.
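Under one plausible reading of this schedule, the delay sequence and the move-back-on-error rule can be sketched as follows. The 0-second start, 0.5-second first delay, 1-second increments, and 5-second ceiling come from the paragraph above; the exact intermediate values and the function name are assumptions.

```python
# Illustrative progressive time delay schedule (after Heckaman et al., 1998).
# Intermediate delay values are an assumed reading of "1-second increments
# up to 5 seconds" following the 0.5 s step.
DELAYS_S = [0.0, 0.5, 1.5, 2.5, 3.5, 4.5, 5.0]

def adjust_step(step: int, met_criterion: bool) -> int:
    """Advance one delay step after meeting criterion at the current delay;
    move back one step following an error response."""
    if met_criterion:
        return min(step + 1, len(DELAYS_S) - 1)
    return max(step - 1, 0)
```

Starting at step 0 (simultaneous prompting), repeated criterion performances walk the delay up the list, while each error backs the learner up one step.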

Transfer of Stimulus Control Using Stimulus Control Shaping

The preceding sections focused on response prompts that do not change the task stimuli or materials. The stimulus control shaping procedures presented here modify the task stimuli or materials gradually and systematically to prompt a response. Supplementary stimulus conditions are faded in or faded out to transfer stimulus control from the stimulus prompt to the natural stimulus. Stimulus control shaping can be accomplished with stimulus fading and stimulus shape transformations (McIlvane & Dube, 1992; Sidman & Stoddard, 1967).

Stimulus Fading

Stimulus fading involves highlighting a physical dimension (e.g., color, size, position) of a stimulus to increase the likelihood of a correct response. The highlighted or exaggerated dimension is faded gradually in or out. The following examples of (a) handwriting the uppercase letter A and (b) giving the answer 9 to an arithmetic problem are illustrations of systematically fading out stimuli.

Krantz and McClannahan (1998) faded out scripts (i.e., the words Look and Watch me) embedded in photographic activity schedules. The embedded scripts prompted the social exchanges of children with autism. The words Look and Watch me were printed in 72-point bold letters on white 9-cm note cards. Krantz and McClannahan began fading out the words by removing one third of the script card, then another third. Sometimes, during the script fading, portions of letters were still shown on the card, such as part of an o in Look. Finally, the scripts and cards were removed.

The treatment of a feeding disorder reported by Patel, Piazza, Kelly, Ochsner, and Santana (2001) provides an example of fading stimuli in and out. Severe food selectivity is a common problem for children with feeding disorders. For example, some children find highly textured foods aversive and dangerous because of gagging. Patel and colleagues faded in Carnation Instant Breakfast (CIB) and then milk to water in treating a 6-year-old boy with a pervasive developmental feeding disorder. The boy would drink small amounts of water. The researchers began the fading-in procedure by adding 20% of the CIB packet to 240 ml of water. Following three sessions of drinking the 20% mixture, more CIB was gradually added to the water, initially in 5% and later 10% increments. The researchers then faded in milk to the CIB and water mixture after the boy would drink one CIB packet with 240 ml of water. Milk was gradually added to the CIB/water mixture in 10% increments as the water was faded out (e.g., 10% milk and 90% water, plus one packet of CIB; then 20% milk and 80% water).
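The fade-in steps described above can be tabulated. This sketch is illustrative only: the text gives the starting concentration and the increment sizes, but not the exact point where the 5% steps change to 10% steps, so a single 5% step is assumed here, and the function names are hypothetical.

```python
# Illustrative reconstruction of the fading steps reported by Patel et al. (2001).
# Assumption: one 5% increment (20% -> 25%) before switching to 10% steps.
def cib_fade_in():
    """Percent of one CIB packet mixed into 240 ml of water, per phase."""
    steps = [20, 25]
    while steps[-1] < 100:
        steps.append(min(steps[-1] + 10, 100))
    return steps

def milk_fade_in():
    """(milk %, water %) pairs, with one full CIB packet, in 10% steps."""
    return [(milk, 100 - milk) for milk in range(10, 101, 10)]
```

Laying the schedule out this way makes the gradualness of the change explicit: no single step alters the drink by more than 10%, which is the point of the fading procedure.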

Applied behavior analysts have used the superimposition of stimuli with stimulus fading. In one instance, the transfer of stimulus control occurs when one stimulus is faded out; in another application, one stimulus is faded in as the other stimulus is faded out. The research by Terrace (1963a, b) demonstrating the transfer of stimulus control from a red–green discrimination to a vertical–horizontal discrimination shows the superimposition of two specific classes of stimuli and the fading out of one stimulus class. The lines were superimposed on the colored lights; then the lights were gradually faded out, leaving only the vertical and horizontal lines as the discriminative stimuli. Figure 7 provides an applied example of the superimposition and stimulus fading procedures used by Terrace. The figure shows a series of steps from an arithmetic program to teach 7 + 2 = _____.

The other frequently used procedure fades in the natural stimulus and fades out the stimulus prompt. Figure 8 illustrates this superimposition procedure, in which the prompt is faded out and the natural stimulus 8 + 5 = _____ is faded in.

Stimulus Shape Transformations

The procedure for stimulus shape transformations uses an initial stimulus shape that will prompt a correct response. That initial shape is then gradually changed to form the natural stimulus, while maintaining correct responding. For example, a program to transform the shape of stimuli to teach number recognition could include the following steps (Johnston, 1973):

The shape of the stimulus prompt must change gradually so that the student continues to respond correctly. In teaching word identification, using stimulus shape transformations could include the following steps (Johnston, 1973):


Figure 7 Illustration of two classes of superimposed stimuli with one class then faded out.
From Addition and Subtraction Math Program with Stimulus Shaping and Stimulus Fading by T. Johnson, 1973, unpublished project, Ohio Department of Education. Reprinted by permission.

Figure 9 shows how superimposition of stimuli can be used with stimulus shaping in arithmetic instruction. The + and = signs are superimposed on the stimulus shaping program and are gradually faded in.

In summary, a variety of procedures exist for transferring stimulus control from response and stimulus prompts to natural stimuli. Currently, procedures for transferring stimulus control of response prompts are more practical in teaching situations than stimulus shape transformations because of the greater skill and time required for material preparation with stimulus change procedures.


Figure 8 Illustration of superimposition and stimulus fading to fade in the natural stimulus and fade out the stimulus prompt.
From Addition and Subtraction Math Program with Stimulus Shaping and Stimulus Fading by T. Johnson, 1973, unpublished project, Ohio Department of Education. Reprinted by permission.


Figure 9 Illustration of superimposition of stimuli and stimulus shaping.
(From Addition and Subtraction Math Program with Stimulus Shaping and Stimulus Fading by T. Johnson, 1973, unpublished project, Ohio Department of Education. Reprinted by permission.)

Summary

Antecedent Stimuli

1. Reinforcement of an operant response increases the frequency of future responding and influences the stimuli that immediately precede the response. The stimuli preceding the response (i.e., antecedent stimuli) will acquire an evocative effect on the relevant behavior.

2. Stimulus control occurs when the rate, latency, duration, or amplitude of a response is altered in the presence of an antecedent stimulus (Dinsmoor, 1995a, b). A stimulus acquires control only when responses emitted in the presence of that stimulus produce reinforcement.

3. Discriminative stimuli and motivating operations share two important similarities: (a) Both events occur before the behavior of interest, and (b) both events have evocative functions.

4. Often an evocative effect that appears to be the function of an SD is not the product of a history of differential reinforcement correlated with the altered frequency of response. These situations are probably related to motivating operations (MOs) rather than stimulus control.

Stimulus Generalization

5. When an antecedent stimulus has a history of evoking a behavior that has been reinforced in its presence, there is a general tendency for other antecedent stimuli to also evoke that behavior. This evocative function occurs with stimuli that share similar physical properties with the controlling antecedent stimulus. This tendency is called stimulus generalization. Conversely, stimulus discrimination occurs when the new stimuli do not evoke the response.

6. Stimulus generalization reflects a loose degree of stimulus control, whereas discrimination indicates a relatively tight degree of stimulus control.

7. A stimulus generalization gradient graphically depicts the degree of stimulus generalization and discrimination by showing the extent to which responses reinforced in one stimulus condition are emitted in the presence of untrained stimuli.

Development of Stimulus Control

8. The conventional procedure for stimulus discrimination training requires one behavior and two antecedent stimulus conditions. Responses are reinforced in the presence of one stimulus condition, the SD, but responses are not reinforced in the presence of the other stimulus, the SΔ.

9. Concept formation is a complex example of stimulus control that requires both stimulus generalization within a class of stimuli and discrimination between stimulus classes.


10. An antecedent stimulus class is a set of stimuli that share a common relationship. All of the stimuli in the class will evoke the same operant response class, or elicit the same response in the case of respondent behavior.

11. Stimuli comprising a class may function within feature stimulus classes and arbitrary stimulus classes.

Stimulus Equivalence

12. Equivalence describes the emergence of accurate responding to untrained and nonreinforced stimulus–stimulus relations following the reinforcement of responses to some other stimulus–stimulus relations.

13. Behavior analysts define stimulus equivalence by testing for reflexivity, symmetry, and transitivity among stimulus–stimulus relations. A positive demonstration of all three tests (i.e., reflexivity, symmetry, and transitivity) is necessary to meet the definition of an equivalence relation among a set of arbitrary stimuli.

14. Basic and applied researchers have used the matching-to-sample procedure to develop and test for stimulus equivalence.

Factors Affecting the Development of Stimulus Control

15. Applied behavior analysts establish stimulus control with frequent differential reinforcement of behavior in the presence and in the absence of the SD condition. Effective differential reinforcement requires the consistent use of consequences that function as reinforcers. Additional factors such as preattending skills, stimulus salience, masking, and overshadowing will also affect the development of stimulus control.

Using Prompts to Develop Stimulus Control

16. Prompts are supplementary antecedent stimuli (e.g., instructions, modeling, physical guidance) used to occasion a correct response in the presence of an SD that will eventually control the behavior. Applied behavior analysts provide response and stimulus prompts before or during the performance of a behavior.

17. Response prompts operate directly on the response. Stimulus prompts operate directly on antecedent task stimuli to cue a correct response in conjunction with the critical SD.

Transfer of Stimulus Control

18. Applied behavior analysts should provide response and stimulus prompts as supplementary antecedent stimuli only during the acquisition phase of instruction.

19. Fading response prompts and stimulus prompts is the procedure used to transfer stimulus control from the prompts to the natural stimulus, and also to minimize the number of error responses occurring in the presence of the natural stimulus.

20. Procedures for transferring stimulus control from response prompts to natural stimuli include (a) most-to-least prompts, (b) graduated guidance, (c) least-to-most prompts, and (d) time delay.

21. Stimulus fading involves exaggerating a dimension of a stimulus to increase the likelihood of a correct response. The exaggerated dimension is faded gradually in or out.

22. The procedure for stimulus shape transformations uses an initial stimulus shape that will prompt a correct response. That initial shape is then gradually changed to form the natural stimulus, while maintaining correct responding.


Imitation

Key Terms

imitation

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List©, Third Edition

Content Area 3: Principles, Processes and Concepts

3-15 Define and provide examples of echoics and imitation.

© 2006 The Behavior Analyst Certification Board, Inc.® (BACB®), all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

From Chapter 18 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


An imitative repertoire promotes the relatively quick acquisition of behaviors such as a young child's development of social and communication skills. With an understanding of the imitation process, applied behavior analysts can use imitation as an intervention to evoke new behaviors. Without an imitative repertoire, a person has little chance for the agile acquisition of behaviors.

Imitation has received considerable experimental and theoretical attention spanning several decades (e.g., Baer & Sherman, 1964; Carr & Kologinsky, 1983; Garcia & Batista-Wallace, 1977; Garfinkle & Schwartz, 2002; Wolery & Schuster, 1997). The experimental literature shows that individuals can acquire and maintain imitative and echoic behavior in the same fashion as they acquire and maintain other operant behaviors.1 That is, (a) reinforcement increases the frequency of imitation; (b) when some imitative behaviors receive reinforcement, other imitative behaviors occur without specific training and reinforcement; and (c) some children who do not imitate may be taught to do so. This chapter defines imitation and suggests a protocol for imitation training with learners who do not imitate.

Definition of Imitation

Four behavior–environment relations functionally define imitation: (a) Any physical movement may function as a model for imitation. A model is an antecedent stimulus that evokes the imitative behavior. (b) An imitative behavior must immediately follow the presentation of the model (e.g., within 3 to 5 seconds). (c) The model and the behavior must have formal similarity. (d) The model must be the controlling variable for an imitative behavior (Holth, 2003).

Models

Planned Models

Planned models are prearranged antecedent stimuli that help learners acquire new skills or refine the topography of certain elements of existing skills. A planned model shows the learner exactly what to do. A videotape of a person emitting specific behaviors can serve as a planned model. LeBlanc and colleagues (2003), for example, used video models to teach three children with autism perspective-taking skills. The children learned to imitate a video model touching or pointing to an object such as a bowl or box. (The children also echoed vocal verbal behaviors such as saying "under the bowl," or saying "one" to identify a box marked 1.)

1The chapter entitled "Verbal Behavior" defines the echoic operant and functionally separates it from imitation. Basically, echoic is the technical term used in the context of vocal verbal behavior, and imitation applies to nonvocal verbal and nonverbal behaviors.

Unplanned Models

All antecedent stimuli with the capacity to evoke imitation are potentially unplanned models. Unplanned models occasion many new forms of behavior because imitating the behavior of others in everyday social interactions (e.g., school, work, play) frequently produces new and helpful adaptive behaviors. For example, a young person during her first outing on a city bus may learn how to pay the fare by imitating the fare-paying of other boarders.

Formal Similarity

Formal similarity occurs when the model and the behavior physically resemble each other and are in the same sense mode (i.e., they look alike or sound alike) (Michael, 2004). For example, when a student observes a teacher finger-spell the word house (i.e., the model), and then duplicates the finger spelling (i.e., the imitation), the imitative finger spelling has formal similarity with the model. A baby sitting in a high chair immediately taps the tray with his hand after seeing his mother tap the tray with her hand. The baby's imitative tap has formal similarity to the mother's tap.

Immediacy

The temporal relation of immediacy between the model and the imitative behavior is an important feature of imitation. However, a form (topography) of an imitation may occur at a later time in the context of everyday life situations. For example, the young person on her first bus ride who used imitation to learn how to pay the fare may use her newly learned behavior to pay the bus fare for her return trip when no other boarders have paid fares before her. The parent of the student who imitated the finger spelling of house might ask, "What did you learn at school today?" and in response, the student may produce the finger spelling for house.

When the topography of a previous imitation occurs in the absence of the model (e.g., paying the bus fare, finger spelling), that delayed behavior is not an imitative behavior. The delayed behaviors of paying the bus fare and the finger spelling in the contexts previously presented have topographies similar to the imitated behaviors but occur as the result of different controlling variables. The relation between discriminative stimuli (e.g., the device for collecting the bus fare) or motivating operations (e.g., the parent's question) and delayed behaviors is functionally different from the relation between a model and an imitative behavior. Therefore, delayed behaviors using the topography of a prior imitative model, by definition, are not imitative.2

A Controlled Relation

We often view imitative behavior as doing the same. Although the formal similarity of doing the same is a necessary condition for imitation, it is not sufficient. Formal similarity can exist without the model functionally controlling the similar behavior. The controlling relation between a model and a similar behavior is the most important property that defines imitation. A controlling relation between the behavior of a model and the behavior of the imitator is inferred when a novel model evokes a similar behavior in the absence of a history of reinforcement for that behavior. An imitative behavior is a new behavior that follows a novel antecedent event (i.e., the model). After the model evokes an imitation, that behavior comes into contact with contingencies of reinforcement. These new contingencies of reinforcement then become the controlling variable for the discriminated operant (i.e., MO/SD → R → SR).

Holth (2003) explained the discriminated operant in the context of imitation training this way:

Let us imagine that a dog is trained to sit whenever the owner sits down in a chair, and to turn around in a circle whenever the owner turns around. Does the dog imitate the owner's behavior? Almost certainly not. The dog could have as easily been taught to sit whenever the owner turns around in a circle and to turn around in a circle when the owner sits down. Thus, what may look like imitation may be nothing more than a series of directly taught discriminated operants. The source of control can be determined only by introducing novel exemplars. In the absence of a demonstration that the dog responds to novel performances by "doing the same," there is no evidence that a similarity to the owner's behavior is important in determining the form of the dog's response. Hence, there is no true demonstration that the dog imitates the behavior of the owner unless it also responds to new instances of the owner's behavior by "doing the same." (p. 157)

Is and Is-Not Examples of Controlled Relations and Imitation

Consider two guitarists performing in a contemporary rock band. One guitarist improvises a short riff (i.e., a novel model). The other guitarist hears the riff and then immediately reproduces the riff chord-for-chord and note-for-note (i.e., imitative behavior, a new behavior). This example meets all of the conditions for imitation: immediacy, formal similarity, and a controlling relation produced by the model (i.e., the improvised riff).

2Delayed imitation is, however, a commonly used term in the imitation literature (e.g., Garcia, 1976).

Continuing with the two guitarists in the contemporary rock band, suppose one musician says to the other, "I have a great idea for our opening song. Let me play it for you. If you like it, I will teach it to you." The other guitarist likes what she hears. They practice the riff together until the second guitarist learns it. The guitarists add the riff to the opening number. On stage, one guitarist plays the riff, and then the other guitarist immediately reproduces the riff. This is not an example of imitation because the riff played by the first guitarist was not a novel model and the similar behavior of the second guitarist was a trained similar response with a history of reinforcement. It is an example of a discriminated operant.

As another is-not example of imitative behavior, let's now consider two classical musicians playing a fugue using a musical score. A fugue is a form of music based on a short melody that is played at the beginning by one player or one section of an ensemble, and then another player or section repeats the short melody. The performance of a fugue appears similar to imitation in that one player or section presents a melody, then another player or section immediately repeats the melody, and the two melodies have formal similarity. A fugue is not an example of imitation, however, because the first introduction of the short melody does not control the similar behavior. The printed notes on the music manuscript provide the controlling variables. Playing a fugue using a musical score is an example of a discriminated operant rather than of imitation.

As a final is-not example, Tim throws a baseball to Bill (a person with some baseball experience). Bill catches the ball, and then immediately throws it back to Tim. Did Bill imitate Tim's throw? It gives an appearance of imitation. Tim's throw was an antecedent event (a model); Bill's throw immediately followed and had formal similarity to the model. This is not an example of imitation because Bill's throw to Tim was not a new behavioral instance of throwing under the control of Tim's throw. Again, the controlling variable was the history of reinforcement that produced a discriminated operant.

Imitation Training

Typically developing children acquire many skills by imitating unplanned models. Parents and other caregivers do not usually have to apply specific interventions to facilitate the development of imitative skills. However, some infants and children with developmental disabilities do not imitate. Without an imitative repertoire, these children will have great difficulty developing beyond the most basic of skills. Still, it is possible to teach imitation to some children who do not imitate.

Applied behavior analysts have validated repeatedly the procedures used by Baer and colleagues as an effective method for teaching imitation to children who do not imitate (e.g., Baer, Peterson, & Sherman, 1967; Baer & Sherman, 1964). For example, three children with severe to profound mental retardation served as participants in one of their studies (Baer, Peterson, & Sherman, 1967). During imitation training, the children were taught to emit simple discriminated (i.e., similar to the model) responses (e.g., raising an arm) when the teacher presented the verbal response cue "Do this," and then provided the model (e.g., raised his arm). Baer and colleagues selected appropriate skill levels for their participants (e.g., gross motor movements, fine motor movements). Also, the teacher initially used physical guidance to prompt the similar response, and then gradually reduced the guidance over several trials; shaped the discriminated response by using bits of food to reinforce closer and closer similarity to the model; and reinforced responses with formal similarity.

The imitation training protocol developed by Baer and colleagues taught some nonimitative learners to imitate, meaning that a novel model controlled imitative behaviors in the absence of specific training and reinforcement of those behaviors. One participant imitated a novel model only after receiving imitation training for similarity on 130 different models. The second participant showed similar results to those of the first. The third participant learned to imitate with less training than the other participants had. This participant imitated the ninth training model, a novel model without a history of training and reinforcement.

To summarize the results from Baer and colleagues: (a) Children who did not have an imitative repertoire learned to imitate with training that used response cues and prompts, shaping, and reinforcement; (b) when some imitative behaviors produced reinforcement, the participants imitated novel models without reinforcement; and (c) the participants demonstrated an effect sometimes known as learning set (Harlow, 1959), or a learning-to-learn phenomenon. As the participants progressed through imitation training, they required fewer and fewer training trials to learn a new discriminated response with similarity to the models.

The major objective of imitation training is to teach learners to do what the person providing the model does regardless of the behavior modeled. A learner who learns to do what the model does is likely to imitate models that have not been associated with specific training, and those imitations are likely to occur in many situations and settings, frequently in the absence of planned reinforcement.3 Imitation, however, may depend on the parameters of the response class used during training. For example, Young, Krantz, McClannahan, and Poulson (1994) found that children with autism imitated novel models within the vocal, toy-play, and pantomime response types used for training, but the imitations did not generalize across response types. Even so, imitation provides for the relatively quick acquisition of new, complex behaviors that are characteristic of many human endeavors. Imitation produces those new behaviors without undue reliance on physical help or previous reinforcement considerations.

3Generalized imitation is a term used frequently in the imitation literature to label a participant's unprompted, untrained, nonreinforced responses with formal similarity to the actions of a model. We label such responses to novel models as, simply, imitation.

Building on the experimental methods of Baer and colleagues, Striefel (1974) developed an imitation training program for practitioners. The components of Striefel's protocol are as follows: (a) assessing, and teaching if necessary, prerequisite skills for imitation training; (b) selecting models for training; (c) pretesting; (d) sequencing models for training; and (e) conducting imitation training.

Assessing, and Teaching If Necessary, Prerequisite Skills for Imitation Training

Learners cannot imitate if they do not attend to the presentation of the model. Therefore, attending to the model is a prerequisite for imitation training. Striefel (1974) defined attending as staying seated during instruction, keeping one's hands in one's lap, looking at the trainer whenever one's name is called, and looking at objects identified by the trainer. Also, practitioners often need to decrease problem behaviors that interfere with training (e.g., aggression, screaming, odd hand movements).

Suggested procedures for assessing attending skills include the following:

1. Staying seated. Seat the learner and record the duration of time the learner remains seated.

2. Looking at the teacher. Say the learner's name in a commanding voice and record whether the student makes eye contact.

3. Keeping hands in lap. Prompt the student to put his hands in his lap and record the duration of time the student's hands remain in that position.

4. Looking at objects. Place several objects on a table and say, "Look at this." Immediately following the command, move a finger from in front of the learner's eye to one of the objects and record whether the student looked at the object.

Teachers often assess attending skills for a minimum of three sessions. The teacher can begin imitation training if the assessment data show adequate attending skills. The teacher will need to teach these skills before beginning imitation training if attending skills need to be developed.

Selecting Models for Imitation Training

Practitioners may need to select and use about 25 behaviors as models during initial imitation training. Including gross motor movements (e.g., raising a hand) and fine motor movements (e.g., manual sign language) as models provides learners with opportunities to develop more refined differentiations with their imitative skills.

Practitioners usually use one model at a time during the initial training trials rather than a sequence of movements. Practitioners may choose to use more complex models such as sequences of behavior after the learner can imitate one model per occasion successfully. Also, the initial training usually includes models of (a) the movement of body parts (e.g., touching nose, hopping on one foot, bringing hand to mouth) and (b) the manipulation of physical objects (e.g., passing a basketball, picking up a glass, zipping a coat).

Pretesting

The learner's responses to the selected models should be pretested. The pretest may show that the learner will imitate some models without training. The pretesting procedures advocated by Striefel (1974) are as follows:

1. Prepare the learner's attending behaviors for the pretest (e.g., seated, hands in lap) and often assume the same ready position as the learner.

2. If you are using an object model, place one object in front of the learner and one object in front of yourself.

3. Say the learner's name to start the pretest, and when the learner makes eye contact, say, "Do this" (i.e., child's name, pause, "Do this").

4. Present the model. For example, if the selected behavior is to pick up a ball, pick up the ball yourself and hold it for a few seconds.

5. Immediately praise each response that has formal similarity to the model, and deliver the reinforcer (e.g., a hug, an edible) as quickly as possible.

6. Record the learner's response as correct or incorrect (or no response), or as an approximation of the model (e.g., touches the ball, but does not pick it up).

7. Continue pretesting with the remaining models.

Practitioners can use the pretest procedure with all motor and vocal verbal models (e.g., name, pause, "Do this," "Say ball"). Striefel recommended pretesting for several sessions until all models have been pretested at least three times. If the learner correctly responds during pretesting to a selected model at a set criterion level (e.g., three of three correct), then the practitioner should advance to other models. If the learner does not meet the criterion, the practitioner should select that model for imitation training.
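The pretest decision rule described above can be expressed as a short program. This is only an illustrative sketch, assuming a criterion of three of three correct trials; the function and variable names are not from the text.

```python
# Minimal sketch of Striefel's pretest decision rule (illustrative
# names): models the learner already imitates at criterion are
# skipped; the rest are selected for imitation training.

def models_needing_training(pretest_results, criterion=3):
    """pretest_results maps each model to a list of booleans,
    one per pretest trial (True = correct imitative response)."""
    selected = []
    for model, trials in pretest_results.items():
        # Select the model for training unless the learner met
        # the criterion (e.g., three of three correct).
        if sum(trials) < criterion:
            selected.append(model)
    return selected

pretest = {
    "raise arm": [True, True, True],      # meets criterion; no training
    "touch nose": [True, False, True],    # below criterion; train
    "clap hands": [False, False, False],  # below criterion; train
}
print(models_needing_training(pretest))  # ['touch nose', 'clap hands']
```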

Sequencing the Selected Models for Training

Practitioners use the pretest results to arrange the presentation sequence for the selected models, arranging the sequence from the easiest to the most difficult models to imitate. The first models selected for imitation training are those that the learner imitated correctly on some, but not all, of the pretest trials. The models that the learner responded to incorrectly, but approximated the model, are selected next. Finally, the models that the learner failed to perform, or performed incorrectly, are the last to be selected for training.

Conducting Imitation Training

Striefel (1974) suggested using four conditions for imitation training: preassessment, training, postassessment, and probing for imitative behaviors. The procedures used in imitation training are the same as those in the pretest with the exception of when and how often the practitioner presents the selected models.

Preassessment

The preassessment is a short pretest given before each training session. Practitioners use the first three models currently selected for training for the preassessment. These three models are presented three times each in random order during the preassessment. If the learner's behavior has similarity to the model on all three presentations, that model is removed from the training. The preassessment procedure allows practitioners to evaluate the learner's current performance on the models selected for training that session, and to determine the learner's progress in learning to respond to the model.


Training

During training, practitioners use repeated presentations of one of the three models used in the preassessment. The model selected first for training is the one most often responded to during the preassessment (i.e., the behavior was similar to the model on some, but not all, of the preassessment presentations). If, however, the learner made approximations only, the behavior with the closest similarity to the model is selected first for training. Training continues until the learner responds to the model correctly on five consecutive trials.

Imitation training will likely include physical guidance to prompt the response if the learner fails to respond. For example, the practitioner may physically guide the learner's behavior through the complete response. Physical guidance allows the learner to experience the response and the reinforcer for that specific movement. After physically assisting the complete response, the practitioner will gradually withdraw the physical guidance, letting go of the student's body just before the entire movement is completed, and then continue to fade the physical guidance by withdrawing the physical support earlier on each subsequent trial. Eventually, the learner may complete the movement without assistance. When the learner responds to the model without prompting for five consecutive trials, that model is included in the postassessment.

Postassessment

During the postassessment, the practitioner presents, three times each, five previously learned models and five models that are still included in imitation training. The practitioner will remove a most recently learned behavior from imitation training following three consecutive postassessments in which the learner responds correctly without physical guidance to the model on 14 out of the 15 opportunities. Physical guidance, however, is appropriate to use during the postassessment. If the learner does not reach this criterion (14 out of 15 postassessment opportunities), Striefel (1974) recommended continuing imitation training with that model. The postassessment procedure allows the practitioner to evaluate how well a learner performs the previously and most recently learned behaviors.
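The 14-of-15 removal criterion across three consecutive postassessments can be sketched as a simple check. The names and data layout below are assumptions for illustration, not from the text.

```python
# Hypothetical check of the postassessment removal rule: a recently
# learned model is removed from training after three consecutive
# postassessments with at least 14 of 15 correct, unprompted responses.

def ready_for_removal(scores, sessions_needed=3, criterion=14):
    """scores: chronological list of correct-response counts
    (each out of 15 opportunities), one per postassessment."""
    if len(scores) < sessions_needed:
        return False
    # Only the most recent consecutive postassessments count.
    return all(s >= criterion for s in scores[-sessions_needed:])

print(ready_for_removal([15, 14, 15]))  # True: three sessions at criterion
print(ready_for_removal([15, 13, 15]))  # False: middle session below 14
```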

Probes for Imitative Behaviors

Practitioners will use approximately five nontrained, novel models to probe for occurrences of imitation at the end of each imitation training session, or they will intermix the probes with the training trials. The probe procedure uses the same procedures as the preassessment activities, but without using the antecedent verbal response prompt (i.e., child's name, pause, "Do this") or other forms of response prompts (e.g., physical guidance). Probing for nontrained imitations provides data on the learner's progress in developing an imitation repertoire: in other words, learning to do what the model does.

Guidelines for Imitation Training

Keep Training Sessions Active and Brief

Most practitioners use short training sessions during imitation training, typically 10 to 15 minutes, but often schedule more than one session per day. Two or three brief sessions can be more effective than one long session. To maintain quick and active training, practitioners should allow no more than a few seconds between trials.

Reinforce Both Prompted and Imitative Responses

In the early stages of imitation training, practitioners should reinforce each occurrence of either prompted responses or true imitation. If the learner's participation requires a reinforcer other than praise, it should be presented immediately in small amounts that the learner can consume quickly (e.g., cereal bits, 5 seconds of music). Practitioners should reinforce only matching responses or imitative behaviors that occur within 3 to 5 seconds of the model. Learners who consistently emit correct matching responses, but not immediately following the model, should be reinforced for shorter response latencies (e.g., a decreasing contingency from 7 seconds to 6 seconds to 5 seconds to 4 seconds).
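The decreasing latency contingency can be sketched as follows. This is purely an illustrative sketch; the function names and step values (beyond the 7-to-4-second progression mentioned above) are assumptions.

```python
# Sketch of a decreasing latency contingency: the latency criterion
# tightens stepwise from 7 s toward the terminal 3-5 s window, and a
# correct matching response is reinforced only if it occurs within
# the current criterion. Names are illustrative, not from the text.

LATENCY_STEPS_S = [7, 6, 5, 4]  # progressively shorter criteria

def should_reinforce(correct, latency_s, criterion_s):
    """Reinforce only correct responses emitted within the
    current latency criterion after the model."""
    return correct and latency_s <= criterion_s

print(should_reinforce(True, 6.5, LATENCY_STEPS_S[0]))  # True at the 7 s step
print(should_reinforce(True, 6.5, LATENCY_STEPS_S[2]))  # False at the 5 s step
```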

Pair Verbal Praise and Attention with Tangible Reinforcers

During imitation training, many learners, particularly children with severe to profound developmental disabilities, need tangible consequences such as edibles or liquids. As training progresses, practitioners try to use social attention and verbal praise to maintain the matching responses or imitative behaviors. They do this by pairing the delivery of other consequences with social and verbal praise. Social attention (e.g., patting the student's arm affectionately) and/or descriptive verbal praise should immediately follow each correct response or approximation, simultaneously with the other consequence. A learner's willingness to participate in imitation training may increase when the practitioner schedules a preferred activity to follow each session.


If Progress Breaks Down, Back Up and Move Ahead Slowly

There may be identifiable reasons for worsening performance, such as reinforcer satiation or context distractions; or perhaps the practitioner presented models that were too complex for the learner. Regardless of whether a clear reason exists for the worsening performance, the practitioner should return to an earlier level of successful performance. Once successful imitative responses are reestablished, training can advance.

Keep a Record

As with all behavior change programs, applied behavior analysts should directly measure and record the learner's performance and review the data after each session. With the use of frequent and direct measurement, the practitioner can make objective, informed, data-based decisions concerning the effects of the training program.

Fade Out Verbal Response Prompts and Physical Guidance

Parents and caregivers in the everyday environments of young children almost always teach imitation skills using verbal response prompts and physical guidance. For example, a caregiver may tell a child to "Wave bye-bye," model waving, and then physically guide the wave of the child. Or, a parent may ask a child, "What does the cow say?" The parent then presents a model ("The cow says moo"), tells the child to say "moo," and then provides praise and attention if the child says "moo." This natural instructional process is the same process advocated in this chapter for teaching imitative skills to learners who do not imitate: A verbal response prompt is given ("Do this"), a model is presented, and physical guidance is used when needed. However, imitation training is not complete until all response prompts have been withdrawn. Children need to learn to do what the model does without the supports of response prompting. Therefore, to promote the effective use of imitation, practitioners should fade out the response prompts used during the acquisition of trained matching responses.

Ending Imitation Training

Decisions to stop imitation training depend on the learner's behavior and program goals. For example, the practitioner could stop motor imitation training when the student imitates the first presentations of novel models, or when the student imitates a sequence of behaviors (e.g., washing hands, brushing teeth, finger spelling).

Summary

Definition of Imitation

1. Four behavior–environment relations define imitation: (a) Any physical movement may function as a model for imitation. A model is an antecedent stimulus that evokes the imitative behavior. (b) An imitative behavior must be emitted within 3 seconds of the presentation of the model. (c) The model and the behavior must have formal similarity. (d) The model must be the controlling variable for an imitative behavior.

Formal Similarity

2. Formal similarity occurs when the model and the behavior physically resemble each other and are in the same sense mode (i.e., they look alike, sound alike).

Immediacy

3. The temporal relation of immediacy between the model and the imitative behavior is an important feature of imitation. However, a form (topography) of imitation may occur in the context of everyday life situations, and at any time. When the form of a previous imitation occurs in the absence of the model, that delayed behavior is not an imitative behavior.

A Controlled Relation

4. We often view imitative behavior as doing the same. Although the formal similarity of doing the same is a necessary condition for imitation, it is not sufficient. Formal similarity can exist without the model functionally controlling the similar behavior.

5. A controlling relation between the behavior of a model and the behavior of the imitator is inferred when a novel model evokes a similar behavior in the absence of a history of reinforcement.

6. An imitative behavior is a new behavior emitted following a novel antecedent event (i.e., the model). After the model evokes the imitation, that behavior comes into contact with contingencies of reinforcement. These new contingencies of reinforcement then provide the controlling variables for the discriminated operant (i.e., MO/SD → R → SR).

Imitation Training for Learners Who Do Not Imitate

7. Applied behavior analysts have validated repeatedly the instructional method used by Baer and colleagues as an effective method for teaching imitation to children who do not imitate.


8. Building on the experimental methods of Baer and colleagues, Striefel (1974) developed an imitation training program for practitioners.

9. The components of Striefel's protocol are as follows: (a) assessing, and teaching if necessary, prerequisite skills for imitation training; (b) selecting models for training; (c) pretesting; (d) sequencing models for training; and (e) conducting imitation training.

Guidelines for Imitation Training

10. Keep training sessions active and brief.

11. If the learner's participation requires a reinforcer other than praise, it should be presented immediately in small amounts that the learner can consume quickly.

12. Pair verbal praise and attention with tangible reinforcers.

13. If progress breaks down, back up and move ahead slowly.

14. Measure and record the learner’s performance and reviewthe data after each session.

15. Fade out verbal response prompts and physical guidance.

16. Stop motor imitation training when the student consistently imitates novel models, or when the student imitates a sequence of behaviors (e.g., washing hands, brushing teeth, finger spelling).


Chaining

Key Terms

backward chaining
backward chaining with leaps ahead
behavior chain
behavior chain interruption strategy
behavior chain with a limited hold
chaining
forward chaining
task analysis
total-task chaining

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List,© Third Edition

Content Area 8: Selecting Intervention Outcomes and Strategies

8-1 Conduct a task analysis.

Content Area 9: Behavior Change Procedures

9-12 Use chaining.

© 2006 The Behavior Analyst Certification Board, Inc.® (BACB®), all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

From Chapter 20 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


This chapter defines a behavior chain, provides a rationale for establishing behavior chains in applied settings, and discusses the importance of task analysis in behavior chain training. The chapter presents a procedure for constructing and validating a task analysis, along with procedures for assessing individual mastery levels. Forward chaining, total-task chaining, backward chaining, and backward chaining with leaps ahead are addressed, followed by guidelines for deciding which behavior chain procedure to use in applied settings and a description of behavior chains with limited holds. Techniques for breaking inappropriate chains are also addressed. The chapter concludes with an examination of factors affecting the performance of a behavior chain.

Definition of a Behavior Chain

A behavior chain is a specific sequence of discrete responses, each associated with a particular stimulus condition. Each discrete response and the associated stimulus condition serve as an individual component of the chain. When individual components are linked together, the result is a behavior chain that produces a terminal outcome. Each response in a chain produces a stimulus change that simultaneously serves as a conditioned reinforcer for the response that produced it and as a discriminative stimulus (SD) for the next response in the chain. Each stimulus that links two response components together serves a dual function: It is both a conditioned reinforcer and an SD (Reynolds, 1975; Skinner, 1953). Reinforcement produced by the final response in the chain maintains the effectiveness of the stimulus changes that serve as conditioned reinforcers and SDs for each response in the chain. The only exceptions to the dual function of the components are the first and last stimuli in the chain. In these instances, the stimulus serves only one function: either as the SD or as the conditioned reinforcer.

Reynolds (1975) provided this laboratory example of a behavioral chain (illustrated in Figure 1):

An experimental example of a chain may begin when a pigeon is presented with a blue key. When the pigeon pecks the key, it changes to red. After the key turns red, the pigeon presses a pedal that turns the key yellow. During yellow, displacing a bar changes the key to green. Finally, during green, pecks are reinforced with the operation of a grain-delivery mechanism and its associated stimuli, in the presence of which the bird approaches the grain magazine and eats. (pp. 59–60)

In Reynolds' example, the links are arranged in the following sequence: blue key-peck-red; red-pedal press-yellow; yellow-bar press-green; green-key peck-grain magazine; grain magazine-eat-food intake. "Because each stimulus has a dual function, as discriminative stimulus and conditioned reinforcer, the links overlap. In fact, it is this dual function of the stimuli that holds the chain together" (Reynolds, 1975, p. 60).

As Figure 1 shows, this chain includes four responses (R1, R2, R3, and R4), with a specific stimulus condition (S1, S2, S3, and S4) associated with each response. The blue light (S1) is an SD to occasion the first response (R1), a key peck that terminates the blue light and produces the onset of the red light (S2). The red light serves as conditioned reinforcement for R1 and as an SD for R2 (a pedal press). The pedal press (R2) terminates the red light and produces the onset of the yellow light (S3), and so on. The last response produces food, thus completing and maintaining the chain.

S1 Blue Light → R1 Key Peck (terminates blue light, activates red light) → S2 Red Light → R2 Pedal Press (terminates red light, activates yellow light) → S3 Yellow Light → R3 Bar Press (terminates yellow light, activates green light) → S4 Green Light → R4 Key Peck (terminates green light, activates food magazine) → SR (food)

S = Stimulus; R = Response; SR = Reinforcement

Figure 1 Illustration of a behavior chain consisting of four components. Based on a chain described by Reynolds (1975, pp. 59–60).
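The dual function of each intermediate stimulus in the pigeon chain can be made concrete with a short sketch. The data structure and names below are illustrative assumptions, not from the text.

```python
# Sketch of the four-component pigeon chain described by Reynolds:
# each response's product is derived from the next component, so the
# code reflects the dual function of every intermediate stimulus
# (conditioned reinforcer for the response that produced it, SD for
# the next response). Names are illustrative.

components = [
    ("blue light", "key peck"),
    ("red light", "pedal press"),
    ("yellow light", "bar press"),
    ("green light", "key peck"),
]
TERMINAL_REINFORCER = "food"

def trace_chain(components, terminal):
    links = []
    for i, (sd, response) in enumerate(components):
        # The stimulus each response produces is the next SD,
        # or the terminal reinforcer for the last response.
        produced = components[i + 1][0] if i + 1 < len(components) else terminal
        links.append(f"{sd} -> {response} -> {produced}")
    return links

for link in trace_chain(components, TERMINAL_REINFORCER):
    print(link)
# blue light -> key peck -> red light
# red light -> pedal press -> yellow light
# yellow light -> bar press -> green light
# green light -> key peck -> food
```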


Table 1 Delineation of the Relationship between Each Discriminative Stimulus, Response, and Reinforcement in a Sample Behavior Chain

Discriminative stimulus | Response | Conditioned reinforcement
S1 "Put on your coat." | R1 Obtain coat from closet | Coat in hands
S2 Coat in hands | R2 Place one arm in sleeve | One arm in sleeve
S3 One arm in sleeve/one arm out | R3 Place second arm in sleeve | Coat on
S4 Coat on | R4 Zip up the coat | Teacher praise

Table 1 shows the links of a behavior chain using a classroom example. In preparing a class of preschool students for recess, the teacher might say to one student, "Please put on your coat." The teacher's statement would serve as the SD (S1) evoking the first response (R1) in the chain, obtaining the coat from the closet. That response in the presence of the teacher's statement terminates the teacher's statement and produces the onset of the coat in the student's hands (S2). The coat in the student's hands serves as conditioned reinforcement for obtaining the coat from the closet (R1) and as an SD (S2) for putting one arm through a sleeve (R2). That response, putting one arm through the sleeve in the presence of the coat in both hands, terminates the stimulus condition of the coat in the student's hands and produces the onset of one arm in a sleeve and one arm out (S3), a stimulus change that serves as conditioned reinforcement for putting one arm through a sleeve and as an SD (S3) for placing the second arm through the other sleeve (R3). That response terminates the condition of one arm in a sleeve and one arm out, and produces the SD (S4) of the coat being completely on. The coat being fully on serves as a conditioned reinforcer for putting the second arm through the other sleeve and as an SD for zipping it (R4). Finally, zipping up the coat completes the chain and produces teacher praise.

A behavior chain has the following three important characteristics: (a) A behavior chain involves the performance of a specific series of discrete responses; (b) the performance of each behavior in the sequence changes the environment in such a way that it produces conditioned reinforcement for the preceding response and serves as an SD for the next response; and (c) the responses within the chain must be performed in a specific sequence, usually in close temporal succession.
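The dual stimulus function described in characteristic (b) can be illustrated with a small sketch. The data structure and check function below are ours, not the authors’; the four links come from Table 1:

```python
# Each link of a behavior chain modeled as (SD, response, stimulus product).
# The defining property: each response's stimulus product serves both as
# conditioned reinforcement for that response and as the SD for the next.
links = [
    ('"Put on your coat."',          "Obtain coat from closet",    "Coat in hands"),
    ("Coat in hands",                "Place one arm in sleeve",    "One arm in sleeve/one arm out"),
    ("One arm in sleeve/one arm out","Place second arm in sleeve", "Coat on"),
    ("Coat on",                      "Zip up the coat",            "Teacher praise"),
]

def check_chain(links):
    """Return True if every response's stimulus product matches the SD
    of the following response (the linking property of a chain)."""
    for (_, _, product), (next_sd, _, _) in zip(links, links[1:]):
        if product != next_sd:
            return False
    return True

print(check_chain(links))  # -> True
```

If any link’s product fails to match the next link’s SD, the sequence is just a series of responses, not a chain in the technical sense.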

Behavior Chains with a Limited Hold

A behavior chain with a limited hold is a sequence of behaviors that must be performed correctly and within a specified time to produce reinforcement. Thus, behavior chains with limited holds are characterized by performance that is accurate and proficient. An assembly task on a production line exemplifies a chain with a limited hold. To meet the production requirements of the job, an employee might have to assemble 30 couplers onto 30 shafts within 30 minutes (one per minute). If the coupler is placed on the shaft, the retaining clip applied, and the unit forwarded to the next person in line within the prescribed period of time, reinforcement is delivered. In a behavior chain with a limited hold, the person must not only have the prerequisite behaviors in his repertoire, but must also emit those behaviors in close temporal succession to obtain reinforcement.

The behavior analyst must recognize that accuracy and rate are essential dimensions of chains with limited holds. For example, if a person is able to complete a chain in the correct sequence but performs one or more responses in the chain slowly, changing the focus to increase the rate of performance is warranted. One way to do this is to set a time criterion for completion of each response within the chain; another is to set a designated time criterion for completion of the total chain.
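These two kinds of time criteria can be pictured with a small check function. This is a hypothetical sketch: the function name and parameters are ours, and the numbers come from the coupler-assembly example above:

```python
def limited_hold_met(step_times, per_step_limit=None, total_limit=None):
    """Return True only if the timing criteria of a limited hold are met.

    step_times     -- seconds taken for each (correct) response in the chain
    per_step_limit -- optional time criterion for each individual response
    total_limit    -- optional time criterion for the total chain
    """
    if per_step_limit is not None and any(t > per_step_limit for t in step_times):
        return False
    if total_limit is not None and sum(step_times) > total_limit:
        return False
    return True

# One coupler assembled in three responses; the whole unit must be
# finished within 60 seconds (one per minute, as in the example).
print(limited_hold_met([20, 15, 18], total_limit=60))     # -> True
print(limited_hold_met([20, 15, 18], per_step_limit=16))  # -> False
```

Note that the check assumes the responses were already emitted correctly and in sequence; a limited hold adds the timing requirement on top of accuracy.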

Rationale for Using Chaining

Whereas behavior chain connotes a particular sequence of responses ending in reinforcement, the term chaining refers to various methods for linking specific sequences of stimuli and responses to form new performances. In forward chaining, behaviors are linked together beginning with the first behavior in the sequence. In backward chaining, the behaviors are linked together beginning with the last behavior in the sequence. Both of these procedures and their variations are discussed in detail later in this chapter.

There are several reasons for teaching new behavior chains to people. First, an important aspect of the education of students with developmental disabilities is to increase independent living skills (e.g., using public utilities, taking care of personal needs, executing travel skills, socializing appropriately). As these skills develop, the student is more likely to function effectively in least restrictive environments or participate in activities without adult supervision. Complex behaviors developed with chaining procedures allow individuals to function more on their own.

Second, chaining provides the means by which a series of discrete behaviors can be combined to form a series of responses that occasion the delivery of positive reinforcement. That is, essentially, chaining can be used to add behaviors to an existing behavioral repertoire. For example, a person with a developmental disability might consistently seek assistance from a coworker or job coach while completing an assembly task. A chaining procedure could be used to increase the number of tasks that must be performed before reinforcement is delivered. To that end, the teacher might give the individual a written or pictorial list of the parts that must be assembled to complete the job. When the first portion of the task is finished, the individual crosses off the first word or picture on the list and proceeds to the second task. In behavioral terms, the first word or picture on the list serves as the SD to occasion the response of completing the first task. That response, in the presence of the word or picture on the list, terminates the initial stimulus and produces the next stimulus, the second word or picture on the list. The completion of the second task serves as conditioned reinforcement for completing the task and produces the onset of the third SD. In this way the chaining procedure enables simple behaviors to be combined into a longer series of complex responses.

Finally, chaining can be combined with other behavior change procedures (prompting, instructing, and reinforcing) to build more complex and adaptive repertoires (McWilliams, Nietupski, & Hamre-Nietupski, 1990).

Task Analysis

Before individual components of a chain can be linked together, the behavior analyst must construct and validate a task analysis of the components of the behavioral sequence, and assess the mastery level of the learner with respect to each behavior in the task analysis. Task analysis involves breaking a complex skill into smaller, teachable units, the product of which is a series of sequentially ordered steps or tasks.

Constructing and Validating a Task Analysis

The purpose of constructing and validating a task analysis is to determine the sequence of behaviors that are necessary and sufficient to complete a given task efficiently.

The sequence of behaviors that one person might have to perform may not be identical to what another person needs to achieve the same outcome. A task analysis should be individualized according to the age, skill level, and prior experience of the person in question. Further, some task analyses are composed of a limited number of major steps, with each major step containing four to five subtasks. Figure 2 illustrates a task analysis for making a bed developed by McWilliams and colleagues (1990).

At least three methods can be used to identify and validate the components of a task analysis. In the first method the behavioral components of the sequence are developed after observing competent individuals perform the desired sequence of behaviors. For example, Test, Spooner, Keul, and Grossi (1990) produced a task analysis for using a public telephone by observing two adults perform the task, and then validating that sequence by training a person with developmental disabilities to use the task analysis. Based on the training, the researchers made subsequent modifications to the original sequence of tasks. Table 2 shows the final task analysis containing 17 steps.

Some of the steps in the task analysis of using a public telephone might appear in a different order or be combined differently. For instance, Step 3, choose the correct change, could be interchanged with Step 2, find the telephone number; or an 18th step could be added, “Place change back in your pocket.” There are no absolute rules for determining the number or sequence of steps. However, practitioners are advised to consider the learner’s physical, sensory, or motoric skill levels when determining the scope and sequence of the steps, and be prepared to adjust if necessary.
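Because steps and their time limits are individualized and may be reordered, it can help to treat a task analysis as ordered data. A minimal sketch (the `Step` class is ours; the three steps and their limits are the first entries of Table 2):

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One step of a task analysis paired with its time limit."""
    description: str
    time_limit_s: int  # seconds allowed for this response

telephone_task = [
    Step("Locate the telephone in the environment", 120),
    Step("Find the telephone number", 60),
    Step("Choose the correct change", 30),
]

# Reordering steps (e.g., swapping Steps 2 and 3, as discussed above)
# is simply a list operation on the ordered data.
telephone_task[1], telephone_task[2] = telephone_task[2], telephone_task[1]
print(telephone_task[1].description)  # -> Choose the correct change
```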

A second method of validating a task analysis is to consult with experts or persons skilled in performing the task (Snell & Brown, 2006). For instance, when teaching mending skills to young adults with developmental disabilities, consultation with a seamstress or a secondary-level home economics teacher is advisable. Based on such expert evaluation, a task analysis could be constructed that would serve as the basis for assessment and training.

A third method of determining and validating the sequence of behaviors in a task analysis is to perform the behaviors oneself (Snell & Brown, 2006). For example, a practitioner interested in teaching shoe tying could repeatedly tie her own shoes, noting the discrete, observable steps necessary to achieve a correctly tied shoe. The advantage of self-performing the task is the opportunity to come into contact with the task demands of the sequence prior to training a learner, getting a clearer idea of the behaviors to be taught and the associated SDs necessary to occasion each behavior. Self-performing the task repeatedly would refine the response topography necessary for the learner to use the sequence most efficiently. Table 3 shows how an initial 7-step sequence for tying a shoe might be expanded to 14 steps after self-performing the behavior (cf. Bailey & Wolery, 1992).

A systematic trial-and-error procedure can assist the behavior analyst in developing a task analysis. Using a systematic trial-and-error method, an initial task analysis is generated and then refined and revised as it is being tested. With revisions and refinements obtained through field tests, a more functional and appropriate task analysis can be achieved. For instance, as mentioned earlier, Test and colleagues (1990) generated their initial task analysis for using a public telephone by watching two adults complete the task. They subsequently carried the process a step further by asking a person with developmental disabilities to perform the same task and modified the task analysis accordingly.

Regardless of the method used to put the steps in sequence, the SDs and corresponding responses must be identified. But being able to perform a response is not sufficient; the individual must be able to discriminate the conditions under which a given response should be performed. Listing the discriminative stimuli and associated responses helps the trainer determine whether naturally occurring SDs will evoke different or multiple responses. This topic will be discussed in detail later in the chapter.

Assessing Mastery Level

Mastery level is assessed to determine which components of the task analysis a person can perform independently. There are two principal ways to assess a person’s mastery level of task analysis behaviors prior to

Figure 2 Task analysis of bedmaking skills.

Given an unmade bed that included a messed-up bedspread, blanket, pillow, and flat sheet and a fitted sheet, the student will:

Section 1: bed preparation
1. remove pillow from bed
2. pull bedspread to foot of bed
3. pull blanket to foot of bed
4. pull flat sheet to foot of bed
5. smooth wrinkles of fitted sheet

Section 2: flat sheet
6. pull top of flat sheet to headboard
7. straighten right corner of flat sheet at foot of bed
8. repeat #7 for the left corner
9. make sheet even across top end of mattress
10. smooth wrinkles

Section 3: blanket
11. pull top of blanket to headboard
12. straighten right corner of blanket at foot of bed
13. repeat #12 for the left corner
14. make blanket even with sheet across top end of mattress
15. smooth wrinkles

Section 4: bedspread
16. pull top of bedspread to headboard
17. pull right corner of bedspread at foot to floor
18. repeat #17 for left side
19. make bedspread even with floor on both sides
20. smooth wrinkles

Section 5: pillow
21. fold top of spread to within 4 inches of the pillow width
22. place pillow on folded portion
23. cover pillow with the folded portion
24. smooth bedspread wrinkles on and around the pillow

From “Teaching Complex Activities to Students with Moderate Handicaps Through the Forward Chaining of Shorter Total Cycle Response Sequences,” by R. McWilliams, J. Nietupski, & S. Hamre-Nietupski, 1990, Education and Training in Mental Retardation, 25, p. 296. Copyright 1990 by the Council for Exceptional Children. Reprinted by permission.


Table 2 Task Analysis and Time Limits for Performing Each Task for Using a Public Telephone

Step Time Limit

1. Locate the telephone in the environment 2 minutes

2. Find the telephone number 1 minute

3. Choose the correct change 30 seconds

4. Pick up receiver using left hand 10 seconds

5. Put receiver to left ear and listen for dial tone 10 seconds

6. Insert first coin 20 seconds

7. Insert second coin 20 seconds

8–14. Dial seven-digit number 10 seconds per digit

15. Wait for telephone to ring a minimum of five times 25 seconds

16. If someone answers, initiate conversation 5 seconds

17. If telephone is busy, hang up phone and collect money 15 seconds

From “Teaching Adolescents with Severe Disabilities to Use the Public Telephone,” by D. W. Test, F. Spooner, P. K. Keul, & T. A. Grossi, 1990, Behavior Modification, p. 161. Copyright 1990 by Sage Publications. Reprinted by permission.

Table 3 Initial and Expanded Steps for Teaching Shoe Tying

Shorter sequence (a)
1. Partially tighten shoe laces.
2. Pull shoe laces tight—vertical pull.
3. Cross shoe laces.
4. Tighten laces—horizontal pull.
5. Tie laces into a knot.
6. Make a bow.
7. Tighten bow.

Longer sequence (b)
1. Pinch lace.
2. Pull lace.
3. Hang lace ends from corresponding sides of shoe.
4. Pick up laces in corresponding hands.
5. Lift laces above shoe.
6. Cross right lace over the left to form a tepee.
7. Bring left lace toward student.
8. Pull left lace through tepee.
9. Pull laces away from each other.
10. Bend left lace to form a loop.
11. Pinch loop with left hand.
12. Bring right lace over the fingers—around loop.
13. Push right lace through hole.
14. Pull loops away from each other.

Sources: (a) Santa Cruz County Office of Education, Behavioral Characteristics Progression. Palo Alto, CA: VORT Corporation, 1973. (b) Smith, D. D., Smith, J. O., & Edgar, E. “Research and Application of Instructional Materials Development.” In N. G. Haring & L. Brown (Eds.), Teaching the Severely Handicapped (Vol. 1). New York: Grune & Stratton, 1976. From Teaching Infants and Preschoolers with Handicaps, p. 47, by D. B. Bailey & M. Wolery, 1984, Columbus, OH: Charles E. Merrill. Used by permission.


training—the single-opportunity method and the multiple-opportunity method (Snell & Brown, 2006).

Single-Opportunity Method

The single-opportunity method is designed to assess a learner’s ability to perform each behavior in the task analysis in correct sequence. Figure 3 is an example of a form used to record a learner’s performance. Specifically, a plus sign (+) or minus sign (–) is marked for each behavior that is correctly or incorrectly emitted.

Assessment in this example began when the teacher said, “Tom, put on your hearing aid.” Tom’s responses to the steps in the task analysis were then recorded. The figure shows Tom’s data for the first four days of assessment. On Day 1 Tom opened the hearing aid container and removed the harness; each of these steps was performed correctly, independently, sequentially, and within a 6-second time limit. However, Tom then attempted to put the hearing aid harness over his head (Step 5) without first doing Steps 3 and 4. Because he continued to perform this behavior for more than 10 seconds, the teacher stopped the assessment and scored Steps 3 and 4 and all remaining steps as incorrect. On Day 2 Tom was stopped after Step 1 because he performed a subsequent step out of order. On Days 3 and 4 the assessment was discontinued after Step 4 because Tom took more than 6 seconds to perform Step 5. Given a criterion for mastery of 100% accuracy within 6 seconds over three consecutive probes, the data indicate that Tom only met the criterion for Step 1 (the three +’s for Step 2 were not recorded consecutively).
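The single-opportunity scoring rule can be sketched as a short function. This is our illustration, not the authors’ procedure; Tom’s Day 1 performance from Figure 3 supplies the input:

```python
def score_single_opportunity(performance, n_steps):
    """Score a single-opportunity assessment.

    performance -- booleans for the steps actually attempted, in order,
                   ending with the first failure (error, out-of-order
                   response, or exceeded latency) if one occurred.
    Returns (scores, percent_correct), where scores is '+'/'-' per step.
    """
    scores = []
    for ok in performance:
        scores.append("+" if ok else "-")
        if not ok:
            break  # assessment stops at the first breakdown
    scores += ["-"] * (n_steps - len(scores))  # all remaining steps scored '-'
    percent = round(100 * scores.count("+") / n_steps)
    return scores, percent

# Tom's Day 1: Steps 1-2 correct, assessment stopped at Step 3.
scores, pct = score_single_opportunity([True, True, False], n_steps=14)
print(pct)  # -> 14
```

Note how the conservative rule (everything after the stop is scored incorrect) yields the 14% shown for 10/1 in Figure 3 even though only three steps were actually observed.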

Multiple-Opportunity Method

The multiple-opportunity method of task analysis assessment evaluates the person’s level of mastery across all the behaviors in the task analysis. If a step is performed incorrectly, out of sequence, or the time limit for completing the step is exceeded, the behavior analyst completes that step for the learner and then positions her for the next step. Each step performed correctly is scored as a correct response, even if the learner erred on the previous steps.

Figure 3 Task analysis data sheet for single-opportunity assessment of inserting a hearing aid.

Task Analysis Assessment of Inserting a Hearing Aid
Instructional cue: “Put on your hearing aid”
Teacher: Christine     Assessment method: Single opportunity
Student: Tom

                                          Date
Step  Behavior                      10/1  10/2  10/3  10/4
 1    Open container                 +     +     +     +
 2    Remove harness                 +     –     +     +
 3    Arm 1/Strap 1                  –     –     +     +
 4    Arm 2/Strap 2                  –     –     +     +
 5    Harness over head              –     –     –     –
 6    Fasten harness                 –     –     –     –
 7    Unsnap pocket                  –     –     –     –
 8    Remove aid from container      –     –     –     –
 9    Insert aid into pocket         –     –     –     –
10    Snap pocket                    –     –     –     –
11    Pick up earmold                –     –     –     –
12    Insert earmold into ear        –     –     –     –
13    Turn on aid                    –     –     –     –
14    Set the control                –     –     –     –
Percentage of Steps Correct         14%    7%   28%   28%

Materials: Hearing aid container, harness, earmold
Response latency: 6 seconds
Recording key: + (correct)  – (incorrect)
Criterion: 100% correct performance for 3 consecutive days

From “Teaching Severely Multihandicapped Students to Put on Their Own Hearing Aids,” by D. J. Tucker & G. W. Berry, 1980, Journal of Applied Behavior Analysis, 13, p. 69. Copyright 1980 by the Society for the Experimental Analysis of Behavior, Inc. Adapted by permission.


Figure 4 Task analysis data sheet for multiple-opportunity assessment of inserting a hearing aid.

Task Analysis Assessment of Inserting a Hearing Aid
Instructional cue: “Put on your hearing aid”
Teacher: Marge     Assessment method: Multiple opportunity
Student: Kathy

                                          Date
Step  Behavior                      10/1  10/2  10/3  10/4
 1    Open container                 –     +     +     +
 2    Remove harness                 +     –     +     +
 3    Arm 1/Strap 1                  +     –     +     +
 4    Arm 2/Strap 2                  –     –     +     +
 5    Harness over head              +     –     +     –
 6    Fasten harness                 –     +     –     +
 7    Unsnap pocket                  +     –     +     +
 8    Remove aid from container      +     –     –     +
 9    Insert aid into pocket         +     +     –     +
10    Snap pocket                    +     –     +     –
11    Pick up earmold                +     –     +     –
12    Insert earmold into ear        –     –     –     +
13    Turn on aid                    –     –     –     –
14    Set the control                –     –     –     –
Percentage of Steps Correct         57%   21%   57%   64%

Materials: Hearing aid container, harness, earmold
Response latency: 6 seconds
Recording key: + (correct)  – (incorrect)
Criterion: 100% correct performance for 3 consecutive days

From “Teaching Severely Multihandicapped Students to Put on Their Own Hearing Aids,” by D. J. Tucker and G. W. Berry, 1980, Journal of Applied Behavior Analysis, 13, p. 69. Copyright 1980 by the Society for the Experimental Analysis of Behavior, Inc. Adapted by permission.

Figure 4 shows that Kathy, after receiving the instructional cue, did not perform the first step in the sequence (opening the container), so a minus was recorded. The teacher then opened the container and positioned Kathy in front of it. Kathy then removed the harness (Step 2) and placed her arm through the strap (Step 3), so a plus was recorded for each of these steps. Because 6 seconds passed before Step 4 was completed, the teacher performed the step for Kathy, scored a minus, and positioned her to do Step 5. Kathy then performed Step 5, and the rest of the assessment continued in this fashion.
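Because every step is assessed regardless of earlier errors, multiple-opportunity scoring is simply step-by-step tallying. A sketch (the function is ours; Kathy’s 10/1 column from Figure 4 supplies the input):

```python
def score_multiple_opportunity(step_results):
    """Score a multiple-opportunity assessment.

    step_results -- one boolean per step in the task analysis. A failed
    step is completed by the assessor, so later steps can still earn '+'.
    Returns (scores, percent_correct).
    """
    scores = ["+" if ok else "-" for ok in step_results]
    percent = round(100 * scores.count("+") / len(scores))
    return scores, percent

# Kathy's 10/1 data: 8 of 14 steps correct despite the failed Step 1.
day1 = [False, True, True, False, True, False, True,
        True, True, True, True, False, False, False]
scores, pct = score_multiple_opportunity(day1)
print(pct)  # -> 57
```

Contrasting this with the single-opportunity sketch earlier makes the trade-off concrete: the same learner scored under the single-opportunity rule would have been stopped at the very first step.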

The key to using the multiple-opportunity method for a task analysis assessment is to ensure that teaching is not commingled with assessment. That is, if the teacher physically guides or models the step, an accurate assessment cannot be obtained. Therefore, it is important that the teacher does not help the student do any step.

Single-opportunity and multiple-opportunity methods can both be effective ways to determine mastery of initial skills in a behavior chain. Of the two, the single-opportunity method is the more conservative measure because the assessment terminates at the first step at which performance breaks down. It also provides less information to the teacher once instruction is initiated; but it is probably quicker to conduct, especially if the task analysis is long, and it reduces the likelihood of learning taking place during assessment (Snell & Brown, 2006). The multiple-opportunity method takes more time to complete, but it provides the behavior analyst with more information. That is, the teacher could learn which steps in the task analysis the learner has already mastered, thus eliminating the need to instruct steps already in the learner’s repertoire.

We have looked at two of the prerequisites for linking components of a chain: (a) conducting and validating a task analysis of the components of the behavioral sequence and (b) assessing the learner’s pre-existing skill with each component in the chain. In the following section, we will address the third component: teaching the individual to perform each step of the chain in close temporal succession.


Behavior Chaining Methods

After the task analysis has been constructed and validated and the criterion for success and the data collection procedures have been determined, the next step is to decide which chaining procedure to use to teach the new sequence of behavior. The behavior analyst has four options: forward chaining, total-task chaining, backward chaining, and backward chaining with leap aheads.

Forward Chaining

In forward chaining the behaviors identified in the task analysis are taught in their naturally occurring order. Specifically, reinforcement is delivered when the predetermined criterion for the first behavior in the sequence is achieved. Thereafter, reinforcement is delivered for criterion completion of Steps 1 and 2. Each succeeding step requires the cumulative performance of all previous steps in the correct order.

For example, a child learning to tie her shoes according to the 14-step task analysis shown in Table 3 would be reinforced when the first step, “Pinch lace,” is performed accurately three consecutive times. Next, reinforcement would be delivered when that step and the next one, “Pull lace,” are performed to the same criterion. Then, “Hang lace ends from corresponding sides of shoe” would be added, and all three steps would have to be performed correctly before reinforcement is delivered. Ultimately, all 14 steps in the task analysis should be performed in a similar manner. However, at any given training step a variety of response prompting and other strategies may be employed to occasion the response.
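The cumulative criterion just described can be expressed as a small simulation. All names here are ours, not the authors’; the three steps are the first entries of the longer sequence in Table 3:

```python
def forward_chaining(steps, trial_ok, criterion=3):
    """Simulate the forward-chaining training schedule.

    steps    -- ordered step names from the task analysis
    trial_ok -- trial_ok(k) -> True if steps 1..k were performed correctly
                on this trial (must eventually succeed, or training stalls)
    Yields each step as its cumulative criterion run is met.
    """
    for k in range(1, len(steps) + 1):
        consecutive = 0
        while consecutive < criterion:
            # Reinforcement requires correct cumulative performance of
            # steps 1..k; an error resets the consecutive-trial count.
            consecutive = consecutive + 1 if trial_ok(k) else 0
        yield steps[k - 1]  # criterion met; the next step is added

steps = ["Pinch lace", "Pull lace", "Hang lace ends"]
mastered = list(forward_chaining(steps, trial_ok=lambda k: True))
print(mastered)  # -> ['Pinch lace', 'Pull lace', 'Hang lace ends']
```

The `lambda k: True` stands in for a learner who always responds correctly; in practice `trial_ok` would reflect observed trial data.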

Longer chains of behaviors can be broken down into smaller chains or skill clusters, each of which can be taught in a manner similar to that used with a single-response unit. When one skill cluster is mastered, it is linked to the next. The final response in the first skill cluster sets the occasion for the first response in the second skill cluster. Essentially, in this variation skill clusters become the analogue for units of behaviors, and these clusters are linked.

McWilliams and colleagues (1990) combined skill clusters in forward chaining to teach bedmaking skills to three students with developmental disabilities. Based on a task analysis of bedmaking responses (see Figure 2), five skill clusters were identified, each containing four to five subtasks. Once baseline was collected showing minimal accuracy with completing the entire chain in the presence of the cue, “Show me how to make the bed,” an instructional procedure was chosen that involved teaching the clusters one at a time by breaking the complex behaviors into smaller chains (see Figure 5).

Initial training began with the teacher demonstrating the chain of tasks for Sections 1 and 2. Subsequent training involved teaching the prior section, the current section targeted for instruction, and the next section in the sequence. For example, if training was to be conducted on Section 2, Sections 1, 2, and 3 were demonstrated. Next, the students practiced the targeted sequences two to five times. When the sequence was performed correctly, praise was delivered. When an error occurred, a three-part correction procedure involving verbal redirection, redirection plus modeling, and/or physical guidance was initiated until the trial ended on a correct response. When the first skill cluster was mastered (S1), the second skill cluster (S2) was introduced, followed by the third (S3), the fourth (S4), and so forth.

The results indicated that the forward chaining procedure was effective in teaching the students bedmaking skills. That is, all the students were able to make their beds independently, or with only minimal assistance, when the teacher directed them to do so. Furthermore, all of them were able to make their beds in the generalization setting (i.e., the home).

The McWilliams and colleagues (1990) study illustrates two main advantages of forward chaining: (a) It can be used to link smaller chains into larger ones, and (b) it is relatively easy, so teachers are likely to use it in the classroom.

Our discussion thus far has focused on teaching behavior chains through direct teacher instruction. However, some evidence suggests that chained responses can also be learned through observation (Wolery, Ault, Gast, Doyle, & Griffen, 1991). Griffen, Wolery, and Schuster (1992) used a constant time delay (CTD) procedure to teach students with mental retardation a series of chained food preparation tasks. While the chained response was being taught to one student with CTD, two other students watched. Results showed that the two observing students learned at least 85% of the correct steps in the chain even though direct instruction on those steps had not occurred.

Total-Task Chaining

Total-task chaining (sometimes called total-task presentation or whole-task presentation) is a variation of forward chaining in which the learner receives training on each step in the task analysis during every session. Trainer assistance is provided with any step the person is unable to perform independently, and the chain is trained until the learner is able to perform all the behaviors in the sequence to the predetermined criterion. Depending on the complexity of the chain, the learner’s repertoire, and available resources, physical assistance and/or graduated guidance may be incorporated.

Figure 5 Number of correctly performed task steps across baseline, intervention, and generalization probe trials for three students (Mitch, Deb, and Beth); skill clusters S1–S5 are marked within the intervention phase.
From “Teaching Complex Activities to Students with Moderate Handicaps Through the Forward Chaining of Shorter Total Cycle Response Sequences,” by R. McWilliams, J. Nietupski, & S. Hamre-Nietupski, 1990, Education and Training in Mental Retardation, 25, p. 296. Copyright 1990 by the Council for Exceptional Children. Reprinted by permission.

Werts, Caldwell, and Wolery (1996) used peer models and total-task chaining to teach skills such as operating an audiotape, sharpening a pencil, and using a calculator to three elementary-level students with disabilities who were enrolled in general education classrooms. Response chains were individualized for each student based on teacher-recommended sequences that the students had to learn. Each session was divided into three parts: (a) The students with disabilities were probed on their ability to perform the total-task response chain; (b) a peer model competent with the chain demonstrated the chain in its entirety while simultaneously describing each step; and (c) the student partner was probed again to determine performance of the chain. During the probes before and after peer modeling, the students with disabilities were directed to complete the chain. If a student was successful, a correct response was noted, but no feedback was delivered. If a student was unsuccessful, the student’s view was blocked temporarily while the teacher completed that step in the chain, and then the student was redirected to complete the remaining steps. Each behavior in the response chain was scored.

All three students learned to complete the response chain after peer modeling, and reached criterion on the response chain (100% correct for 2 out of 3 days) over the course of the study. The results for Charlie, one of the three students in the study, are shown in Figure 6.

Test and colleagues (1990) used total-task chaining with a least-to-most prompting procedure to teach two adolescents with severe mental retardation to use a public telephone. After identifying and validating a 17-step task analysis (see Table 2), baseline data were obtained by giving the students a 3 × 5 index card with their home phone number and two dimes, and directing them to “phone home.” During training, a least-to-most prompting procedure consisting of three levels of prompts (verbal, verbal plus gesture, and verbal plus guidance) was implemented when errors occurred on any of the 17 steps that comprised the task analysis. Each instructional session consisted of two training trials followed by a probe to measure the number of independent steps completed. In addition, generalization probes were conducted in two other settings at least once per week.

Results showed that the combination of total-task chaining plus prompts increased the number of steps in the task analysis performed correctly by each student and that the skills generalized to two community-based settings (see Figure 7). Test and colleagues (1990) concluded that total-task chaining, especially as used within the context of a two-per-week training regimen, offered benefits to practitioners working with learners in community-based settings.

Backward Chaining

When a backward chaining procedure is used, all the behaviors identified in the task analysis are initially completed by the trainer, except for the final behavior in the chain. When the learner performs the final behavior in the sequence at the predetermined criterion level, reinforcement is delivered. Next, reinforcement is delivered when the last and the next-to-last behaviors in the sequence are performed to criterion. Subsequently, reinforcement is delivered when the last three behaviors are performed to criterion. This sequence proceeds backward through the chain until all the steps in the task analysis have been introduced in reverse order and practiced cumulatively.

Figure 6 Percentage of steps performed correctly by Charlie for the three response chains (accessing a computer program, adding with a calculator, and sharpening a pencil). Triangles represent percentage of steps correct in initial probes; open circles represent percentage of steps correct in imitation probes.
From “Peer Modeling of Response Chains: Observational Learning by Students with Disabilities,” by M. G. Werts, N. K. Caldwell, & M. Wolery, 1996, Journal of Applied Behavior Analysis, 29, p. 60. Copyright 1996 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Pierrel and Sherman (1963) conducted a classic demonstration of backward chaining at Brown University with a white rat named Barnabus. Pierrel and Sherman taught Barnabus to climb a spiral staircase, push down and cross a drawbridge, climb a ladder, pull a toy car by a chain, enter the car and pedal through a tunnel, climb a flight of stairs, run through an enclosed tube, enter an elevator, raise a miniature replica of the Brown University flag, exit the elevator, and finally press a bar for which he received a pellet of food. Barnabus became so famous for his performance of this elaborate sequence that he acquired the reputation of being “the rat with a college education.” This chain of 11 responses was trained by initially conditioning the last response in the sequence (the bar press) in the presence of a buzzer sound, which was established as the SD for the bar press. Then the next-to-last response in the sequence (exiting the elevator) was conditioned in the presence of the elevator at the bottom of the shaft. Each response in the chain was added in turn so that a discrete stimulus served as the SD for the next response and as the conditioned reinforcer for the preceding response.

To illustrate backward chaining with a classroom example, let us suppose that a preschool teacher wants to teach a student to tie his shoes. First, the teacher conducts a task analysis of shoe tying and puts the component behaviors in logical sequence.

1. Cross over laces on the shoe.

2. Tie a knot.


[Figure 7 comprises two panels (Student 1, Student 2), each plotting Number Correct (0-15) across Sessions 1-45 during Baseline and TT + Prompts conditions, with separate data paths for Settings 1, 2, and 3.]

Figure 7 Number of correctly performed steps in the public-telephone task analysis by two students during baseline and total-task chaining (TT) plus response prompts. Data points for Settings 2 and 3 show performance on generalization probes.
From "Teaching Adolescents with Severe Disabilities to Use the Public Telephone," by D. W. Test, F. Spooner, P. K. Keul, and T. A. Grossi, 1990, Behavior Modification, p. 165. Copyright 1990 by Sage Publications. Reprinted by permission.

3. Make a loop with the lace on the right side of the shoe, and hold it in the right hand.

4. With the left hand, wrap the other lace around the loop.

5. Use index or middle finger of the left hand to push the left lace through the opening between the laces.

6. Grasp both loops, one in each hand.

7. Draw the loops snug.

The teacher begins by training with the last step in the sequence, Step 7, until the student is able to complete it without mistakes for three consecutive trials. After each correct trial at Step 7, reinforcement is delivered. The teacher then introduces the next-to-last step, Step 6, and begins training the student to perform that step in conjunction with the student's performance of the final

step in the chain. Reinforcement is contingent on the successful performance of Steps 6 and 7. The teacher then introduces Step 5, making sure that the training step and all previously learned steps (i.e., Steps 5, 6, and 7) are executed in the proper sequence prior to reinforcement. The teacher can use supplemental response prompts to evoke correct responding at any step. However, any supplemental response prompts (verbal, pictorial, demonstration, or physical) introduced during training must be faded later in the program so that the student's behavior comes under the stimulus control of the naturally occurring SDs. In brief, when using a backward chaining procedure, the steps of the task analysis are trained in reverse order, beginning with the last step.

With backward chaining, the first behavior the learner performs independently produces the terminal reinforcement: The shoe is tied. The next-to-last response produces the onset of a stimulus condition that reinforces that step and serves as an SD for the last behavior, which is now established in the learner's behavioral repertoire. This reinforcing sequence is repeated for the remainder of the steps.
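The backward-chaining procedure just described can be sketched as a simple training loop. This is a minimal illustration, not a clinical protocol: the simulated learner and its 80% per-step success probability are hypothetical stand-ins for real performance data, while the step names and the three-consecutive-trials criterion come from the shoe-tying example above.

```python
# Minimal sketch of backward chaining over the shoe-tying task
# analysis. Training starts with the last step; once the learner
# performs the current suffix of the chain correctly on three
# consecutive trials, the next earlier step is added.
# The simulated learner (80% success per step) is hypothetical.

import random

STEPS = [
    "cross laces", "tie knot", "make right loop", "wrap left lace",
    "push lace through", "grasp both loops", "draw loops snug",
]

def backward_chain(steps, criterion=3, seed=1):
    rng = random.Random(seed)
    first_trained = len(steps)          # index of earliest trained step
    log = []                            # (current training step, trial correct?)
    while first_trained > 0:
        first_trained -= 1              # add the next earlier step
        consecutive = 0
        while consecutive < criterion:
            # Reinforcement is contingent on completing every step
            # from first_trained through the end of the chain.
            correct = all(rng.random() < 0.8 for _ in steps[first_trained:])
            consecutive = consecutive + 1 if correct else 0
            log.append((steps[first_trained], correct))
    return log

history = backward_chain(STEPS)
print(f"trials to criterion on the full chain: {len(history)}")
```

Note that training begins with the last step ("draw loops snug") and ends with the first ("cross laces"), mirroring the reversed order of the task analysis.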

Hagopian, Farrell, and Amari (1996) combined backward chaining with fading to reduce the life-threatening behavior of Josh, a 12-year-old male with autism and mental retardation. Because of medical complications associated with a variety of gastrointestinal factors, frequent vomiting, and constipation during the 6 months prior to the study, Josh refused to ingest any food or liquids by mouth. Indeed, when he did ingest food by mouth previously, he expelled it.

Over the 70-day course of Josh's treatment program, data were collected on liquid acceptance, expulsions, swallows, and avoidance of the target behavior: drinking water from a cup. After baseline was collected on Josh's ability to swallow 10 cc of water, a swallow condition without water was implemented. Basically, an empty syringe was placed and depressed in his mouth, and he was directed to swallow. Next, the syringe was filled with a small volume of water, and reinforcement was delivered when Josh swallowed the water. In subsequent phases, the volume of water in the syringe was gradually increased from 0.2 cc to 0.5 cc to 1 cc, and ultimately to 3 cc. To earn reinforcement in subsequent conditions, Josh had to emit the target chain using at first 3 cc of water from a cup and ultimately 30 cc of water. At the end of the study, a successful generalization probe was implemented with 90 cc of a mixture of water and juice. The results showed that the chaining procedure improved Josh's ability to emit the target chain (see Figure 8). Hagopian and colleagues explained their procedure this way:

We began by targeting a preexisting response (swallowing), which was the third and final response in the chain of behaviors that constitute drinking from a cup. Next,


[Figure 8 plots Percentage of Trials (0-100) with successful drinking and with avoidant behavior across Sessions 1-70 and the successive conditions: 10-cc cup baseline; swallow, no water; empty syringe; syringe dipped in water; 0.2-cc, 0.5-cc, 1-cc, and 3-cc syringe; 3-cc cup; 10-cc cup probes; 30-cc cup; and a 90-cc cup generalization probe, with a break in sessions.]

Figure 8 Percentage of trials with successful drinking and trials with avoidance behaviors.
From "Treating Total Liquid Refusal with Backward Chaining and Fading," by L. P. Hagopian, D. A. Farrell, & A. Amari, 1996, Journal of Applied Behavior Analysis, 29, p. 575. Copyright 1996 by the Society for the Experimental Analysis of Behavior, Inc. Used with permission.

reinforcement was delivered for the last two responses in the chain (accepting and swallowing). Finally, reinforcement was delivered only when all three responses in the chain occurred. (p. 575)
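The volume-fading progression used with Josh can be sketched as a criterion-based advancement rule. The sequence of volumes is taken from the text; the 80% session-success criterion is an assumption for illustration, since the study's actual decision rules are not reported here.

```python
# Sketch of criterion-based advancement through the fading steps
# described in the text: syringe volumes from 0.2 cc to 3 cc, then
# cup volumes from 3 cc to 30 cc. The 80% advancement criterion is
# an assumed placeholder, not the study's actual rule.

FADING_STEPS = [
    ("syringe", 0.2), ("syringe", 0.5), ("syringe", 1.0),
    ("syringe", 3.0), ("cup", 3.0), ("cup", 30.0),
]

def next_step(current_index, session_success_rate, criterion=0.8):
    """Advance to the next fading step only when the session's
    success rate meets the criterion; otherwise stay put."""
    at_last_step = current_index >= len(FADING_STEPS) - 1
    if session_success_rate >= criterion and not at_last_step:
        return current_index + 1
    return current_index

# Example: a session at the first step with 90% success advances.
print(FADING_STEPS[next_step(0, 0.9)])
```

The same rule could be run in reverse (dropping back a step after a poor session), a common safeguard in fading programs, though the text does not describe one here.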

A primary advantage of backward chaining is that on every instructional trial the learner comes into contact with the terminal reinforcer for the chain. As a direct outcome of reinforcement, the stimulus that is present at the time of reinforcement increases its discriminative properties. Further, the repeated reinforcement of all behaviors in the chain increases the discriminative capability of all the stimuli associated with these behaviors and with the reinforcer. The main disadvantage of backward chaining is that the potential passive participation of the learner in earlier steps of the chain may limit the total number of responses made during any given training session.

Backward Chaining with Leap Aheads

Spooner, Spooner, and Ulicny (1986) reported the use of a variation of backward chaining they called backward chaining with leap aheads. Backward chaining with leap aheads follows essentially the same procedures as backward chaining, except that not every step in the task analysis is trained. Some steps are simply probed. The purpose of the leap-aheads modification is to decrease the total training time needed to learn the chain. With conventional backward chaining, the stepwise repetition of the behaviors in the sequence may slow down the learning process, especially when the learner has thoroughly mastered some of the steps in the chain. For instance, in the previous shoe-tying illustration, the child

might perform Step 7, the last behavior in the sequence, and then leap ahead to Step 4 because Steps 5 and 6 are already in his repertoire. It is important to remember, however, that the learner must still perform Steps 5 and 6 correctly and in sequence with the other steps to receive reinforcement.
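The leap-aheads plan for the shoe-tying example can be sketched as below. The `mastered` set is hypothetical (here, Steps 5 and 6, matching the illustration in the text): mastered steps are probed rather than trained, but must still be performed in sequence for reinforcement.

```python
# Sketch of backward chaining with leap aheads. Steps assumed to be
# in the learner's repertoire (the `mastered` set is hypothetical)
# are probed instead of trained, shortening total training time.

SHOE_STEPS = [
    "cross laces", "tie knot", "make right loop", "wrap left lace",
    "push lace through", "grasp both loops", "draw loops snug",
]

def leap_ahead_plan(steps, mastered):
    """Return the instruction plan in backward-chaining order:
    ("train", step) for unmastered steps, ("probe", step) for
    mastered steps that are only checked, not retrained."""
    return [("probe" if s in mastered else "train", s)
            for s in reversed(steps)]

plan = leap_ahead_plan(SHOE_STEPS, {"push lace through", "grasp both loops"})
for mode, step in plan:
    print(mode, step)
```

With Steps 5 and 6 mastered, the plan trains Step 7 first, probes Steps 6 and 5, and then leaps ahead to train Step 4, exactly as described above.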

Forward, Total-Task, or Backward Chaining: Which to Use?

Forward chaining, total-task chaining, and backward chaining have each been shown to be effective with a wide range of self-care, vocational, and independent-living behaviors. Which chaining procedure should be the method of first choice? Research conducted to date does not suggest a clear answer. After examining evidence reported between 1980 and 2001, Kazdin (2001) concluded: "Direct comparisons have not established that one [forward chaining, backward chaining, or total-task chaining] is consistently more effective than the other" (p. 49).

Although overwhelmingly conclusive data do not favor one chaining method over another, anecdotal evidence and logical analysis suggest that total-task chaining may be appropriate when the student (a) can perform many of the tasks in the chain, but needs to learn them in sequence; (b) has an imitative repertoire; (c) has moderate to severe disabilities (Test et al., 1990); and/or (d) the task sequence or cycle is not very long or complex (Miltenberger, 2001).

The uncertainty about which approach to follow may be minimized by conducting a personalized task analysis for the learner, systematically applying the


single- or multiple-opportunity method to determine the starting point for instruction, relying on empirically sound data-based studies in the literature, and collecting evaluation data on the approach to determine its efficacy for that person.

Interrupting and Breaking Behavior Chains

Our discussion thus far has focused on procedures for building behavior chains. As we have seen, chaining has been used successfully to increase and improve the repertoires of a wide range of people and across a variety of tasks. Still, simply knowing how to link behavior to behavior is not always sufficient for practitioners. In some situations, knowing how a behavior chain works can be used to interrupt the execution of an existing chain (e.g., making toast) to advance a different response class of skills (e.g., a speech skill). Further, because some chains are inappropriate (e.g., excessive consumption of food), knowing how to break an inappropriate behavior chain can achieve positive results (e.g., the person eats an appropriate amount of food and stops eating at that point).

Behavior Chain Interruption Strategy

The behavior chain interruption strategy (BCIS) relies on the participant's skill in performing the critical elements of a chain independently, but the chain is interrupted at a predetermined step so that another behavior can be emitted. Initially developed to increase speech and oral responding (Goetz, Gee, & Sailor, 1985; Hunt & Goetz, 1988), the BCIS has been extended to pictorial communication systems (Roberts-Pennell & Sigafoos, 1999), manual signing (Romer, Cullinan, & Schoenberg, 1994), and switch activation (Gee, Graham, Goetz, Oshima, & Yoshioka, 1991).

The BCIS works as follows. First, an assessment is conducted to determine whether the person can independently complete a chain of three or more components. Figure 9 shows an example of how a toast-making chain, divided into five steps, was assessed for degree of distress and attempts to complete when specific steps were blocked or interrupted by an observer. Degree of distress was ranked on a 3-point scale, and attempts were recorded on a dichotomous (i.e., yes or no) scale. The assessment yielded a mean distress ranking of 2.3 and a total attempt percentage of 66%.
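The pretest scoring just described can be sketched as simple arithmetic over the interrupted steps. The individual distress ratings below are assumed values chosen only to reproduce the reported summary scores (mean distress of 2.3, i.e., 7/3; attempts of 2/3 = 66%); the actual ratings recorded on the score sheet are not recoverable from the text.

```python
# Scoring sketch for a BCIS pretest like the toast-making chain.
# Distress at each interrupted step (marked * on the score sheet)
# is rated 1-3 and an attempt to complete is recorded yes/no.
# The per-step values here are ASSUMED, chosen to match the
# reported totals (mean distress 2.3; attempts 2/3 = 66%).

interrupted_steps = {
    "pull bread from bag": {"distress": 1, "attempted": True},
    "push down":           {"distress": 3, "attempted": True},
    "put on plate":        {"distress": 3, "attempted": False},
}

n = len(interrupted_steps)
mean_distress = sum(s["distress"] for s in interrupted_steps.values()) / n
attempt_pct = int(100 * sum(s["attempted"]
                            for s in interrupted_steps.values()) / n)

print(f"mean distress: {mean_distress:.1f}, attempts: {attempt_pct}%")
```

Only steps at which the chain is interrupted contribute to either score; uninterrupted steps (put in toaster, take toast out) are excluded from both denominators.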

With respect to using the BCIS in applied settings to increase behavior, a chain is selected for training based on

Figure 9 Sample score sheet used to evaluate chains.

Chain Pretesting

Student's Name: Chuck          Chain: making toast
Date: 5/2                      *interrupt at start of step

Sequence of steps          Degree of distress     Attempts to complete    Comments
                           1  2  3
                           (low)  (high)
1. Pull bread from bag*    1  2  3                yes  no
2. Put in toaster          1  2  3                yes  no
3. Push down*              1  2  3                yes  no                 tries to touch knob
4. Take toast out          1  2  3                yes  no
5. Put on plate*           1  2  3                yes  no                 very upset; self-abusive

Total attempts: 2/3 = 66%
Mean distress: 7/3 = 2.3

From "Using a Chain Interruption Strategy to Teach Communication Skills to Students with Severe Disabilities," by L. Goetz, K. Gee, & W. Sailor, Journal of the Association for Persons with Severe Handicaps, 13(1), p. 24. Copyright by the Association for Persons with Severe Handicaps. Used by permission.


Figure 10 Key components and features of the BCIS.

• Instruction begins in the middle of the chain sequence, not the start, differing from forward and backward chaining procedures. Because the instruction begins in the middle of the sequence, interrupting the sequence might function as a transitive conditioned motivating operation or negative reinforcer, the removal of which increases behavior.

• Procedurally, the BCIS is predicated on an assessment that verifies that the person is able to complete the chain independently, but experiences moderate distress when the chain is interrupted in the middle of the sequence.

• Verbal prompts are used at the point of interruption (e.g., "What do you want?"), but a full range of response prompts (modeling and physical guidance) has also been employed.

• Interruption training occurs in the natural setting (e.g., a water basin for washing hair; the microwave oven for making cookies).

• Maintenance, generalization, and social validity data, although not overwhelmingly evident in every study, are sufficiently robust to suggest that the BCIS be included with other interventions (e.g., mand-model, time delay, and incidental teaching).

its being moderately distressful for the individual when she is interrupted while performing the chain, but not so distressful that the person's behavior becomes episodic or self-injurious. After collecting baseline data on a target behavior (e.g., vocalizations), the person is directed to start the chain (e.g., "Make toast"). At a predetermined step in the chain, say, Step 3 (push down), the individual's ability to complete the chain is restricted. For example, the practitioner might momentarily and passively block access to the toaster. The person would then be prompted, "What do you want?" A vocalization would be required for the chain to be completed. That is, the person would have to make a response such as saying, "Push down."

Although the exact behavioral mechanisms responsible for improved performance as a result of using the BCIS have not been pinpointed, field-based research efforts, including those examining generalized outcomes (Grunsell & Carter, 2002), have established it as an efficacious approach, especially for persons with severe disabilities. For example, Carter and Grunsell's (2001) review of the literature on the efficacy of the BCIS shows that it is positive and beneficial. In their view,

The BCIS may be seen as empirically supported and complementary to other naturalistic techniques. . . . A small, but growing body of research literature has demonstrated that individuals with severe disabilities have acquired requesting with the BCIS. In addition, research demonstrates that implementation of the BCIS may increase the rate of requesting. (p. 48)

The assumption underlying BCIS assessment is that "persistence in task completion and emotional response to the interruption would serve as operational definitions of high motivation for task completion" (Goetz et al., 1985, p. 23). It could also be claimed that the interruption,

because it momentarily detains the learner from obtaining the reinforcer for completing the task, serves as a blocked establishing operation, a condition that exists when a reinforcer cannot be obtained without some additional action or behavior occurring. Further, the role that negative reinforcement plays in the BCIS and the systematic environmental change that occurs at the point of the interruption have not been fully analyzed to determine their relative contributions.

According to Carter and Grunsell (2001), the BCIS consists of several components and features that make it useful as a behavior change tactic in natural settings. Figure 10 provides a review of these key features.

Breaking an Inappropriate Chain

An inappropriate behavior chain (e.g., nail biting, smoking, excessive food consumption) can be broken by determining the initial SD and substituting an SD for an alternative behavior, or by extending the chain and building in time delays. The first SD in a chain evokes the first response, which in turn terminates that SD and produces the second SD, and so on throughout the chain. Thus, if the first SD appears less frequently, the entire chain occurs less often. Martin and Pear (2003) suggested that a behavior chain that reinforces excessive food consumption, for example, might be broken by introducing links that require the individual to place the eating utensil on the table between bites or by introducing a 3- to 5-second time delay before the next bite can be initiated. In their view, "In the undesirable chain, the person gets ready to consume the next mouthful before even finishing the present one. A more desirable chain separates these components and introduces brief delays" (p. 143).


Figure 11 Illustration of an original behavior chain (upper panel) and its revision (lower panel) to break an inappropriate response sequence.

With respect to separating chain components, let us consider the case of a restaurant trainee with moderate mental retardation who has been taught to bus tables. Upon completion of the training program, the trainee was able to perform each of the necessary behaviors in the table-bussing chain accurately and proficiently (see Figure 11, upper panel). However, on the job site, the new employee started to emit an inappropriate sequence of behaviors. Specifically, in the presence of the dirty dishes on the empty table, the individual scraped excess food scraps onto the table itself, instead of placing the dirty items on the food cart. In other words, the initial SD for the table-bussing chain, the empty table with the dirty plates, was setting the occasion for a response (dish scraping) that should have occurred later in the chain. To break this inappropriate chain, the behavior analyst should consider several possible sources of difficulty, including (a) reexamining the SDs and responses, (b) determining whether similar SDs cue different responses, (c) analyzing the job setting to identify relevant and irrelevant SDs, (d) determining whether SDs in the job setting differ from training SDs, and (e) identifying the presence of novel stimuli in the environment.

Reexamine the SDs and Responses

The purpose of reexamining the list of SDs and responses in the task analysis is to determine whether the original chain of associated responses they evoke is arbitrary, based primarily on expert opinion, time and motion studies, and practical efficiency. In our example, the trainer wants the presence of the dirty dishes on the

empty table to evoke the response of placing those dishes in the bus cart. Consequently, a rearranged chain of SDs and responses was trained (see Figure 11, lower panel).

Determine Whether Similar SDs Cue Different Responses

Figure 11 (upper panel) shows two similar SDs (dirty dishes on the empty table and dirty dishes on the cart) that might have contributed to the scraping of food onto the table. In other words, R2 (scrape dishes) might have come under the control of S1 (dirty dishes on table). Figure 11 (lower panel) shows how the behavior analyst corrected the sequence by rearranging the SDs and their associated responses. Scraping the dishes is now the fifth response in the chain and occurs at the kitchen sink, an area away from the restaurant tables and diners. Thus, any potential confusion would be reduced or eliminated.

Analyze the Natural Setting to Identify Relevant and Irrelevant SDs

The training program should be designed so that the learner is trained to discriminate the relevant (i.e., critical) components of a stimulus from the irrelevant variations. Figure 11 (lower panel) shows at least two relevant characteristics of S1: an empty table and the presence of dirty dishes on that table. An irrelevant stimulus might be the location of the table in the restaurant, the number of place settings on the table, or the arrangement of the place settings. A relevant characteristic of S5, presence of the kitchen sink,


would be water faucets, the sink configuration, or dirty dishes. Finally, irrelevant stimuli might be the size of the kitchen sink and the type or style of faucets.

Determine Whether SDs in the Natural Setting Differ from Training SDs

It is possible that some variations of the SDs cannot be taught during training phases. For this reason many authors recommend conducting the final training sessions in the natural setting where the behavior chain is expected to be performed. This allows any differences that exist to be recognized by the trainer, and subsequent discrimination training to be further refined on site.

Identify the Presence of Novel Stimuli in the Environment

The presence of a novel stimulus unexpected in the original training situation can also prompt the occurrence of an inappropriate chain. In the restaurant example, the presence of a crowd of customers might set the occasion for the chain to be performed out of sequence. Likewise, distracting stimuli (e.g., customers coming and going, tips left on the table) might set the occasion for an inappropriate chain. Also, a coworker might unwittingly give contradictory instructions to the trainee. In any of these situations, the novel stimuli should be identified and the learner taught to discriminate them along with other SDs in the environment.

Factors Affecting the Performance of a Behavior Chain

Several factors affect the performance of a behavior chain. The following sections outline these factors and provide recommendations for addressing them.

Completeness of the Task Analysis

The more complete and accurate the task analysis is, the more likely a person will be to progress through the sequence efficaciously. If the elements making up the chain are not sequenced appropriately, or if the corresponding SDs are not identified for each response, learning the chain will be more difficult.

The behavior analyst should remember two key points when attempting to develop an accurate task analysis. First, planning must occur before training. The time devoted to constructing and validating the task analysis is well spent. Second, after the task analysis is constructed, training should begin with the expectation that adjustments in the task analysis, or the use of more intrusive prompts, may be needed at various steps. For example, McWilliams and colleagues (1990) noted that one of their students required extensive trials and more intrusive prompts with some steps in the task analysis before improved performance was obtained.

Length or Complexity of the Chain

Longer or more complex behavior chains take more time to learn than shorter or less complex behavior chains. Likewise, the behavior analyst might expect training time to be longer if two or more chains are linked.

Schedule of Reinforcement

When a reinforcer is presented subsequent to the performance of a behavior in a chain, it affects each of the responses making up the chain. However, the effect on each response is not identical. For example, in backward chaining, responses performed at the end of the chain are strengthened faster than responses made earlier in the chain because they are reinforced more frequently. The behavior analyst is advised to remember two points: (a) a chain can be maintained if an appropriate schedule of reinforcement is used, and (b) the number of responses in a chain may need to be considered when defining the schedule of reinforcement.
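The differential reinforcement frequencies in backward chaining can be illustrated with simple arithmetic. Assuming each training phase lasts a fixed number of reinforced trials (three here, a placeholder), later steps accumulate many more reinforced performances than earlier ones.

```python
# Arithmetic sketch of why, in backward chaining, responses at the
# end of the chain contact reinforcement more often than earlier
# ones. The fixed trials-per-phase value is an assumed placeholder.

def reinforcement_counts(n_steps, trials_per_phase=3):
    """Count reinforced performances of each step across training.
    Phase k trains the suffix of the chain starting at step k,
    beginning with the last step alone and adding one earlier
    step per phase, so step i appears in (i + 1) phases."""
    counts = [0] * n_steps
    for phase_start in range(n_steps - 1, -1, -1):
        for step in range(phase_start, n_steps):
            counts[step] += trials_per_phase
    return counts

# For a 7-step chain, the first step is reinforced in only the
# final phase, the last step in every phase.
print(reinforcement_counts(7))
```

The resulting counts grow linearly from first step to last, which is one way to see why earlier links may need a denser reinforcement schedule, or more trials, to reach the same strength.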

Stimulus Variation

Bellamy, Horner, and Inman (1979) provided an excellent pictorial representation of how stimulus variation affects the performance of a chain. The upper photograph in Figure 12 shows a cam switch bearing before and after its placement on a cam switch axle. The lower photograph shows the four different types of bearings that can be used in the assembly process. The response of placing the bearing on the axle must be under the control of the presence of the bearing (SD); however, variation among bearings requires that the response be under the control of several SDs, each sharing a critical feature. In the illustration each bearing has a 1.12-cm hole in the center and one or more hex nut slots on a face of the bearing. Any bearing with these stimulus features should evoke the response of placing the bearing on the axle, even if other irrelevant dimensions


Figure 12 Upper panel: A cam switch bearing before and after placement on a cam switch axle. Lower panel: Four different types of bearings used in cam switch assembly.
From Vocational Habilitation of Severely Retarded Adults, pp. 40 & 42, by G. T. Bellamy, R. H. Horner, and D. P. Inman, 1979, Austin, TX: PRO-ED. Copyright 1979 by PRO-ED. Reprinted by permission.

are present (e.g., color, material composition, weight). Stimuli without these features should not occasion the response.

If possible, the behavior analyst should introduce all possible variations of the SD that the learner will encounter. Regardless of the behavior chain, presentation of stimulus variations increases the probability that the correct response will occur in their presence. This includes, for example, various canisters and shafts in an assembly task; different fasteners, zippers, and buttons with dressing skills; and assorted tubes and pumps with tooth brushing.

Response Variation

Often when stimulus variations occur, response variation also must occur to produce the same effect. Again,

Bellamy and colleagues (1979) provided an illustration with the cam shaft assembly. In the upper-left photograph of Figure 13, the bearing has been placed on the cam shaft, and the retaining clip is being positioned for placement with a pair of nose pliers. The upper-right photograph shows the clip in position. The lower-left photograph shows a different bearing configuration (i.e., the SD is different) requiring a different response. Instead of the retaining clip being lifted over the bearing cap with pliers, it must be pushed over the cap with a wrench-type tool. The response of lifting or pushing has changed, as has the response of selecting the appropriate tool. Thus, the behavior analyst should be aware that when stimulus variation is introduced, training or retraining of responses within the chain might also be required.


Figure 13 Two ways that retaining rings are applied to affix bearings to cam axles.
From Vocational Habilitation of Severely Retarded Adults, p. 44, by G. T. Bellamy, R. H. Horner, and D. P. Inman, 1979, Austin, TX: PRO-ED. Copyright 1979 by PRO-ED. Reprinted by permission.

Summary

Definition of a Behavior Chain

1. A behavior chain is a specific sequence of discrete responses, each associated with a particular stimulus condition. Each discrete response and the associated stimulus condition serve as an individual component of the chain. When individual components are linked together, the result is a behavior chain that produces a terminal outcome.

2. Each stimulus that links two sequential responses in a chain serves dual functions: It is a conditioned reinforcer for the response that produced it and an SD for the next response in the chain.

3. In a behavior chain with a limited hold, a sequence of behaviors must be performed correctly and within a specific time for reinforcement to be delivered. Proficient responding is a distinguishing feature of chains with limited holds.

Rationale for Using Chaining

4. There are three reasons a behavior analyst should be skilled in building behavior chains: (a) chains can be used to improve independent-living skills; (b) chains can provide the means by which other behaviors are combined into more complex sequences; and (c) chains can be combined with other procedures to build behavioral repertoires in generalized settings.

5. Chaining refers to the way these specific sequences of stimuli and responses are linked to form new performances. In forward chaining, behaviors are linked together beginning with the first behavior in the sequence. In backward chaining, the behaviors are linked together beginning with the last behavior in the sequence.

Task Analysis

6. Task analysis involves breaking a complex skill into smaller, teachable units, the product of which is a series of sequentially ordered steps or tasks.

7. The purpose of constructing and validating a task analysis is to determine the sequence of stepwise critical behaviors that comprise the complete task and that would lead to it being performed efficiently. Task analyses can be constructed by observing a competent person perform the task, consulting experts, and self-performing the sequence.

8. The purpose of assessing mastery level is to determine which components of the task analysis can already be performed independently. Assessment can be conducted with the single-opportunity or multiple-opportunity method.

Behavior Chaining Methods

9. In forward chaining, the behaviors identified in the task analysis are taught in their natural sequence. Specifically, reinforcement is delivered when the predetermined criterion for the first behavior in the sequence is achieved. Thereafter, reinforcement is delivered for criterion completion of Steps 1 and 2. With each successive step, reinforcement is delivered contingent on the correct performance of all steps trained thus far.

10. Total-task chaining is a variation of forward chaining in which the learner receives training on each step in the task analysis during every session. Trainer assistance is provided using response prompts with any step that the individual is not able to perform. The chain is trained until the learner performs all the behaviors in the sequence to criterion.


11. In backward chaining, all the steps identified in the task analysis are completed by the trainer, except the last one. When the final step in the sequence is performed to criterion, reinforcement is delivered. Next, reinforcement is delivered when the next-to-last step and the last step are performed. Subsequently, the individual must perform the last three steps before reinforcement is delivered, and so on. The primary advantage of backward chaining is that the learner comes into contact with the contingencies of reinforcement immediately, and the functional relationship begins to develop.

12. Backward chaining with leap aheads follows essentially the same procedures as backward chaining, except that not every step in the task analysis is trained. The leap-aheads modification provides for probing or assessing untrained behaviors in the sequence. Its purpose is to speed up training of the behavior chain.

13. The decision to use forward chaining, total-task chaining, or backward chaining should be based on the results of a task analysis assessment, empirically sound data-based studies, and a functional evaluation, taking into consideration the cognitive, physical, and motoric abilities and needs of the individual.

Interrupting and Breaking Behavior Chains

14. The behavior chain interruption strategy (BCIS) is an intervention that relies on the participant's skill in performing the critical elements of a chain independently, but the chain is interrupted at a predetermined step so that another behavior can be emitted.

15. An inappropriate chain can be broken if the initial SD that sets the occasion for the first behavior in the chain is recognized and an alternative SD is substituted. Alternative SDs can be determined by reexamining the list of SDs and responses in the task analysis, determining whether similar SDs cue different responses, analyzing the natural setting to identify relevant and irrelevant SDs, determining whether the SDs in the natural setting differ from training SDs, and/or identifying the presence of novel SDs in the environment.

Factors Affecting the Performance of a Behavior Chain

16. Factors affecting the performance of a behavior chain include: (a) completeness of the task analysis, (b) length or complexity of the chain, (c) schedule of reinforcement, (d) stimulus variation, and (e) response variation.


Shaping

Key Terms

clicker training
differential reinforcement
response differentiation
shaping
successive approximation

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List,© Third Edition

Content Area 9: Behavior Change Procedures

9-6 Use differential reinforcement.
9-11 Use shaping.

© 2006 The Behavior Analyst Certification Board, Inc.,® (BACB®) all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

From Chapter 19 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.

454

Shaping is the process of systematically and differentially reinforcing successive approximations to a terminal behavior. Shaping is used in many everyday situations to help learners acquire new behaviors. For example, language therapists use shaping when they develop speech with a client by first reinforcing lip movements, then sound production, and finally word and sentence expression. Teachers working with students with severe disabilities shape social interactions when they differentially reinforce eye contact, one-word greetings, and conversational speech. A basketball coach shapes the foul-shooting behavior of her players when she differentially reinforces accurate shooting from positions a few feet in front of the basket to positions closer to the 15-foot regulation foul line. Further, trainers use shaping to teach desirable behavior to animals for function (e.g., loading horses into a trailer without injury to the horse or the attendant), or for appeal or utility duty (e.g., teaching porpoises to execute show routines).

Depending on the complexity of a given behavior and the learner's prerequisite skills, shaping may require many successive approximations before the terminal behavior is achieved. Achievement of the terminal behavior with respect to time, trials, or direction is seldom predictable, immediate, or linear. If the learner emits a closer approximation to the terminal behavior, and the practitioner fails to detect and reinforce it, achievement of the terminal behavior will be delayed. However, if a systematic approach is used—that is, if each instance of closer approximations to the terminal behavior is detected and reinforced—progress can usually be attained more quickly. Although shaping can be time-consuming, it represents an important approach to teaching new behaviors, especially those behaviors that cannot easily be learned by instructions, incidental experience or exposure, imitation, physical cues, or verbal prompts.

This chapter defines shaping, presents examples of how to shape behavior across and within different response topographies, and suggests ways to improve the efficiency of shaping. Clicker training is illustrated as one method that trainers use to shape new behaviors in animals. Next, guidelines for implementing shaping are presented. The chapter concludes with a look at future applications of shaping.

Definition of Shaping

In Science and Human Behavior, Skinner (1953) presented the concept of shaping with an analogy:

Operant conditioning shapes behavior as a sculptor shapes a lump of clay. … The final product seems to have a special unity or integrity of design, but we cannot find a point at which this suddenly appears. In the same sense, an operant is not something which appears full grown in the behavior of the organism. It is the result of a continuous shaping process. (p. 91)

Through careful and skillful manipulation of the original, undifferentiated lump of clay, the artisan keeps some aspects of the clay in its primary location, cuts other pieces away, and reforms and molds still other sections so that the form is slowly transfigured into the final sculpted design. Similarly, the skillful practitioner can shape novel forms of behavior from responses that initially bear little resemblance to the final product. A practitioner using behavioral shaping differentially reinforces successive approximations toward a terminal behavior. The end product of shaping—a terminal behavior—can be claimed when the topography, frequency, latency, duration, or amplitude/magnitude of the target behavior reaches a predetermined criterion level. The two key procedural components of shaping, differential reinforcement and successive approximations, are described next.

Differential Reinforcement

When a heavy ball is thrown beyond a certain mark, when a horizontal bar is cleared in vaulting or in jumping, when a ball is batted over the fence (and when, as a result, a record is broken or a match or game won), differential reinforcement is at work.

– B. F. Skinner (1953, p. 97)

Differential reinforcement is a procedure in which reinforcement is provided for responses that share a predetermined dimension or quality, and in which reinforcement is withheld for responses that do not demonstrate that quality. For example, differential reinforcement is used by a parent who complies with his child's requests for an item at the dinner table when those requests include polite words such as "please" or "may I," and who does not pass the requested item following requests in which polite words are missing. Differential reinforcement has two effects: responses similar to those that have been reinforced occur with greater frequency, and responses resembling the unreinforced members are emitted less frequently (i.e., they undergo extinction).

When differential reinforcement is applied consistently within a response class, its dual effects result in a new response class composed primarily of responses sharing the characteristics of the previously reinforced subclass. This emergence of a new response class is called response differentiation. Response differentiation as a result of the parent's differential reinforcement at the dinner table would be evident if all of the child's requests contained polite words.

Figure 1 Approximations in differentially reinforcing the wearing of eyeglasses. Shaded portion includes behaviors no longer reinforced. From How to Use Shaping by M. Panyan, 1980, p. 4, Austin, TX: PRO-ED. Copyright 1980 by PRO-ED. Reprinted by permission.

Successive Approximations

The practitioner using shaping differentially reinforces responses that resemble the terminal behavior in some way. The shaping process begins with reinforcement of responses in the learner's current repertoire that share an important topographical feature with the terminal behavior or are a prerequisite behavior for the final terminal behavior. When the initially reinforced responses become more frequent, the practitioner shifts the criterion for reinforcement to responses that are a closer approximation of the terminal behavior. The gradually changing criterion for reinforcement during shaping results in a succession of new response classes, or successive approximations, each one closer in form to the terminal behavior than the response class it replaces. Skinner (1953) discussed the critical nature of successive approximations as follows:

The original probability of the response in its final form is very low; in some cases it may even be zero. In this way we build complicated operants that would never appear in the repertoire of the organism otherwise. By reinforcing a series of successive approximations, we bring a rare response to a very high probability in a short time. This is an effective procedure because it recognizes and utilizes the continuous nature of a complex act. (p. 92)

Figure 1 illustrates the progression of successive approximations used by Wolf, Risley, and Mees (1964) in shaping the wearing of eyeglasses in a preschooler who was in danger of losing his eyesight if he did not wear corrective glasses regularly. Touching glasses was the first behavior that was reinforced. When touching glasses was established, picking up glasses was reinforced, and touching glasses was placed on extinction (see shaded portion of figure). Next, putting glasses up to his face was reinforced, and the two previously reinforced behaviors were placed on extinction. Training continued until the terminal behavior, placing the glasses on his face, was emitted, and all previous behaviors were placed on extinction.

Shaping Different Dimensions of Performance

Behavior can be shaped in terms of topography, frequency, latency, duration, and amplitude/magnitude (see Table 1). Differential reinforcement could also be used to teach a child to talk within a conversational decibel range. Let us suppose a behavior analyst is working with a student who usually talks at such a low volume (e.g., below 45 decibels) that his teacher and peers have difficulty hearing him. Successive approximations to 65 decibels (dB)—the amplitude of normal conversational speech—might be 45, 55, and ultimately 65 dB. Differential reinforcement of speaking at a minimum volume of 45 dB would place responses below that amplitude on extinction. When the student is speaking consistently at or above 45 dB, the criterion would be raised to 55 dB. Likewise, when 55 dB and finally 65 dB are achieved, previous lower amplitude levels are not reinforced (i.e., they are placed on extinction).
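This rising-criterion procedure can be sketched as a small simulation. Only the 45/55/65 dB criterion levels come from the example; the function names, the trial loop, and the rule for when to raise the criterion (here, three consecutive reinforced responses) are illustrative assumptions, not part of the text.

```python
# Sketch of shaping vocal amplitude with a rising criterion.
# The 45/55/65 dB levels follow the example in the text; the
# advance-after-3-reinforced-responses rule is hypothetical.

def differential_reinforcement(response_db, criterion_db):
    """Reinforce only responses at or above the current criterion;
    responses below it are placed on extinction."""
    return response_db >= criterion_db

def shape_volume(responses, criteria=(45, 55, 65), advance_after=3):
    """Raise the criterion to the next successive approximation after
    `advance_after` consecutive reinforced responses."""
    level, streak, log = 0, 0, []
    for r in responses:
        reinforced = differential_reinforcement(r, criteria[level])
        log.append((r, criteria[level], reinforced))
        streak = streak + 1 if reinforced else 0
        if streak >= advance_after and level < len(criteria) - 1:
            level, streak = level + 1, 0
    return log

trials = [44, 46, 47, 48, 50, 56, 57, 58, 60, 66, 67, 68]
history = shape_volume(trials)
print(history[-1])  # final trial is judged against the 65 dB criterion
```

Note how the log records which criterion was in force on each trial: once the 55 dB approximation is reached, a 50 dB response that would earlier have been reinforced now goes unreinforced.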

Table 1 Examples of Performance Improvements that Could Be Shaped in Various Dimensions of Behavior

Topography (form of the behavior)
• Refining motor movements associated with a golf swing, throwing motion, or vaulting behavior.
• Improving cursive or manuscript letter formation during handwriting exercises.

Frequency (number of responses per unit of time)
• Increasing the number of problems completed during each minute of a math seatwork assignment.
• Increasing the number of correctly spelled and appropriately used words written per minute.

Latency (time between the onset of the antecedent stimulus and the occurrence of the behavior)
• Decreasing compliance time between a parental directive to "clean your room" and the onset of room-cleaning behavior.
• Increasing the delay between the onset of an aggressive remark and retaliation by a student with severe emotional disabilities.

Duration (total elapsed time for occurrence of the behavior)
• Increasing the length of time that a student stays on task.
• Increasing the number of minutes of engaged study behavior.

Amplitude/Magnitude (response strength or force)
• Increasing the projected voice volume of a speaker from 45 dB to 65 dB.
• Increasing the height of the high jump bar for students enrolled in a physical education class.

Fleece and colleagues (1981) used shaping to increase the voice volume of two children enrolled in a private preschool for students with physical and developmental disabilities. Baseline data were collected on voice volume in the regular classroom. Voice volume was measured on a 0- to 20-point scale, with 0 indicating that the child's voice level was usually inaudible, 10 indicating normal voice volume, and 20 indicating screaming. The shaping procedure consisted of having the children recite a nursery rhyme in the presence of a voice-activated relay device, whereby voice volume activated a light display. The intensity of the light corresponded to increased levels of voice volume: Higher voice volume produced brighter light, and lower voice volume produced a dimmer light. The teacher shaped voice volume by increasing the sensitivity threshold of the relay device. That is, whereas a low voice was sufficient to activate the dim light in the beginning stages of training, a much higher volume was needed to produce the same effect in later stages. Each raising of the volume level required to activate the light represented a successive approximation to the terminal volume.

Analysis of the children's performance using a multiple-baseline-across-students design indicated that the voice volume increased in the classroom setting as a function of treatment (see Figure 2). Also, the children's voice volume remained high after a 4-month period. Finally, according to anecdotal reports by school staff, the children's higher voice volume generalized to other settings beyond the classroom.

Figure 2 Voice-volume levels per session in the classroom setting. From "Elevation of Voice Volume in Young Developmentally Delayed Children via an Operant Shaping Procedure" by L. Fleece, A. Gross, T. O'Brien, J. Kistner, E. Rothblum, and R. Drabman, 1981, Journal of Applied Behavior Analysis, 14, p. 354. Copyright by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

From a practical standpoint, measures of increased decibels could be obtained with a voice-sensitive tape recorder or an audiometric recording device that emits a signal (light or sound) only when a specific threshold level is achieved (Fleece et al., 1981). Productions below the criterion level would not activate the recording device.

Shaping across and within Response Topographies

Shaping behavior across different response topographies means that select members of a response class are differentially reinforced, whereas members of other response classes are not reinforced. As stated previously, lip movements, speech sounds, one-word utterances, and phrase or sentence productions represent the different topographies of the larger response class of speaking behaviors; they are the prerequisite behaviors of speaking. When shaping behavior across different response topographies, the practitioner gradually increases the criterion of performance before delivering reinforcement.

Isaacs, Thomas, and Goldiamond (1960) reported a classic study showing how behaviors can be shaped across and within response topographies. They successfully shaped the vocal behavior of Andrew, a man diagnosed with catatonic schizophrenia, who had not spoken for 19 years despite many efforts to encourage speech production. Essentially, the shaping procedure was initiated when an astute psychologist noticed that Andrew's usually passive expression changed slightly when a package of chewing gum inadvertently dropped on the floor. Realizing that gum might be an effective reinforcer for building behaviors in the response class of talking, the psychologist selected speech production as the terminal behavior.

The next step in the shaping process was to select an initial behavior to reinforce. Lip movement was chosen because the psychologist noted that slight lip movements had occurred in the presence of the pack of gum, and more important, lip movement was in the response class of speech. As soon as lip movement was established by differential reinforcement, the psychologist waited for the next approximation of the terminal behavior. During this phase, lip movement alone was no longer reinforced; only lip movements with sound produced the reinforcement. When Andrew began making guttural sounds, vocalizations were differentially reinforced. Then the guttural sound itself was shaped (differential reinforcement within a response topography) until Andrew said the word gum. After the 6th week of shaping, the psychologist asked Andrew to say gum, to which Andrew responded, "Gum, please." During that session and afterwards, Andrew went on to converse with the psychologist and others at the institution about his identity and background. In this powerful demonstration of shaping, after selection of the terminal behavior and the initial starting point, each member of the response class was shaped by differential reinforcement of successive approximations to the terminal behavior.

Shaping a behavior within a response topography means that the form of the behavior remains constant, but differential reinforcement is applied to another measurable dimension of the behavior. To illustrate, let us suppose that in a college-level physical education class, the teacher is instructing the students on water safety. Specifically, she is teaching them how to throw a life preserver a given distance to a person struggling in the water. Since the important skill in this activity is to throw the life preserver near the person, the physical education teacher might shape accurate tossing by reinforcing successive approximations to a toss of a given distance. In other words, each toss that is near the person (e.g., within 2 meters) will be praised, whereas tosses outside that range will not be. As students become more accurate, the area can be reduced so that the terminal behavior is a toss within arm's length of the person. In this case, the magnitude of the behavior is being shaped; the form of the toss remains the same.

Another example of shaping within a response topography is a parent who attempts to increase the duration of her child's piano practice. The criterion for success—the terminal behavior—in this particular program might be to have the child practice for 30 minutes three times per week (e.g., Monday, Wednesday, and Friday). To accomplish her objective, the parent could reinforce progressively longer periods of practice, perhaps beginning with a few minutes one night per week. Next, the parent might reinforce increasing durations of practice: 10, 12, 15, 20, 25, and ultimately 30 minutes of practice only on Mondays. No contingency would be in effect for the other two days. As soon as an intermediate criterion level is reached (e.g., 20 minutes), reinforcement is no longer delivered for less than 20 minutes of practice unless performance stalls at a higher level and progress is impeded.

During the next phase of shaping, the process is repeated for Wednesday nights. Now the child must meet the criterion on both days before reinforcement is delivered. Finally, the sequence is repeated for all three nights. It is important to remember that the behavior being shaped in this example is not piano playing. The child can already play the piano; the topography of that response class has been learned. What is being shaped by differential reinforcement is a dimension of behavior within the response class, namely, the duration of piano practice.
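The day-by-day duration schedule described above can be laid out as a small sketch. The duration steps (10–30 minutes) and the order in which days are added come from the text; the names and the schedule representation are illustrative, and because the text does not specify whether earlier days keep the 30-minute criterion while a new day is being shaped, this is one reading.

```python
# Sketch of the piano-practice shaping schedule: the duration criterion
# is stepped up on one day at a time, and each new day is added to the
# reinforcement contingency only after the previous phase is complete.

DURATION_STEPS = [10, 12, 15, 20, 25, 30]   # minutes, from the text
DAYS = ["Monday", "Wednesday", "Friday"]     # added one at a time

def build_schedule():
    """Return (active_days, criterion_minutes) phases: the full run of
    duration steps is repeated each time a day joins the contingency."""
    phases = []
    for i in range(len(DAYS)):
        active = tuple(DAYS[: i + 1])
        for minutes in DURATION_STEPS:
            phases.append((active, minutes))
    return phases

def reinforce(practice_minutes_by_day, phase):
    """Deliver reinforcement only if every active day meets the current
    duration criterion; shorter practice is on extinction."""
    active_days, criterion = phase
    return all(practice_minutes_by_day.get(d, 0) >= criterion
               for d in active_days)

schedule = build_schedule()
print(schedule[0])   # first phase: Monday only, 10-minute criterion
print(schedule[-1])  # final phase: all three days, 30-minute criterion
```

The point the sketch makes explicit is the same one the text makes: what changes across phases is not the response topography but the duration criterion that practice must meet before reinforcement is delivered.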

Positive Aspects of Shaping

Shaping teaches new behaviors. Because shaping is implemented systematically and gradually, the terminal behavior is always in sight. Also, shaping uses a positive approach to teach new behavior. Reinforcement is delivered consistently upon the occurrence of successive approximations to the terminal behavior, whereas nonapproximations are placed on extinction. Punishment or other aversive procedures are typically not involved in a shaping program. Finally, shaping can be combined with other established behavior change or behavior-building procedures (e.g., chaining). For example, suppose that a behavior analyst designed a seven-step task analysis to teach a child to tie his shoes, but the child was unable to complete Step 5 in the task analysis. Shaping could be used in isolation to teach a closer approximation to that step. Once Step 5 was learned through shaping, chaining would continue with the rest of the steps in the task analysis.

Limitations of Shaping

At least five limitations of shaping can be identified. Practitioners should be aware of these limitations and be prepared to deal with them as they arise. First, shaping new behavior can be time-consuming because many approximations may be necessary before the terminal behavior is achieved (Cipani & Spooner, 1994).

Second, progress toward the terminal behavior is not always linear. That is, the learner does not always proceed from one approximation to the next in a continuous, logical sequence. Progress may be erratic. If the behavior is too erratic (i.e., not resembling a closer approximation to the terminal behavior), an approximation may need to be further reduced, allowing for more reinforcement and progress. The skill of the practitioner in noting and reinforcing the next smallest approximation to the terminal behavior is critical to the success of shaping. If the practitioner fails to reinforce responses at the next approximation—because of neglect, inexperience, or preoccupation with other tasks—the occurrence of similar responses may be few and far between. If reinforcement for performance at a given approximation continues longer than necessary, progress toward the terminal behavior will be impeded.

Third, shaping requires the practitioner to consistently monitor the learner to detect subtle indications that the next approximation that is closer to the terminal behavior has been performed. Many practitioners—for example, teachers in busy or demanding classrooms—are not able to monitor behavior closely to note small changes. Consequently, shaping may be conducted inappropriately or at least inefficiently.

Fourth, shaping can be misapplied. Consider the child trying to obtain her father's attention by emitting low-volume demands (e.g., "Dad, I want some ice cream"). The father does not attend to the initial calls. As the child's attempts become increasingly unsuccessful, she may become more determined to gain her father's attention. In doing so, the frequency and amplitude of the calls may increase (e.g., "DAD, I want ice cream!"). After listening to the crescendo of vocal demands, the father eventually provides the ice cream. The next time, the child states what she wants in an even louder voice before getting her father's attention. In this scenario, the father has differentially reinforced an ever-rising level of attention-getting behavior and shaped higher levels of call-outs for ice cream. Using this example as a backdrop, Skinner (1953) pointed out: "Differential reinforcement applied by a preoccupied or negligent parent is very close to the procedure we should adopt if we were given the task of conditioning a child to be annoying" (p. 98).

Finally, harmful behavior can be shaped. For example, Rasey and Iversen (1993) showed that differential reinforcement could be used to shape a rat's behavior to the point that the rat fell from the edge of a platform. By differentially reinforcing the rat with food for sticking its nose farther and farther over the edge, the rat eventually fell off the ledge.1 It is not difficult to speculate that adolescent games such as Dare and Double Dare, which have evolved and become popularized in thrill- and fear-seeking television shows, capitalize on persons receiving differential reinforcement for increasingly higher levels of risk-taking that can lead to dangerous—and sometimes tragic—behavior.

1 The researcher provided a safety net so that the rat was not injured.

Shaping versus Stimulus Fading

Shaping and fading both change behavior gradually, albeit in much different ways. In shaping, the antecedent stimulus stays the same, while the response progressively becomes more differentiated. In stimulus fading, the opposite occurs: The antecedent stimulus changes gradually, while the response stays essentially the same.

Increasing the Efficiency of Shaping

In addition to showing how behaviors are shaped across and within response topographies, the Isaacs and colleagues (1960) study illustrates another aspect of shaping—efficiency. During the early stages of the program, the psychologist waited for the next approximation of the behavior to appear before delivering the reinforcer. Waiting can be time-consuming and wasteful, so Isaacs and colleagues improved efficiency by using a verbal response prompt, "Say gum," after the sixth training session. Presumably, if the psychologist had not used a verbal prompt, several additional sessions would have been necessary before a successful outcome was achieved.

Shaping can be enhanced in three ways. First, a discriminative stimulus (SD) may be combined with shaping. For example, when attempting to shape handshaking as a greeting skill for an adult with developmental disabilities, the teacher might say, "Frank, hold out your arm." Scott, Scott, and Goldwater (1997) used a vocal prompt ("Reach!") as a university-level pole-vaulter ran a path to plant his pole in the vault box. The prompt was designed to focus the vaulter's attention on extending his arms prior to take-off in the vault. Kazdin (2001) suggested that priming a response using any variety of prompting mechanisms can be helpful, especially if the individual's repertoire is weak and the likelihood of distinguishable successive approximations is low. "Even if the responses are not in the client's repertoire of responses, the priming procedure can initiate early response components and facilitate shaping" (p. 277). The second way to enhance shaping is through physical guidance. In the Frank example cited earlier, the teacher might manually assist Frank in holding out his arm. Using the third shaping-enhancing technique, the teacher might use an imitative prompt to demonstrate arm extension (e.g., "Frank, hold out your arm like this"). Any contrived prompt that is used to speed shaping should later be faded.

Clicker Training

Pryor (1999) described clicker training as a science-based system for shaping behavior using positive reinforcement. The clicker is a handheld device that produces a click sound when a metal tab is pressed. Reinforcement is paired with the sound of the clicker so that the sound becomes a conditioned reinforcer. Initially used to shape dolphin behavior without physical guidance (Pryor & Norris, 1991), clicker training was later used with other animals (e.g., cats and horses) and ultimately humans to shape complex behaviors such as airplane pilot skills (Pryor, 2005).

Clicker trainers focus on building behavior, not stopping behavior. Instead of yelling at the dog for jumping up, you click it for sitting. Instead of kicking the horse to make it go, you click it for walking. Then, click by click, you "shape" longer sits, or more walking, until you have the final results you want. Once the behavior is learned, you keep it going with praise and approval and save the clicker and treats for the next new thing you want to train. (Pryor, 2003, p. 1)

Figure 3 provides 15 tips for getting started with clicker training (Pryor, 2003).

Guidelines for Implementing Shaping

The practitioner should consider many factors before deciding to use shaping. First, the nature of the terminal behavior to be learned and the resources available should be assessed. For example, a fifth-grade teacher might be interested in increasing the number of math problems performed by a student with learning disabilities. Perhaps the student currently completes 5 problems per math period, with a range of 0 to 10 problems. If the student is able to finish and check her performance independently at the end of the period, shaping could be implemented in which the number of completed math problems would be differentially reinforced. Reinforcement might be presented for 5, 7, 9, and then 11 or more problems completed correctly per period. In this case, solving the math problem is within a specific response topography, and the student can monitor her own performance.

In addition, because shaping requires multiple approximations and linear progression cannot be predicted, the practitioner is advised to estimate the total amount of time required to achieve the terminal behavior. Such estimates can be determined by asking colleagues how many sessions it took them to shape similar behaviors, or by differentially reinforcing a few behaviors that approximate the terminal behavior and then extrapolating from that experience the total amount of time that might be needed to shape the terminal behavior. These two procedures are likely to yield only rough estimates because any number of unforeseen factors can accelerate or decelerate progress. If it appears that more time is needed than can be arranged, the practitioner might consider another strategy. Some behaviors seem to preclude the use of shaping as a behavior-building technique. For instance, if a high school English teacher is interested in increasing the public-speaking repertoires of his students, prompting, modeling, or peer tutoring might be more efficient than shaping. That is, telling or showing the students how to use gestures, inflection, or metaphors would be much faster than attempting to shape each of these distinct response classes alone.

Figure 3 Pryor's 15 tips for getting started with the clicker.

1. Push and release the springy end of the clicker, making a two-toned click. Then treat. Keep the treats small. Use a delicious treat at first: for a dog or cat, little cubes of roast chicken, not a lump of kibble.

2. Click DURING the desired behavior, not after it is completed. The timing of the click is crucial. Don't be dismayed if your pet stops the behavior when it hears the click. The click ends the behavior. Give the treat after that; the timing of the treat is not important.

3. Click when your dog or other pet does something you like. Begin with something easy that the pet is likely to do on its own (ideas: sit, come toward you, touch your hand with its nose, lift a foot, touch and follow a target object such as a pencil or a spoon).

4. Click once (in-out). If you want to express special enthusiasm, increase the number of treats, not the number of clicks.

5. Keep practice sessions short. Much more is learned in three sessions of five minutes each than in an hour of boring repetition. You can get dramatic results, and teach your pet many new things, by fitting a few clicks a day here and there in your normal routine.

6. Fix bad behavior by clicking good behavior. Click the puppy for relieving itself in the proper spot. Click for paws on the ground, not on the visitors. Instead of scolding for making noise, click for silence. Cure leash-pulling by clicking and treating those moments when the leash happens to go slack.

7. Click for voluntary (or accidental) movements toward your goal. You may coax or lure the animal into a movement or position, but don't push, pull, or hold it. Let the animal discover how to do the behavior on its own. If you need a leash for safety's sake, loop it over your shoulder or tie it to your belt.

8. Don't wait for the "whole picture" or the perfect behavior. Click and treat for small movements in the right direction. You want the dog to sit, and it starts to crouch in back: click. You want it to come when called, and it takes a few steps your way: click.

9. Keep raising your goal. As soon as you have a good response—when a dog, for example, is voluntarily lying down, coming toward you, or sitting repeatedly—start asking for more. Wait a few beats, until the dog stays down a little longer, comes a little further, sits a little faster. Then click. This is called "shaping" a behavior.

10. When your animal has learned to do something for clicks, it will begin showing you the behavior spontaneously, trying to get you to click. Now is the time to begin offering a cue, such as a word or a hand signal. Start clicking for that behavior if it happens during or after the cue. Start ignoring that behavior when the cue wasn't given.

11. Don't order the animal around; clicker training is not command-based. If your pet does not respond to a cue, it is not disobeying; it just hasn't learned the cue completely. Find more ways to cue it and click it for the desired behavior. Try working in a quieter, less distracting place for a while. If you have more than one pet, separate them for training, and let them take turns.

12. Carry a clicker and "catch" cute behaviors like cocking the head, chasing the tail, or holding up one foot. You can click for many different behaviors, whenever you happen to notice them, without confusing your pet.

13. If you get mad, put the clicker away. Don't mix scoldings, leash-jerking, and correction training with clicker training; you will lose the animal's confidence in the clicker and perhaps in you.

14. If you are not making progress with a particular behavior, you are probably clicking too late. Accurate timing is important. Get someone else to watch you, and perhaps to click for you, a few times.

15. Above all, have fun. Clicker training is a wonderful way to enrich your relationship with any learner.

From Click to Win! by Karen Pryor, 2002. Sunshine Books, Waltham, MA.

After the decision has been made to use shaping, Pryor's Ten Laws of Shaping can assist the practitioner in implementing this process in applied settings (see Figure 4).

Select the Terminal Behavior

Practitioners often work with learners who have to change multiple behaviors. Consequently, they must identify the highest priority behavior quickly. The ultimate criterion in this decision is the individual's expected independence after the behavior change; that is, the likelihood of her attaining additional reinforcers from the environment (Snell & Brown, 2006). For example, if a student frequently roams the classroom poking other students, taking their papers, and verbally harassing them, a shaping procedure might best begin with a behavior that is incompatible with roaming the room because of the utility it would have for the individual and the other students. In addition, if in-seat behavior is developed, the staff with whom the student interacts will likely notice and reinforce it. In this case, shaping should differentially reinforce longer durations of in-seat behavior.

It is also important to define the terminal behavior precisely. For example, a behavior analyst might want to shape the appropriate sitting behavior of an individual with severe retardation. To do so, sitting might be defined as being located upright in the chair, facing the front of the room with buttocks against the bottom of the chair and back against the chair rest during a 15-minute morning activity. Using this definition, the analyst can determine when the behavior is achieved, as well as what does not constitute the behavior—an important discrimination if shaping is to be conducted efficiently (e.g., the student might be half in and half out of the seat, or the student might be in the seat but turned toward the back of the room).

Determine the Criterion for Success

After identifying the terminal behavior, the practitioner should specify the criterion for success. Here, the practitioner decides how accurate, fast, intense, or durable the behavior must be before it can be considered shaped. Several measures can be applied to establish a criterion of success. Some of the more common include frequency,

Figure 4 Pryor’s ten laws of shaping.

1. Raise criteria in increments small enough so that the subject always has a realistic chance of reinforcement.

2. Train one aspect of any particular behavior at a time. Don't try to shape for two criteria simultaneously.

3. During shaping, put the current level of response on a variable ratio schedule of reinforcement before adding or raising the criteria.

4. When introducing a new criterion, or aspect of the behavioral skill, temporarily relax the old ones.

5. Stay ahead of your subject: Plan your shaping program completely so that if the subject makes sudden progress, you are aware of what to reinforce next.

6. Don’t change trainers in midstream.You can have several trainers per trainee, butstick to one shaper per behavior.

7. If one shaping procedure is not eliciting progress, find another. There are as many ways to get behavior as there are trainers to think them up.

8. Don’t interrupt a training session gratuitously; that constitutes a punishment.9. If behavior deteriorates, “Go back to kindergarten.” Quickly review the whole shaping

process with a series of easily earned reinforcers.10. End each session on a high note, if possible, but in any case quit while you’re ahead.

Reprinted with the permission of Simon & Schuster Adult Publishing Group from Don't Shoot the Dog! The New Art of Teaching and Training (revised edition) by K. Pryor, 1999, pp. 38–39. Copyright © 1984 by Karen Pryor.


magnitude, and duration. Depending on the terminal behavior, any or all of these dimensions can be used to assess achievement. For instance, a special education teacher might shape frequency performance in solving mathematics calculation problems, proceeding from the student's current level of performance (say, half a problem per minute) to a higher rate (e.g., four problems per minute).

Norms for success can be determined by measuring the behavior in a similar peer group or consulting established norms in the literature. For instance, norms and guidelines for reading rate by grade level (Kame'enui, 2002), physical education skills for children by age group (President's Council on Physical Fitness, 2005), and homework guidelines per grade level (Heron, Hippler, & Tincani, 2003) can be referenced to validate progress toward the terminal behavior.

In the earlier illustration, the criterion for success could be the student's sitting in her seat appropriately 90% of the time during the morning activity for 5 consecutive days. In this example, two criteria are specified: the percentage of acceptable sitting behavior per session (i.e., 90%) and the number of days that the criterion must be met to establish the behavior (i.e., 5 consecutive days).
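The two-part criterion in this illustration translates directly into a decision rule: a per-session percentage threshold plus a consecutive-session requirement. The sketch below is illustrative only; the function name and the list-of-percentages data format are assumptions, not part of the text.

```python
def criterion_met(session_percentages, threshold=90.0, consecutive=5):
    """Return True once the most recent `consecutive` sessions all reach
    `threshold`.

    session_percentages: per-session percentage of acceptable in-seat
    behavior, in chronological order (hypothetical data format).
    """
    if len(session_percentages) < consecutive:
        return False
    return all(p >= threshold for p in session_percentages[-consecutive:])

# Four qualifying sessions in a row are not enough; a fifth one is.
history = [72, 85, 91, 94, 90, 92]
print(criterion_met(history))           # False: only 4 consecutive >= 90
print(criterion_met(history + [95]))    # True: last 5 sessions all >= 90
```

A rule like this makes the "criterion for success" explicit enough that any observer applying it to the same session data would reach the same conclusion.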

Analyze the Response Class

The purpose of analyzing the response class is to attempt to identify the approximations that might be emitted in the shaping sequence. When approximation behaviors are known—and anticipated—the practitioner is in a better position to observe and reinforce an approximation when it is emitted. However, the practitioner must recognize that the projected approximations are guesstimates of behaviors that might be emitted. In actuality, the learner may emit behaviors that are merely approximations of these projections. The practitioner must be able to make the professional judgment that the previously emitted behavior either is or is not a closer approximation to the terminal behavior than the behaviors that occurred and were reinforced in the past. According to Galbicka (1994):

The successful shaper must carefully ascertain characteristics of an individual's present response repertoire, explicitly define characteristics the final behavior will have at the end of training, and plot a course between reinforcement and extinction that will bring the right responses along at the right time, fostering the final behavioral sequence while never losing responding altogether. (p. 739)

The relevant approximations across or within a response class can be analyzed in several ways. First, practitioners can consult experts in the field to determine their views on the proper sequence of approximations for a given behavior (Snell & Brown, 2006). For example, teachers who have taught three-digit multiplication for several years can be consulted to learn their experiences with prerequisite behaviors for performing this type of calculation task. Second, as stated earlier, normative data from published studies may be used to provide an estimate of the approximations involved. Third, a videotape can be used to analyze the component behaviors. The videotape would help the practitioner to see motion that might be undetectable in an initial live performance, but become detectable as the practitioner becomes skilled with observing it. Finally, the practitioner can perform the target behavior himself, carefully noting the discrete behavioral components as he produces them.

The ultimate determination of the approximations, the approximation order, the duration of time that reinforcement should be delivered at a given approximation, and the criteria for skipping or repeating approximations rests with the skill and the judgment of the practitioner. Ultimately, the learner's performance should dictate when approximation size should be increased, maintained, or decreased, making consistent and vigilant monitoring by the practitioner essential.

Identify the First Behavior to Reinforce

Two criteria are suggested for identifying the initial behavior for reinforcement: (a) The behavior should already occur at some minimum frequency, and (b) the behavior should be a member of the targeted response class. The first condition reduces the need to wait for the occurrence of the initial behavior. Waiting for a behavior to be emitted can be therapeutically counterproductive and is usually unnecessary. The second criterion sets the occasion for reinforcing an existing behavioral component that has one dimension in common with the terminal behavior. For example, if the terminal behavior is expressive speech, as was the case with Andrew, lip movement would be a good first choice.

Eliminate Interfering or Extraneous Stimuli

Eliminating sources of distraction during shaping enhances the effectiveness of the process. For example, if a parent is interested in shaping one dimension of her daughter's dressing behavior at the local gymnasium (e.g., the rate of dressing) and decides to begin the shaping procedure in a room where cartoons are shown on the television, the shaping program may not be successful because the cartoons will be competing for the daughter's attention. It would be more efficient to choose a time when and a location where sources of distraction can be reduced or eliminated.

Figure 5 Percentage of correct arm extensions at seven different heights. The numbers contained within each criterion condition refer to the height of the photoelectric beam.
From "A Performance Improvement Program for an International-Level Track and Field Athlete" by D. Scott, L. M. Scott, & B. Goldwater, 1997, Journal of Applied Behavior Analysis, 30(3), p. 575. Copyright by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Proceed in Gradual Stages

The importance of proceeding toward the terminal goal in gradual stages cannot be overemphasized. The practitioner should anticipate changes in the rate of progress and be prepared to go from approximation to approximation as the learner's behavior dictates. Each new occurrence of a successive approximation to the terminal behavior must be detected and reinforced. If not, shaping will be, at worst, unsuccessful or, at best, haphazard, requiring much more time. Furthermore, the occurrence of a behavior at a given approximation does not mean that the next response that approximates the terminal behavior will be immediately or accurately produced. Figure 5 shows the results of a study by Scott and colleagues (1997), who found that the percentage of correct arm extensions by the pole-vaulter decreased initially as the height of the bar was raised. Whereas a 90% correct arm extension was noted at the initial height, the vaulter's correct performance typically dropped to approximately 70% each time the bar was raised. After successive attempts at the new height, the 90% correct criterion was reestablished.

The practitioner must also be aware that many trials may be required at a given approximation before the subject can advance to the next approximation. Figure 5 illustrates this point as well. On the other hand, only a few trials may be required. The practitioner must watch carefully and be prepared to reinforce many trials at a given approximation or to move rapidly toward the terminal objective.

Limit the Number of Approximations at Each Level

Just as it is important to proceed gradually from approximation to approximation, it is equally important to ensure that progress is not impeded by offering too many trials at a given approximation. Doing so may cause the behavior to become firmly established, and because that approximation has to be extinguished before progress can begin again (Catania, 1998), the more frequently reinforcement is delivered at a given approximation, the longer the process can take. In general, if the learner is progressing steadily, reinforcement is probably being delivered at the correct pace. If too many mistakes are being made or behavior ceases altogether, the criterion for reinforcement has probably been raised too quickly. Finally, if the performance stabilizes at a certain level, shaping is probably going too slowly. Practitioners may experience all three of these conditions over the course of successive approximations that ultimately lead to the behavior being shaped. Therefore, they must be vigilant and adjust the procedure, if necessary.
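These three pacing conditions can be mirrored in a simple monitoring heuristic. The function below is a hypothetical sketch: the names and threshold values (`error_floor`, `stall_band`) are illustrative choices, not figures from the text, and real pacing decisions rest on the practitioner's judgment.

```python
def pacing_check(recent_success_rates, stall_band=0.05, error_floor=0.5):
    """Classify shaping pace from recent per-approximation success rates
    (proportions between 0 and 1, most recent last).

    Hypothetical thresholds: a latest rate below `error_floor` suggests the
    criterion was raised too quickly; rates that vary by less than
    `stall_band` while staying high suggest too many trials at one level.
    """
    latest = recent_success_rates[-1]
    if latest < error_floor:
        return "criterion raised too quickly"   # too many mistakes
    spread = max(recent_success_rates) - min(recent_success_rates)
    if spread < stall_band and latest > 0.9:
        return "progressing too slowly"         # performance has stabilized
    return "pace about right"                   # steady progress

print(pacing_check([0.6, 0.7, 0.4]))     # criterion raised too quickly
print(pacing_check([0.93, 0.95, 0.94]))  # progressing too slowly
print(pacing_check([0.6, 0.75, 0.85]))   # pace about right
```

A rule of this kind could support, but not replace, the vigilant trial-by-trial monitoring the text calls for.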

Continue to Reinforce When the Terminal Behavior Is Achieved

When the terminal behavior is demonstrated and reinforced, it is necessary to continue to reinforce it. Otherwise the behavior will be lost, and performance will return to a lower level. Reinforcement must continue until the criterion for success is achieved and a maintenance schedule of reinforcement is established.

Future Applications of Shaping

New applications of shaping are being reported in the literature that extend its utility, promote its efficiency, and ultimately benefit the individual with respect to shorter training time and/or skill development (Shimoff & Catania, 1995). At least three future applications of shaping can be considered: using percentile schedules, using computers to teach shaping skills, and combining shaping procedures with robotics engineering.

Percentile Schedules

Galbicka (1994) argued against the often-heard notion that shaping is more art than science and that practitioners can learn to use shaping only by hard-earned direct experience. Building on more than two decades of laboratory research on percentile reinforcement schedules, Galbicka's conceptualization of shaping specifies the criteria for responses and reinforcement in mathematical terms, thereby standardizing the shaping process regardless of the person doing the shaping. He stated:

Percentile schedules disassemble the process of shaping into its constituent components, translate those components into simple, mathematical statements, and then use these equations, with parameters specified by the experimenter or trainer, to determine what presently constitutes a criterional response and should therefore be reinforced. (p. 740)

2They also developed simulations for reinforcement schedules.

Galbicka (1994) conceded that for percentile schedules to be used effectively in applied settings, behaviors must be (a) measured constantly and (b) rank-ordered using a system of comparisons to previous responses. If a response exceeds a criterion value, reinforcement is delivered. If not, reinforcement is withheld. Galbicka projected that understanding how percentile schedules work may

increase our understanding of the complex social and non-social dynamics that shape behavior . . . allow[ing] unprecedented control over experimentally relevant stimuli in operant conditioning and differential procedures and provide a seemingly endless horizon against which to cast our sights for extensions and applications. (p. 759)
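The rank-ordering rule described above can be sketched in code: keep a window of the last m response measurements, derive a percentile-based criterion from it, and reinforce only responses that exceed that criterion. This is a minimal illustration under assumed parameter names (`w`, `m`) and a simple ranking rule, not Galbicka's (1994) formal treatment.

```python
import math

def percentile_criterion(recent, w=0.5):
    """Return the value the current response must exceed to earn
    reinforcement: the (1 - w) quantile of the comparison sample.

    recent: the last m response measurements (e.g., durations in seconds);
    with w = 0.5, roughly half of comparable responses would qualify.
    Names and rule are illustrative, not Galbicka's notation.
    """
    ranked = sorted(recent)
    k = math.ceil((1 - w) * len(ranked)) - 1
    return ranked[max(k, 0)]

def shaping_step(recent, response, w=0.5, m=10):
    """Reinforce if `response` exceeds the criterion, then slide the
    comparison window forward (drop oldest, add newest)."""
    reinforced = response > percentile_criterion(recent, w)
    recent = (recent + [response])[-m:]
    return reinforced, recent

# In-seat durations (seconds) for the last 10 intervals:
window = [5, 8, 6, 9, 7, 10, 6, 8, 7, 9]
ok, window = shaping_step(window, 11)   # well above the criterion
print(ok)                                # True -> reinforcement delivered
ok, window = shaping_step(window, 4)    # below the criterion
print(ok)                                # False -> reinforcement withheld
```

Because the criterion is recomputed from the learner's own recent responses, the requirement rises automatically as performance improves, which is the standardizing property Galbicka emphasizes.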

Using Computers to Teach Shaping

Several options are currently available to teach shaping skills using computers or a combination of specialized software with computer-based applications. For example, Sniffy, the Virtual Rat, is a computer-based, digitized animation of a white rat within a Skinner box. Students can practice basic shaping on their own personal computers using this virtual simulation.

Acknowledging the practical limitations of teaching behavior analysis principles (e.g., shaping) interactively to a large student body, Shimoff and Catania (1995) developed and refined computer simulations to teach the concepts of shaping.2 In their simulation, The Shaping Game, four levels of difficulty (easy, medium, hard, and very hard) were programmed over a series of refinements. Beginning with easier tasks and advancing to the very hard level, students are taught to detect successive approximations to bar-pressing behaviors that should be reinforced, and those that should not. In still later versions, students are expected to shape the movement of the animated rat from one side of a Skinner box to another. Shimoff and Catania provided a case for computer simulations when they stated, "Computers can be important and effective tools for establishing sophisticated behavior that appears to be contingency shaped even in classes that do not provide real (unsimulated) laboratory experience" (pp. 315–316).

Martin and Pear (2003) contended that computers might be used to shape some dimensions of behavior (e.g., topography) more quickly than humans because a computer can be calibrated and programmed with specific decision rules for delivering reinforcement. They further suggested that in medical application cases (e.g., amputees, stroke victims), in which minute muscle movements may go undetected by a human observer, a microprocessor could detect shapeable responses. In their view, "Computers . . . are both accurate and fast and may therefore be useful in answering fundamental questions concerning which shaping procedures are most effective . . . computers may be able to shape at least some kinds of behavior as effectively as humans" (p. 133).

Combining Shaping with Robotics Engineering

Some researchers have begun to explore how shaping can be applied to robot training (e.g., Dorigo & Colombetti, 1998; Saksida, Raymond, & Touretzky, 1997). Essentially, shaping is being considered as one method of programming robots to progress from a start state through a series of intermediate and end-goal states to achieve more complicated sets of commands. According to Savage (1998):

Given that shaping is a significant determinant of how a wide variety of organisms adapt to the changing circumstances of the real world, then shaping may, indeed, have potential as a model for real-world robotic interactions. However, the success of this strategy depends on a clear understanding of the principles of biological shaping on the part of the roboticists, and the effective robotic implementation of these principles. (p. 321)

Although Dorigo and Colombetti (1998) and Saksida and colleagues (1997) concede the difficulty in applying behavioral principles of successive approximations to robots, this line of inquiry, combined with emerging knowledge on artificial intelligence, offers exciting prospects for the future.

Summary

Definition of Shaping

1. Shaping is the differential reinforcement of successive approximations to a desired behavior. In shaping, differential reinforcement is applied to produce a series of slightly different response classes, with each successive response class becoming a closer approximation to the terminal behavior than the previous one.

2. A terminal behavior—the end product of shaping—can be claimed when the topography, frequency, latency, duration, or amplitude/magnitude of the target behavior reaches a predetermined criterion level.

3. Differential reinforcement is a procedure in which reinforcement is provided for responses that share a predetermined dimension or quality, and in which reinforcement is withheld for responses that do not demonstrate that quality.

4. Differential reinforcement has two effects: responses similar to those that have been reinforced occur with greater frequency, and responses resembling the unreinforced members are emitted less frequently (i.e., undergo extinction).

5. The dual effects of differential reinforcement result in response differentiation, the emergence of a new response class composed primarily of responses sharing the characteristics of the previously reinforced subclass.

6. The gradually changing criterion for reinforcement during shaping results in a succession of new response classes, or successive approximations, each one closer in form to the terminal behavior than the response class it replaces.

Shaping across and within Response Topographies

7. Shaping behavior across different response topographies means that select members of a response class are differentially reinforced, while members of other response classes are not reinforced.

8. When shaping a behavior within a response topography, the form of the behavior remains constant, but differential reinforcement is applied to a dimension of the behavior (e.g., frequency, duration).

9. Shaping entails several limitations that practitioners should consider before applying it.

10. In shaping, the antecedent stimulus stays the same, while the response progressively becomes more differentiated. In stimulus fading, the response remains the same, while the antecedent stimulus changes gradually.

Increasing the Efficiency of Shaping

11. The efficiency of shaping may be improved in several ways, including using a discriminative stimulus, a vocal prompt, physical guidance, an imitative prompt, or priming. Any prompt that is introduced is later faded.

Clicker Training

12. Clicker training is a science-based system for shaping behavior using positive reinforcement.

13. The clicker, a handheld device that produces a click sound when a metal tab is pressed, provides the signal that when a behavior is being performed in the presence of the click, reinforcement follows.


Guidelines for Implementing Shaping

14. Before deciding to use shaping, the nature of the behaviors to be learned and the resources available must be assessed.

15. After the decision has been made to use a shaping procedure, the practitioner performs the following steps: select the terminal behavior, decide the criterion for success, analyze the response class, identify the first behavior to reinforce, eliminate interfering or extraneous stimuli, proceed in gradual stages, limit the number of approximations at each level, and continue reinforcement when the terminal behavior is achieved.

Future Applications of Shaping

16. Future applications of shaping include applying percentile schedules, integrating computer technology to teach behavior analysis using shaping procedures, and combining shaping procedures with emerging robotics technology.


Extinction

Key Terms
escape extinction
extinction (operant)
extinction burst
resistance to extinction
sensory extinction
spontaneous recovery

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List,© Third Edition

Content Area 3: Principles, Processes and Concepts

3-11 Define and provide examples of extinction.

Content Area 9: Behavior Change Procedures

9-4 Use extinction:

(a) Identify possible reinforcers maintaining behavior and use extinction.

(b) State and plan for the possible unwanted effects of the use of extinction.

© 2006 The Behavior Analyst Certification Board, Inc.,® (BACB®) all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

From Chapter 21 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


This chapter describes how to reduce the frequency of a previously reinforced behavior by withholding reinforcement, a principle known as extinction. Extinction as a procedure provides zero probability of reinforcement. It is also a behavioral process of a diminishing rate of response. Practitioners have applied this principle of behavior effectively in a wide variety of settings, such as homes, schools, and institutions, and with diverse problem behaviors ranging from severe self-destruction to mild disruptions. However, the effectiveness of extinction in an applied setting depends primarily on the identification of reinforcing consequences and consistent application of the procedure. Extinction does not require the application of aversive stimuli to decrease behavior; it does not provide verbal or physical models of punishers directed toward others. Extinction simply requires the withholding of reinforcers. This chapter defines extinction and describes procedures for applying extinction. Additionally, it discusses extinction behavior and resistance to extinction. Although extinction appears to be a simple process, its application in applied settings can be difficult.

Definition of Extinction

Extinction as a procedure occurs when reinforcement of a previously reinforced behavior is discontinued; as a result, the frequency of that behavior decreases in the future.1 To restate this principle, Keller and Schoenfeld (1950/1995) defined extinction this way: "Conditioned operants are extinguished by severing the relation between the act and the effect. . . . The principle of Type R [operant] extinction may be put as follows: The strength of a conditioned operant may be diminished by withholding its reinforcement" (pp. 70–71). Similarly, Skinner (1953) wrote, "When reinforcement is no longer forthcoming, a response becomes less and less frequent in what is called 'operant extinction'" (p. 69).

DeShawna, a third-grade student, frequently interrupted her teacher when he instructed other students. It appeared that DeShawna tried to direct the teacher's attention from the other students to herself. The teacher always responded to DeShawna's interruptions by answering her questions, asking her to return to her seat, telling her not to interrupt him while he worked with the other students, or explaining that she needed to wait for her turn. DeShawna usually agreed that she would not interrupt her teacher, but her annoying interruptions did not stop. The teacher knew that DeShawna interrupted his instruction to get his attention. Since telling DeShawna not to interrupt him at certain times did not work, the teacher decided to try ignoring her interruptions. DeShawna stopped interrupting her teacher after 4 days during which her interruptions produced no teacher attention.

1The term extinction is also used with respondent conditioned reflexes. Presenting a conditioned stimulus (CS) again and again without the unconditioned stimulus until the CS no longer elicits the conditioned response is called respondent extinction.

In this example, student interruptions (i.e., the problem behavior) were maintained by teacher attention (i.e., the reinforcer). The teacher applied the extinction procedure by ignoring DeShawna's interruptions, and the procedure produced a decrease in the frequency of the interruptions. Note that the extinction procedure does not prevent occurrences of a problem behavior (e.g., the interruptions). Rather, the environment is changed so that the problem behavior no longer produces reinforcement (e.g., teacher attention).

Procedural and Functional Forms of Extinction

The use of functional behavior assessments has enabled applied behavior analysts to distinguish clearly between the procedural variations of extinction (ignoring) and the functional variations of extinction (withholding maintaining reinforcers). This clarification between procedural and functional variations of extinction has led to more effective treatments (Lerman & Iwata, 1996a).

Historically, some applied behavior analysts have emphasized the procedural form of extinction (e.g., just ignore the problem behavior and it will go away) rather than the functional form (i.e., withholding the maintaining reinforcers). Applications of the procedural form of extinction are often ineffective. Functional behavior assessments enabled applied behavior analysts to distinguish clearly between the procedural (ignoring) and the functional forms of extinction (withholding maintaining reinforcers), and as an outcome, may have increased research in using basic extinction procedures in applied settings. When the extinction procedure is matched to the behavioral function, the intervention is usually effective.

Extinction: Misuses of a Technical Term

With the possible exception of negative reinforcement, extinction is perhaps the most misunderstood and misused technical term in applied behavior analysis. Extinction is a technical term that applied behavior analysts should use only to identify the procedure of withholding reinforcers that maintain behavior. Four common misuses of the technical term are presented in the following sections.


Using Extinction to Refer to Any Decrease in Behavior

Some use the term extinction when referring to any decrease in responding, regardless of what produced the behavior change. For example, if a person's responding decreased as a result of punishment such as time-out from reinforcement or a physical restriction of the behavior, a statement such as "the behavior is extinguishing" is misleading and incorrect in a technical sense. Labeling any reduction in behavior that reaches a zero rate of occurrence as extinction is another common misuse of this technical term.

Confusing Forgetting and Extinction

A misuse of the term extinction occurs when the speaker confuses extinction with forgetting. In forgetting, a behavior is weakened by the passage of time during which the person does not have an opportunity to emit the behavior. In extinction, behavior is weakened because it does not produce reinforcement.

Confusing Response Blocking and Sensory Extinction

Applied behavior analysts have used goggles, gloves, helmets, and wrist weights to block the occurrence of responses maintained by automatic reinforcement, rather than masking the sensory stimulation. The applications of response blocking to reduce problem behaviors appear similar to sensory extinction. Response blocking, however, is not an extinction procedure. With all extinction procedures, including those for sensory consequences, the person can emit the problem behavior, but that behavior will not produce reinforcement. By contrast, response blocking prevents the occurrence of the target behavior (Lerman & Iwata, 1996).

Confusing Noncontingent Reinforcement and Extinction

Two definitions of extinction now appear in the applied behavior analysis literature. Each definition uses a different procedure to diminish behavior. We use one definition for extinction throughout this chapter (i.e., Keller & Schoenfeld, 1950/1995; Skinner, 1953). It is the definition most associated with the principle of behavior called extinction. This definition of extinction has served the experimental analysis of behavior and applied behavior analysis well for many years.

The procedure for the second definition does not withhold the reinforcers that maintain the problem behavior. It presents those reinforcers noncontingently (NCR), meaning that the applied behavior analyst delivers stimuli with known reinforcing properties to an individual on a fixed-time or variable-time schedule completely independent of responding.

2Extinction linked to behaviors maintained by positive reinforcement is often called attention extinction in the applied behavior analysis literature, especially in the functional behavior assessment (FBA) literature in which reinforcement by social attention is one of the conditions or hypotheses explored by FBA.

NCR provides an important and effective intervention for diminishing problem behaviors, but NCR operates on behavior in a different way than does the principle of extinction. Extinction diminishes behavior by changing consequence stimuli; NCR diminishes behavior by changing motivating operations. Although both procedures produce a decrease in behavior, the behavioral effects result from different controlling variables. Using an established technical term to describe effects produced by two different procedures is confusing.

Extinction Procedures

Specifically, procedures for extinction take three distinct forms that are linked to behavior maintained by positive reinforcement,2 negative reinforcement, and automatic reinforcement.

Extinction of Behavior Maintained by Positive Reinforcement

Behaviors maintained by positive reinforcement are placed on extinction when those behaviors do not produce the reinforcer. Williams (1959) in a classic study described the effects of the removal of positive reinforcement (extinction) on the tyrant-like behavior of a 21-month-old boy. The child had been seriously ill for the first 18 months of his life but had completely recovered when the study began. The boy demanded special attention from his parents, especially at bedtime, and responded with temper tantrums (e.g., screaming, fussing, crying) when his parents did not provide the attention. A parent spent ½ to 2 hours each bedtime waiting in the bedroom until the boy went to sleep.

Although Williams did not speculate about how the tantrums developed, a likely explanation is not difficult to imagine. Because the child had been seriously ill for much of his first 18 months, crying had been his signal that he was experiencing discomfort or pain or needed help. In effect, his crying may have been reinforced by his parents' attention. Crying became a high-frequency behavior during the extended illness and continued after the child's health improved. The parents probably realized eventually that their child cried at bedtime to gain their attention and tried to ignore the crying. The crying increased in intensity and emotion when they did not stay in the room. As the days and weeks went by, the child's demand for absolute attention at bedtime grew worse. The parents probably decided again not to stay with the child at bedtime; but the intensity of the crying increased, and some new tantrum behaviors occurred. The parents returned to the room and, by giving in to the tantrums, taught their child to behave like a tyrant.

Figure 1 Two extinction series showing duration of crying (in minutes) as a function of times the child was put to bed.
From "The Elimination of Tantrum Behavior by Extinction Procedures" by C. D. Williams, 1959, Journal of Abnormal and Social Psychology, 59, p. 269. Copyright 1959 by the American Psychological Association. Reprinted by permission of the publisher and author.

After 3 months of the tyrant-like behavior, the parents decided that they must do something about their child's tantrums. It appeared that parental attention maintained the tantrums; therefore, they planned to apply the principle of extinction. The parents, in a leisurely and relaxed manner, put the child to bed, left the bedroom, and closed the door. The parents recorded the duration of screaming and crying from the moment the door was closed.

Figure 1 shows the duration of the tantrums during extinction. The child had a tantrum for 45 minutes the first time he was put to bed without his parents staying in the room. The tantrums gradually decreased until the 10th session, when the child "no longer whimpered, fussed, or cried when the parents left the room. Rather, he smiled as they left. The parents reported that their child made happy sounds until he dropped off to sleep" (Williams, 1959, p. 269).

No tantrums occurred for approximately 1 week, but they resumed after his aunt put him to bed. When the child started to tantrum, the aunt returned to the bedroom and stayed with the child until he went to sleep. Tantrums then returned to the previous high level and had to be decreased a second time.

Figure 1 also shows the duration of tantrums for the 10 days following the aunt's intervention. The data curve is similar to that of the first removal of parent attention. The duration of the tantrums was somewhat greater during the second removal, but reached zero by the ninth session. Williams reported that the child had no further bedtime tantrums during 2 years of follow-up.

Extinction of Behavior Maintained by Negative Reinforcement

Behaviors maintained by negative reinforcement are placed on extinction (a.k.a. escape extinction) when those behaviors do not produce a removal of the aversive stimulus, meaning that the person cannot escape from the aversive situation. Anderson and Long (2002) and Dawson and colleagues (2003) provided excellent examples of using escape extinction as a behavioral intervention.

Anderson and Long (2002) provided treatment to reduce the problem behaviors of Drew, an 8-year-old boy with autism and moderate to severe retardation. Drew's problem behaviors consisted of self-injurious behavior (SIB), aggression, and disruptions. Anderson and Long completed a functional behavior assessment, and then hypothesized that escape from task situations maintained the problem behaviors. Based on this hypothesis, the speech therapist used escape extinction to diminish the problem behaviors that occurred while Drew worked on match-to-sample and receptive language tasks. These tasks evoked the highest rates of problem behavior. The speech therapist provided instructional prompts during the tasks. When Drew emitted problem behaviors following an instructional prompt, the speech therapist physically guided him to task completion. Escape extinction produced a significant reduction in problem behaviors during the match-to-sample and receptive language tasks. Figure 2 shows the number of problem behaviors emitted per minute during baseline (i.e., escape) and escape extinction conditions.

Dawson and colleagues (2003) reported the use of escape extinction to diminish food refusal by Mary, a 3-year-old who was admitted to a day program for the treatment of total food refusal. Her medical history included gastroesophageal reflux, delayed gastric emptying, and gastrostomy tube dependence, among other medical issues. Her food refusal included head turning when presented with a bite of food, putting her hand on the spoon or therapist's hand or arm, and using her hands or bib to cover her face.

Figure 2 The number of problem behaviors per minute by Drew during baseline and escape extinction conditions.
From "Use of a Structured Descriptive Assessment Methodology to Identify Variables Affecting Problem Behavior" by C. M. Anderson and E. S. Long, 2002, Journal of Applied Behavior Analysis, 35(2), p. 152. Copyright 2002 by the Society for the Experimental Analysis of Behavior, Inc. Adapted by permission.

The escape extinction procedure followed 12 sessions of food refusal. If Mary refused the bite of food, the therapist held the spoon to Mary's mouth until she took the bite. When Mary expelled the bite, the food was re-presented until swallowed. With this procedure, Mary's refusals did not produce an escape from the food. The therapist terminated the session after Mary had swallowed 12 bites of food. Mary accepted no food during the 12 baseline sessions; but after two sessions of escape extinction, Mary's food acceptance increased to 100% compliance.

Extinction of Behavior Maintained by Automatic Reinforcement

Behaviors maintained by automatic reinforcement are placed on extinction (a.k.a. sensory extinction) by masking or removing the sensory consequence (Rincover, 1978). Some behaviors produce natural sensory consequences that maintain the behavior. Rincover (1981) described a naturally occurring sensory consequence as a stimulus that "sounds good, looks good, tastes good, smells good, feels good to the touch, or the movement itself is good" (p. 1).

Sensory extinction is not a recommended treatment option for problem behaviors, even self-stimulatory behaviors, that are maintained by social consequences or negative reinforcement. Automatic reinforcement, however, can maintain self-injurious behavior (SIB) and persistent, nonpurposeful, repetitive self-stimulatory behaviors (e.g., flipping fingers, head rocking, toe walking, hair pulling, fondling body parts). For example, Kennedy and Sousa (1995) reported that a 19-year-old man with profound disabilities had poked his eyes for 12 years, which resulted in visual impairment in both eyes. Eye poking was thought to serve a sensory stimulation function because it occurred most frequently when he was alone. When Kennedy and Sousa used goggles to mask contact with his eyes, eye-poking behavior decreased markedly.

Social consequences frequently maintain aggression, but not always. Automatic reinforcement can maintain aggression also, as it can with SIB and self-stimulatory behaviors (Thompson, Fisher, Piazza, & Kuhn, 1998). Deaver, Miltenberger, and Stricker (2001) used extinction to decrease hair twirling. Hair twirling is a frequent precursor to hair pulling, a serious self-injury. Tina, at 2 years and 5 months of age, received treatment for hair twirling and pulling. A functional analysis documented that Tina did not twirl or pull hair to obtain attention, and that hair twirling most often occurred when Tina was alone at bedtime. The sensory extinction procedure consisted of Tina wearing thin cotton mittens on both hands at naptime during day care and during bedtime at home. Figure 3 shows that sensory extinction diminished hair twirling to near-zero levels at home and at day care.

Rincover, Cook, Peoples, and Packard (1979) and Rincover (1981) provided the following examples of applying extinction with automatic reinforcement:

1. A child persisted in flipping a light switch on and off. The visual sensory consequence was removed by disconnecting the switch.

2. A child persistently scratched his body until it bled. The tactile (touch) sensory consequence was removed by putting a thin rubber glove on his hand so that he could not feel his skin. Later, the glove was faded by gradually cutting off portions of it.

3. A child would throw up, and then eat the vomit. The gustatory (taste) extinction procedure consisted of adding lima beans to the vomit. The child did not like lima beans; therefore, the vomit did not taste as good, and the positive sensory consequence was masked.

4. A child received kinesthetic stimulation (i.e., stimulation of muscles, tendons, and joints) by holding his arms out to his side and incessantly flapping his fingers, wrists, and arms. The extinction procedure consisted of taping a small vibratory mechanism on the back of his hand to mask the kinesthetic stimulation.

5. A child produced auditory stimulation by persistently twirling an object, such as a plate, on a table. Carpeting the surface of the table he used for spinning objects masked the auditory stimulation from his plate spinning.

Figure 3 The percentage of session time of hair twirling during baseline and treatment (mittens) conditions in the home and day care settings.
From "Functional Analysis and Treatment of Hair Twirling in a Young Child" by C. M. Deaver, R. G. Miltenberger, and J. M. Stricker, 2001, Journal of Applied Behavior Analysis, 34, p. 537. Copyright 2001 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.

Extinction Effects

When a previously reinforced behavior is emitted but is not followed by the usual reinforcing consequences, the occurrence of that behavior should either gradually decrease to its prereinforcement level or stop entirely. Behaviors undergoing extinction are usually associated with predictable characteristics in rate and topography of response. These extinction effects have strong generality across species, response classes, and settings (Lerman & Iwata, 1995, 1996a; Spradlin, 1996). Applied behavior analysts, however, have not devoted much research effort to the basic extinction procedure beyond using extinction as a component in treatment packages for problem behaviors (Lerman & Iwata, 1996a). Consequently, extinction effects have not been documented clearly in applied settings.

Lerman and Iwata (1996a) cautioned applied behavior analysts that these clearly documented extinction effects may have limited generality in applied behavior analysis. Applied researchers and practitioners should view all of the following comments on extinction effects tentatively when they relate to behavioral interventions or applied research rather than basic research. Applied behavior analysts almost always apply extinction as an element of a treatment package, thereby confounding the understanding of extinction behavior in applied settings.

Gradual Decrease in Frequency and Amplitude

Extinction produces a gradual reduction in behavior. However, when reinforcement is removed abruptly, numerous unreinforced responses can follow. This gradual decrease in response frequency will tend to be sporadic, with a gradual increase in pauses between responses (Keller & Schoenfeld, 1950/1995). The extinction procedure is often difficult for teachers and parents to apply because of the initial increase in frequency and magnitude and the gradual decrease in behavior. For example, parents may be unwilling to ignore tantrum behavior for a sufficient amount of time because tantrums are so aversive to parents. Rolider and Van Houten (1984) presented a tactic for this practical problem. They suggested teaching parents to ignore gradually increasing durations of bedtime crying. They used baseline data to assess how long the parents could ignore bedtime crying comfortably before attending to their child. Then, the parents gradually increased their duration of ignoring. Every 2 days they waited an additional 5 minutes before attending to the child until a sufficient total duration of time was achieved.

Figure 4 Illustration of an extinction burst and spontaneous recovery (responses over time, before extinction [reinforcement] and during extinction).
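The every-2-days, 5-minute progression just described can be sketched as a simple schedule generator. This is a minimal illustration only; the function name, parameter names, and default values are our assumptions, not a procedure specified by Rolider and Van Houten.

```python
def ignoring_schedule(start_minutes, increment=5, days_between_steps=2, total_days=14):
    """Planned ignoring duration (in minutes) for each day of the program.

    start_minutes: how long the parents could comfortably ignore crying
    during baseline. Every `days_between_steps` days, the planned duration
    increases by `increment` minutes (illustrative parameters only).
    """
    schedule = []
    for day in range(total_days):
        steps_completed = day // days_between_steps  # duration rises every 2 days
        schedule.append(start_minutes + steps_completed * increment)
    return schedule
```

For example, a parent who could comfortably ignore 10 minutes of crying at baseline would, under this sketch, plan 10 minutes on days 1 and 2, 15 minutes on days 3 and 4, and so on.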

Extinction Burst

A general effect of the extinction procedure is an immediate increase in the frequency of the response after the removal of the positive, negative, or automatic reinforcement. The behavioral literature uses the term extinction burst to identify this initial increase in response frequency. Figure 4 presents an illustration of an extinction burst. Operationally, Lerman, Iwata, and Wallace (1999) defined extinction burst as "an increase in responding during any of the first three treatment sessions above that observed during all of the last five baseline sessions or all of baseline" (p. 3). The extinction burst is well documented in basic research, but not well documented in applied research (Lerman & Iwata, 1995, 1996a). When reported, the bursts have occurred for only a few sessions without notable problems.
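Lerman, Iwata, and Wallace's operational definition lends itself to a direct check against session data. The sketch below is our illustration, not code from the cited study; the function name and the assumption that data arrive as per-session response counts are ours.

```python
def shows_extinction_burst(baseline, treatment):
    """Apply the Lerman, Iwata, and Wallace (1999) operational definition:
    a burst is responding in any of the first three treatment sessions
    above that observed during all of the last five baseline sessions,
    or above all of baseline.
    """
    recent_peak = max(baseline[-5:])   # highest of the last five baseline sessions
    overall_peak = max(baseline)       # highest of all baseline sessions
    # Either criterion suffices under the definition.
    return any(r > recent_peak or r > overall_peak for r in treatment[:3])
```

With baseline responding of 4, 3, 4, 2, 3, 4 responses per session, a first treatment session of 6 responses would count as a burst; first sessions of 4, 3, 2 would not.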

Goh and Iwata (1994) provided data showing an extinction burst. Steve was a 40-year-old man with profound mental retardation. He had been referred for evaluation for self-injury (head banging, head hitting). A functional analysis showed that self-injury was reinforced by escape from instructions. Goh and Iwata used extinction for the treatment of Steve's self-injury. The top panel in Figure 5 shows extinction bursts occurring with the onset of each of the two extinction phases.

Even though applied researchers have rarely reported on extinction bursts, they do occur in applied settings (e.g., Richman, Wacker, Asmus, Casey, & Anderson, 1999; Vollmer et al., 1998). Problem behaviors can worsen during extinction before they show improvement. For example, teachers should anticipate an initial increase in disruption during extinction. Thereafter, problem behaviors should begin to decrease and should return to their prereinforcement level. Applied behavior analysts need to anticipate extinction bursts and be prepared to consistently withhold the reinforcing consequence. An extinction burst usually suggests that the reinforcer(s) maintaining the problem behavior were successfully identified, indicating that there is a good chance of an effective intervention.

Initial Increase in Response Amplitude

In addition to the extinction burst, which is an increase in the frequency of the response, an initial increase in the amplitude or force of the response also may occur during extinction. Parents who begin to ignore a child's bedtime tantrums might experience an increase in the loudness (i.e., amplitude) of screaming and the force of kicking before the tantrums begin to diminish.

Spontaneous Recovery

During extinction a behavior will typically continue a decreasing trend until it reaches its prereinforcement level or ultimately ceases. However, a phenomenon commonly associated with the extinction process is the reappearance of the behavior after it has diminished to its prereinforcement level or stopped entirely. Basic researchers commonly report this extinction effect and call it spontaneous recovery. With spontaneous recovery, the behavior that diminished during the extinction process recurs even though the behavior does not produce reinforcement. Spontaneous recovery is short-lived and limited if the extinction procedure remains in effect (see Figure 4).

Figure 5 Number of responses per minute of self-injurious behavior (SIB) (top panel) and aggression (bottom panel) for Steve during baseline and extinction.
From "Behavioral Persistence and Variability during Extinction of Self-Injury Maintained by Escape" by H.-L. Goh and B. A. Iwata, 1994, Journal of Applied Behavior Analysis, 27, p. 174. Copyright 1994 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.

Applied behavior analysts have not researched the characteristics and prevalence of spontaneous recovery (Lerman & Iwata, 1996a). Therapists and teachers need to know about spontaneous recovery, however, or they might conclude erroneously that the extinction procedure is no longer effective.

Variables Affecting Resistance to Extinction

Behavior analysts refer to continued responding during the extinction procedure as resistance to extinction. Behavior that continues to occur during extinction is said to have greater resistance to extinction than behavior that diminishes quickly. Resistance to extinction is a relative concept. Three methods are commonly used to measure resistance to extinction. Reynolds (1968) described two measurement procedures: the rate of decline in response frequency, and the total number of responses emitted before responding either attains some final low level or ceases. Lerman, Iwata, Shore, and Kahng (1996) reported measuring resistance to extinction as the duration of time required for a behavior to reach a predetermined criterion.
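Two of these measures can be sketched directly from per-session response counts. This is an illustrative sketch; the function names, the session-count data format, and counting sessions rather than elapsed time (Lerman and colleagues measured duration) are our assumptions, not procedures from the cited studies.

```python
def sessions_to_criterion(extinction_data, criterion):
    """How long until responding first falls to or below a predetermined
    criterion (counted here in sessions as a stand-in for elapsed time)."""
    for session, responses in enumerate(extinction_data, start=1):
        if responses <= criterion:
            return session
    return None  # criterion never reached in the data provided


def total_responses_to_criterion(extinction_data, criterion):
    """Total responses emitted before responding reaches the criterion,
    one version of Reynolds's total-responses measure."""
    total = 0
    for responses in extinction_data:
        if responses <= criterion:
            break
        total += responses
    return total
```

For session counts of 12, 8, 5, 1, 0 and a criterion of 1 response per session, the behavior reaches criterion in the fourth session after emitting 25 responses; a behavior needing more sessions or more responses to reach the same criterion would be described as more resistant to extinction.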

Continuous and Intermittent Reinforcement

Three tentative statements describe resistance to extinction as it relates to continuous and intermittent reinforcement. (a) Intermittent reinforcement may produce behavior with greater resistance to extinction than the resistance produced by continuous reinforcement. For example, behavior maintained on a continuous schedule of reinforcement may diminish more quickly than behavior maintained on intermittent schedules of reinforcement (Keller & Schoenfeld, 1950/1995). (b) Some intermittent schedules may produce more resistance to extinction than others (Ferster & Skinner, 1957). The two variable schedules (VR, VI) may produce more resistance to extinction than the fixed schedules (FR, FI). (c) To a degree, the thinner the intermittent schedule of reinforcement is, the greater the resistance to extinction will be.

An undergraduate student in an introductory class in applied behavior analysis gave an interesting example of the effects of intermittent reinforcement on persistence. About 2 months after a lecture on resistance to extinction, the student shared this experience:

You know, intermittent reinforcement really does affect persistence. This guy I know was always calling me for a date. I didn't like him and didn't enjoy being with him. Most of the time when he asked for a date I would not go out with him. Occasionally, because of his persistence, I would give in and accept a date. After your lecture I realized that I was maintaining his calling on an intermittent schedule of reinforcement, which could account for his persistence. I decided to change this situation and put him on a continuous schedule of reinforcement. Every time he called for a date, I agreed to go out with him. We spent about four evenings a week together for 3 weeks. Then without giving any reason, I abruptly stopped accepting dates with him. Since then he's called only three times, and I have not heard from him in 4 weeks.

Establishing Operation

All stimuli that function as reinforcers require a minimum level of an establishing operation (i.e., motivation must be present). The strength of the establishing operation (EO) above the minimum level will influence resistance to extinction. Basic research has shown that "[R]esistance to extinction is greater when extinction is carried out under high motivation than under low" (Keller & Schoenfeld, 1950/1995, p. 75). We assume that the functional relation between an establishing operation and resistance to extinction exists in applied settings as well as laboratory settings. For example, the fellow might persist in calling the woman for a date if he had made a sizable bet with his buddies that the woman would date him again.

Number, Magnitude, and Quality of Reinforcement

The number of times a behavior has produced reinforcement may influence resistance to extinction. A behavior with a long history of reinforcement may have more resistance to extinction than a behavior with a shorter history of reinforcement. If bedtime tantrums have produced reinforcement for 1 year, they might have more resistance to extinction than tantrums that have produced reinforcement for only 1 week.

Lerman, Kelley, Vorndran, Duhn, and LaRue (2002) reported that applied behavior analysts have not established what effects the magnitude of a reinforcer has on resistance to extinction. However, the magnitude and quality of a reinforcer will likely influence resistance to extinction. A reinforcer of greater magnitude and quality might produce more resistance to extinction than a reinforcer of lesser magnitude and quality.

Number of Previous Extinction Trials

Successive applications of conditioning and extinction may influence resistance to extinction. Sometimes problem behaviors diminish during extinction, and then are accidentally strengthened with reinforcement. When this happens, applied behavior analysts can reapply the extinction procedure. Typically, behavior may diminish quickly, with fewer total responses, during a reapplication of extinction. This effect is additive when the participant can discriminate the onset of extinction. With each successive application of extinction, decreases in behavior become increasingly rapid until only a single response occurs following the withdrawal of reinforcement.

Response Effort

Even though the data are limited, applied researchers have produced some findings on response effort and its effect on resistance to extinction (Lerman & Iwata, 1996a). The effort required for a response apparently influences its resistance to extinction. A response requiring greater effort diminishes more quickly during extinction than a response requiring less effort.

Using Extinction Effectively

Numerous guidelines for using extinction effectively have been published, with most authors providing similar recommendations. Presented below are 10 guidelines to follow before and during the application of extinction: withholding all reinforcers maintaining the problem behavior, withholding reinforcement consistently, combining extinction with other procedures, using instructions, planning for extinction-produced aggression, increasing the number of extinction trials, including significant others in extinction, guarding against unintentional extinction, maintaining extinction-decreased behavior, and recognizing circumstances under which extinction should not be used.

Withholding All Reinforcers Maintaining the Problem Behavior

A first step in using extinction effectively is to identify and withhold all possible sources of reinforcement that maintain the target behavior. The effectiveness of extinction depends on the correct identification of the consequences that maintain the problem behavior (Iwata, Pace, Cowdery, & Miltenberger, 1994). Functional behavior assessments have greatly improved the application and effectiveness of extinction in applied settings (Lerman, Iwata, & Wallace, 1999; Richman, Wacker, Asmus, Casey, & Anderson, 1999).

Applied behavior analysts collect data on antecedent and consequent stimuli that are temporally related to the problem behavior and provide answers to questions such as the following:

1. Does the problem behavior occur more frequently when something happens in the environment to occasion the behavior (e.g., a demand or request)?

2. Is the frequency of the problem behavior unrelated to antecedent stimuli and social consequences?

3. Does the problem behavior occur more frequently when it produces attention from other persons?

If the answer to the first question is yes, the problem behavior may be maintained by negative reinforcement. If the answer to the second question is yes, the applied behavior analyst will need to consider withholding tactile, auditory, visual, gustatory, olfactory, and kinesthetic consequences, alone or in combination. If the answer to the third question is yes, the behavior may be maintained by positive reinforcement in the form of social attention.
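This mapping from answers to hypothesized maintaining consequences can be pictured as a small decision sketch. It is purely illustrative; the function name, parameter names, and labels are our assumptions, not an FBA algorithm from the text, and a real assessment would weigh much richer data than three yes/no answers.

```python
def hypothesized_sources(demand_related, unrelated_to_social, attention_related):
    """Map yes/no answers to the three questions above onto hypothesized
    maintaining consequences (illustrative labels only)."""
    hypotheses = []
    if demand_related:        # question 1: behavior follows demands or requests
        hypotheses.append("negative reinforcement (escape)")
    if unrelated_to_social:   # question 2: independent of social antecedents/consequences
        hypotheses.append("automatic reinforcement (sensory consequences)")
    if attention_related:     # question 3: behavior produces attention from others
        hypotheses.append("positive reinforcement (social attention)")
    return hypotheses or ["undetermined: further assessment needed"]
```

Note that more than one hypothesis can be returned, which anticipates the next paragraph's point that behaviors are frequently maintained by multiple sources of reinforcement.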

The consequence that maintains the problem behavior appears obvious in some applied settings. In the Williams (1959) study, for instance, parental attention appeared to be the only source of reinforcement maintaining the tyrant-like behavior at bedtime. However, behaviors are frequently maintained by multiple sources of reinforcement. The class clown's behavior might be maintained by the teacher's reaction to the disruptive behavior, by the attention received from peers, or by a combination of both. Johnny might cry when his parent brings him to preschool to escape from preschool, to keep his parents with him, to occasion his teacher's concern and attention, or to achieve some combination of all three. When multiple sources of reinforcement maintain problem behavior, identifying and withholding one source of reinforcement may have minimal or no effect on behavior. If teacher and peer attention maintain clownish classroom behavior, then withholding only teacher attention may produce little change in the problem behavior. The teacher must withhold her attention and also teach the other students to ignore the clownish behavior to apply the extinction procedure effectively.

Withholding Reinforcement Consistently

When the reinforcing consequences have been identified, teachers must withhold them consistently. All behavior change procedures require consistent application, but consistency is essential for extinction. Teachers, parents, therapists, and applied behavior analysts often report that consistency is the single most difficult aspect of using extinction. The error of not withholding reinforcement consistently negates the effectiveness of the extinction procedure, and this point cannot be overemphasized.

Combining Extinction with Other Procedures

Extinction is an effective singular intervention. However, teachers and therapists, applied researchers, and writers of textbooks have rarely recommended it as such (Vollmer et al., 1998). We also recommend that applied behavior analysts always consider combining extinction with other treatments, especially the reinforcement of alternative behaviors. Two reasons support this recommendation. First, the effectiveness of extinction may increase when it is combined with other procedures, especially positive reinforcement. By combining extinction with differential reinforcement of appropriate behaviors, for example, the applied behavior analyst alters the environment by reinforcing appropriate alternative behaviors and placing problem behaviors on extinction. During intervention, the extinction procedure should not reduce the overall amount of positive consequences received by a participant. Second, differential reinforcement and antecedent procedures hold promise for reducing extinction effects such as bursting and aggression (Lerman, Iwata, & Wallace, 1999).

Rehfeldt and Chambers (2003) combined extinction with differential reinforcement for the treatment of verbal perseveration maintained by positive reinforcement in the form of social attention. The participant was an adult with autism. Rehfeldt and Chambers presented social attention contingent on appropriate, nonperseverative speech, and applied attention extinction by not responding to the participant's inappropriate verbal behavior. This combined treatment produced an increase in appropriate verbal responses and a decrease in perseveration. These data suggest that contingencies of reinforcement might maintain the unusual speech of some individuals with autism.

Using Instructions

The contingencies of reinforcement affect the future frequency of behavior automatically. It is not necessary for people to know, to describe, or even to perceive that contingencies are affecting their behavior. However, behaviors sometimes diminish more quickly during extinction when teachers describe the extinction procedure to students. For example, teachers frequently provide small-group instruction while other students do independent seatwork. When seatwork students ask questions, they interrupt instruction. Many teachers correct this problem by placing question asking on extinction. They simply ignore student questions until after the end of small-group instruction. This tactic is often effective. The extinction procedure, however, tends to be more effective when teachers tell students that they will ignore all their questions until after the end of small-group instruction.

Planning for Extinction-Produced Aggression

Behaviors that occurred infrequently in the past will sometimes become prominent during extinction by replacing the problem behaviors. Frequently, these side effect replacement behaviors are aggressive (Lerman et al., 1999). Skinner (1953) interpreted these changes in response topography (e.g., side effects) as emotional behavior, including the aggression that sometimes accompanies extinction.

Goh and Iwata (1994) provided a convincing demonstration of how extinction-induced aggression can occur when a target behavior is placed on extinction. Steve was a 40-year-old man with profound mental retardation. He had been referred for evaluation for self-injury (head banging, head hitting). A functional analysis showed that self-injury was reinforced by escape from instructions. Goh and Iwata used extinction for the treatment of Steve's self-injury. Steve rarely slapped or kicked other persons (i.e., aggression) during the two baseline phases, but aggression increased with the onset of the two extinction phases (see Figure 5, bottom panel). Aggression essentially stopped by the end of each extinction phase, when self-injury was stable and low, even though Steve's aggression remained untreated during the baseline and extinction conditions.

Applied behavior analysts need to plan for managing aggressive behaviors when they occur as side effects of extinction. It is critical that extinction-produced aggression not produce reinforcement. Frequently, parents, teachers, and therapists react to the aggression with attention that may function as reinforcement of the extinction-produced aggression. For instance, a teacher decides to ignore DeShawna's questions while providing small-group instruction. When DeShawna interrupts the teacher with a question, the teacher does not respond. DeShawna then starts disrupting other seatwork students. To quiet DeShawna down, the teacher responds, "Oh, all right, DeShawna, what did you want to know?" In effect, the teacher's behavior may have reinforced DeShawna's interruptions during small-group instruction and the accompanying inappropriate disruptions of the other seatwork students.

Many times extinction-produced aggression takes the form of verbal abuse. Often, teachers and parents do not need to react to it. If extinction-produced aggression produces reinforcement, individuals will simply use other inappropriate behavior, such as verbal abuse, to produce reinforcement. Teachers and parents cannot and should not ignore some forms of aggression and self-injurious behavior. Teachers and parents need to know (a) that they can ignore some aggression, (b) when they need to intervene on the aggression, and (c) what they will do for the intervention.

Increasing the Number of Extinction Trials

An extinction trial occurs each time the behavior does not produce reinforcement. Whenever possible, applied behavior analysts should increase the number of extinction trials for the problem behaviors. Increasing extinction trials improves the efficiency of extinction by accelerating the extinction process. Applied behavior analysts can increase extinction trials when applied settings will accommodate frequent occurrences of the problem behavior. For example, Billy's parents employed an extinction procedure to reduce his tantrums. His parents noticed that Billy had tantrums most often when he did not get his way about staying up late, eating a snack, and going outside. For the purpose of the program, they decided to set up several additional situations each day in which Billy did not get his way. Billy was then more likely to emit the inappropriate behavior at a higher rate, thereby giving his parents more occasions to ignore it. As a result, his tantrums decreased in a shorter period of time than they would have if left at their usual rate.
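The accelerating effect of additional extinction trials can be pictured with a simple quantitative sketch. This is an illustrative model, not a procedure from the text: it assumes each non-reinforced trial multiplies the remaining response strength by a constant decay factor, and the function name and parameter values are hypothetical.

```python
def trials_to_criterion(trials_per_day, decay=0.9, start=1.0, criterion=0.1):
    """Days until response strength falls below `criterion`, assuming each
    extinction trial multiplies strength by `decay` (illustrative model)."""
    strength, days = start, 0
    while strength >= criterion:
        days += 1
        strength *= decay ** trials_per_day
    return days

# More daily extinction trials -> fewer days to reach the low-strength criterion.
for n in (2, 5, 10):
    print(n, "trials/day:", trials_to_criterion(n), "days")
```

Under these assumptions, arranging more occasions per day on which the behavior goes unreinforced shortens the time to a low level of responding, which mirrors the point about Billy's tantrums declining sooner when his parents set up additional daily situations in which he did not get his way.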

Including Significant Others in Extinction

It is important that other persons in the environment not reinforce undesirable behavior. A teacher, for example, needs to share extinction plans with other people who might help in the classroom (parent volunteers, grandparents, music teachers, speech therapists, industrial arts specialists) to avoid their reinforcing inappropriate behaviors. All people in contact with the learner must apply the same extinction procedure for effective treatment.

Guarding against Unintentional Extinction

Desirable behaviors are often unintentionally placed on extinction. A beginning teacher confronted with one student who is on task and many other students who are not will probably direct most of his attention to the majority and will provide little or no attention to the student who is working. It is common practice to give the most attention to problems (the squeaky wheel gets the grease) and to ignore situations that are going smoothly. However, behaviors must continue to be reinforced if they are to be maintained. All teachers must give attention to students who are on task.

Maintaining Extinction-Decreased Behavior

Applied behavior analysts leave the extinction procedure in effect permanently for maintaining the extinction-diminished behavior. A permanent application of escape extinction and attention extinction is a preferred procedure. Applied behavior analysts can use a permanent application also with some sensory extinction procedures. For example, the carpeting on a table used for spinning objects could remain in place indefinitely. Some applications of sensory extinction appear inappropriate and inconvenient if kept permanently in effect. For example, requiring Tina to permanently wear thin cotton mittens on both hands at naptime during day care and during bedtime at home appears inappropriate and inconvenient (see Figure 3). In such a case, the applied behavior analyst can maintain treatment gains by gradually fading out the sensory extinction procedure, for example, by cutting off 1 inch of the glove palm every 3 to 4 days until the glove palm is removed. The analyst may then remove the fingers and thumb one at a time to gradually remove the mittens.

When Not to Use Extinction

Imitation

Extinction can be inappropriate if the behaviors placed on extinction are likely to be imitated by others. Some behaviors can be tolerated if only one person emits them, but become intolerable if several persons emit them.

Extreme Behaviors

With few exceptions, most applications of extinction as a singular intervention have focused on important but relatively minor behavior problems (e.g., disruptive classroom behavior, tantrums, excessive noise, mild forms of aggression). However, some behaviors are so harmful to self or others or so destructive to property that they must be controlled with the most rapid and humane procedure available. Extinction as a singular intervention is not recommended in such situations.

The use of extinction as a singular intervention to decrease severe aggression toward self, others, or property raises ethical concerns. Addressing the issue of ethics, Pinkston, Reese, LeBlanc, and Baer (1973) analyzed the effects of an extinction technique that did not allow a person to harm himself or his victim. In their approach the aggressor was ignored, but the victim was sheltered from attack. Pinkston and colleagues demonstrated the effectiveness of a safe extinction technique with an extremely aggressive preschool boy, whose aggressive behaviors included choking, biting, pinching, hitting, and kicking classmates. During the baseline condition the teachers responded to the child's aggression as they had in the past. "Typically, this took the form of verbal admonitions or reproofs such as: 'Cain, we do not do that here,' or 'Cain, you can't play here until you are ready to be a good boy'" (p. 118). The teachers did not attend to the boy's aggressive behaviors during extinction. When he attacked a peer, the teachers immediately attended to the peer. The victim was consoled and provided with an opportunity to play with a toy. In addition, the teachers attended to the boy's positive behaviors. This extinction procedure was effective in greatly reducing aggression. The application of extinction requires sound, mature, humane, and ethical professional judgment.

Summary

Definition of Extinction

1. Extinction as a procedure provides zero probability of reinforcement. It is also a behavioral process of a diminishing rate of response.

2. Functional behavior assessments have enabled applied behavior analysts to distinguish clearly between the procedural variations of extinction (ignoring) and the functional variations of extinction (withholding maintaining reinforcers).

3. Extinction is a technical term that applied behavior analysts should use only to identify the procedure of withholding reinforcers that maintain behavior.

Extinction Procedures

4. Procedures for extinction take three distinct forms that are linked to behavior maintained by positive reinforcement, negative reinforcement, and automatic reinforcement.

5. Behaviors maintained by positive reinforcement are placed on extinction when those behaviors do not produce the reinforcer.

6. Behaviors maintained by negative reinforcement are placed on extinction (a.k.a., escape extinction) when those behaviors do not produce a removal of the aversive stimulus, meaning that the individual cannot escape from the aversive situation.

7. Behaviors maintained by automatic reinforcement are placed on extinction (a.k.a., sensory extinction) by masking or removing the sensory consequence.

Extinction Effects

8. Behaviors undergoing extinction are usually associated with predictable characteristics in rate and topography of response.

9. Extinction produces a gradual reduction in behavior.

10. A general effect of the extinction procedure is an immediate increase in the frequency of the response after the removal of the positive, negative, or automatic reinforcement. The behavioral literature uses the term extinction burst to identify this initial increase in response frequency.

11. With spontaneous recovery, the behavior that diminished during the extinction process recurs even though the behavior does not produce reinforcement.

Variables Affecting Resistance to Extinction

12. Behavior that continues to occur during extinction is said to have greater resistance to extinction than behavior that diminishes quickly. Resistance to extinction is a relative concept.

13. Intermittent schedules of reinforcement may produce behavior with greater resistance to extinction than the resistance produced by a continuous reinforcement schedule.

14. Variable schedules of reinforcement (e.g., VR, VI) may produce behavior with more resistance to extinction than fixed schedules (e.g., FR, FI).

15. To a degree, the thinner the intermittent schedule of reinforcement is, the greater the resistance to extinction will be.

16. Resistance to extinction is likely to increase with the strength of the establishing operation (EO) for the reinforcer being withheld.

17. The number, magnitude, and quality of the reinforcer may affect resistance to extinction.

18. Successive applications of conditioning and extinction may influence the resistance to extinction.

19. The effort required for a response apparently influences its resistance to extinction.

Using Extinction Effectively

20. A first step in using extinction effectively is to withhold all reinforcers that are maintaining the problem behavior.

21. When the reinforcing consequences have been identified, teachers must withhold them consistently.

22. Applied behavior analysts should always consider combining extinction with other procedures.

23. Behaviors often decrease more quickly during extinction when the person is informed of the procedure being applied.

24. When using extinction, applied behavior analysts should plan for extinction-produced aggression.

25. Increasing the number of extinction trials improves the efficiency of extinction by accelerating the extinction process.

26. All people in contact with the learner must apply the same extinction procedure for effective treatment.

27. Extinction should not be used for behaviors that are likely to be imitated by others or for behaviors that are harmful to self or others.


Key Terms

differential reinforcement of alternative behavior (DRA)
differential reinforcement of incompatible behavior (DRI)
differential reinforcement of low rates (DRL)
differential reinforcement of other behavior (DRO)
fixed-interval DRO (FI-DRO)
fixed-momentary DRO (FM-DRO)
full-session DRL
interval DRL
spaced-responding DRL
variable-interval DRO (VI-DRO)
variable-momentary DRO (VM-DRO)

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List,© Third Edition

Content Area 9: Behavior Change Procedures

9-6 Use differential reinforcement.

© 2006 The Behavior Analyst Certification Board, Inc.,® (BACB®) all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

From Chapter 22 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.

Differential Reinforcement

Practitioners can choose from among a wide range of effective procedures for decreasing or eliminating problem behaviors. Although often effective in reducing targeted problem behaviors, interventions based primarily on extinction or punishment may produce unwanted side effects. Maladaptive emotional behavior and a higher-than-usual rate of responding are commonly observed when a behavior with a long and consistent history of reinforcement is placed on extinction. Punishment may evoke escape, avoidance, aggression, and other forms of undesirable countercontrol. In addition to unwanted side effects, another limitation of extinction and punishment as primary methods for reducing problem behaviors is that neither approach strengthens or teaches adaptive behaviors with which individuals can attain the reinforcers previously achieved by the undesirable behaviors. Finally, beyond the possibility of unwanted side effects and their lack of educative value, interventions that rely on extinction alone or punishment in any of its varied forms raise important ethical and legal concerns (Repp & Singh, 1990).

Because of all these concerns, applied behavior analysts have developed effective reinforcement-based procedures for reducing problem behaviors (e.g., Kazdin, 1980; Lerman & Vorndran, 2002; Singh & Katz, 1985). These positive reductive procedures are based on differential reinforcement to diminish or eliminate problem behaviors.

Basic Description of Differential Reinforcement

All applications of differential reinforcement entail reinforcing one response class and withholding reinforcement for another response class.1 When used as a reductive procedure for problem behavior, differential reinforcement consists of two components: (a) providing reinforcement contingent on either the occurrence of a behavior other than the problem behavior or the problem behavior occurring at a reduced rate, and (b) withholding reinforcement as much as possible for the problem behavior. Although implementing differential reinforcement involves extinction, as Cowdery, Iwata, and Pace (1990) pointed out,

differential reinforcement does not involve extended interruptions of ongoing activities (e.g., time out), contingent removal of positive reinforcers (e.g., response cost), or presentation of aversive stimuli (e.g., punishment). These characteristics make differential reinforcement the least intrusive of all behavior interventions and probably account for its widespread popularity. (p. 497)

1 Differential reinforcement is not only for reducing problem behaviors. As described elsewhere in this text, differential reinforcement is a defining feature of shaping new behaviors. Various forms of differential reinforcement contingencies are also used as experimental control procedures (see Thompson & Iwata, 2005).

Differential reinforcement in its various forms is one of the most effective, widely known, and commonly used techniques to reduce problem behavior. The four most researched variations of differential reinforcement for decreasing inappropriate behavior are differential reinforcement of incompatible behavior (DRI), differential reinforcement of alternative behavior (DRA), differential reinforcement of other behavior (DRO), and differential reinforcement of low rates (DRL). This chapter defines, gives examples of applications of, and suggests guidelines for the effective use of each of these differential reinforcement procedures for decreasing problem behavior.

Differential Reinforcement of Incompatible Behavior and Differential Reinforcement of Alternative Behavior

DRI and DRA have the dual effects of weakening the problem behavior while simultaneously strengthening acceptable behaviors that are either incompatible with or an alternative to the targeted problem behaviors. When properly implemented as a treatment for problem behavior, differential reinforcement of an incompatible or alternative behavior can be conceptualized as a schedule of reinforcement in which two concurrent operants, the inappropriate behavior targeted for reduction and the appropriate behavior selected, receive reinforcement at different rates (Fisher & Mazur, 1997). Because the differential reinforcement schedule favors the appropriate behavior, the client allocates more responding to the appropriate behavior and less responding to the problem behavior, which is placed on extinction (Vollmer, Roane, Ringdahl, & Marcus, 1999). For example, when Friman (1990) reinforced the in-seat behavior of a preschool student with hyperactivity, out-of-seat behavior decreased markedly.
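The concurrent-operants view above is commonly analyzed with the matching law, which predicts that the share of responding allocated to each alternative tracks its share of the reinforcement obtained. The sketch below is an illustrative simplification; the function name and example reinforcement rates are assumptions, not data from the studies cited.

```python
def matching_allocation(r_alt, r_problem):
    """Simple matching law: predicted proportion of responding allocated to
    the alternative behavior equals its proportion of total reinforcement,
    B_alt / (B_alt + B_prob) = R_alt / (R_alt + R_prob)."""
    return r_alt / (r_alt + r_problem)

# If DRA delivers 9 reinforcers for the alternative behavior for every 1 the
# problem behavior obtains, matching predicts 90% appropriate responding.
print(matching_allocation(9, 1))
```

An equal split of reinforcers, matching_allocation(1, 1), predicts indifference (0.5); the more the schedule favors the appropriate behavior, the more responding shifts toward it.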

With the proper selection of behaviors, these two interventions may promote educational, social, and personal skill development. With DRI/DRA, the practitioner controls the development of appropriate behaviors and concurrently measures both the problem behavior and the desired replacement behavior. Teachers, therapists, and parents have a long history of using DRI/DRA interventions in education, treatment, and everyday social interactions. Practitioners usually find DRI/DRA the easiest of the four differential reinforcement procedures to apply.

Differential Reinforcement of Incompatible Behavior

A practitioner applying differential reinforcement of incompatible behavior (DRI) reinforces a behavior that cannot occur simultaneously with the problem behavior and withholds reinforcement following instances of the problem behavior. The behavior that gets reinforced (e.g., a student seated at his seat) and the problem behavior that is placed on extinction (e.g., student is out of seat) are mutually exclusive response classes whose different topographies make it impossible to emit both behaviors at the same time.

Dixon, Benedict, and Larson (2001) used DRI to treat the inappropriate verbal behavior of Fernando, a 25-year-old man with moderate mental retardation and a psychotic disorder. Inappropriate vocalizations included utterances not related to the context, sexually inappropriate remarks, illogical placement of words within a sentence, and "psychotic" statements (e.g., "There's a purple moose on my head named Chuckles," p. 362). The researchers defined appropriate verbal behavior as any vocal utterance that did not meet the defining characteristics of inappropriate vocalization. A functional analysis revealed that the inappropriate verbal statements were maintained by social attention.

The researchers implemented DRI by ignoring Fernando's inappropriate verbal behavior and attending to his appropriate statements with 10 seconds of comments that were relevant to Fernando's statements. For example, if Fernando said something about an activity he liked to do, the experimenter told him it was interesting and hoped he could do it again soon. During a comparison baseline condition, inappropriate behavior resulted in attention and appropriate verbal behavior was ignored. The DRI intervention effectively reduced Fernando's inappropriate verbal behavior and increased appropriate utterances (Figure 1).

Differential Reinforcement of Alternative Behavior

Differential reinforcement of alternative behavior (DRA) and DRI are similar procedures. A practitioner using differential reinforcement of alternative behavior (DRA) reinforces occurrences of a behavior that provides a desirable alternative to the problem behavior but is not necessarily incompatible with it. Behavior analysts can use an alternative behavior to occupy the time that the problem behavior would ordinarily use. The alternative behavior and the problem behavior, however, are not topographically incompatible. For example, a classroom teacher could assign two students who frequently argue with each other to work on a class project together, thus reinforcing cooperative behaviors associated with project development. Working together on a class project is not incompatible with arguing; the two response classes could occur together. However, the two students might argue less when engaged in cooperative behaviors.2

2 Any behavior selected for reinforcement with DRI provides the learner with an "alternative" to the problem behavior, but not all behaviors selected for reinforcement in a DRA are incompatible with the problem behavior. Sometimes the difference between DRI and DRA is a fine one and provokes legitimate debate. For example, the interventions used in studies by Dixon and colleagues (2001) and Wilder, Masuda, O'Conner, and Baham (2001) were described as applications of DRA. However, because appropriate verbal behavior was defined in each study as vocal utterances that did not meet the defining characteristics of inappropriate vocalizations, the response classes they reinforced were incompatible with the problem behavior. If the "alternative" behavior selected for reinforcement cannot occur simultaneously with the problem behavior, we believe the procedure is properly labeled DRI.

Roane, Lerman, and Vorndran (2001) selected placing a plastic block in a bucket as an alternative behavior to screaming (i.e., a brief vocal sound above conversational levels). Lerman, Kelley, Vorndran, Kuhn, and LaRue (2002) shaped and maintained touching a communication card as an alternative behavior to disruptions such as throwing task materials and aggression.

When escape from a task or demand situation is used as the reinforcer in a differential reinforcement procedure for reducing inappropriate behavior, the intervention is sometimes called differential negative reinforcement of alternative (or incompatible) behavior (DNRA or DNRI) (Vollmer & Iwata, 1992). DRI/DRA interventions can use negative reinforcers also (Marcus & Vollmer, 1995; Piazza, Moses, & Fisher, 1996). Applying DNRA/DNRI to reduce the occurrence of a problem behavior maintained by escape from a task or demand context consists of providing negative reinforcement of the alternative behavior in the form of brief periods of escape from the task and escape extinction for the problem behavior. Positive reinforcement for the alternative/incompatible behavior is often provided as well.

Lalli, Casey, and Kates (1995) allowed participants to escape from a task for 30 seconds contingent on using an alternative response (e.g., giving the therapist a card with "BREAK" printed on it, or saying "no") as an alternative to their aberrant behavior. The researchers also provided praise for using the alternative responses. As training progressed, the participants' escape from a task was contingent on using the trained verbal responses and completing a gradually increasing number of required steps of the task. The DNRA intervention increased the use of the trained verbal response and decreased the problem behavior.3

Guidelines for Using DRI/DRA

Practitioners often use DRI and DRA as interventions for a wide range of problem behaviors. Many researchers and practitioners have discovered that the following guidelines improve the effectiveness of DRI and DRA.

Select Incompatible/Alternative Behavior

Ideally, the behavior selected to be incompatible with or alternative to the inappropriate behavior (a) already exists in the learner's current repertoire; (b) requires equal, or preferably less, effort than the problem behavior; (c) is being emitted at a rate prior to the DRI/DRA intervention that will provide sufficient opportunities for reinforcement; and (d) is likely to be reinforced in the learner's natural environment after intervention is terminated. Behaviors meeting these criteria will increase the initial effectiveness of DRI/DRA and facilitate the maintenance and generalization of behavior changes after intervention is terminated.

When possible, the practitioner should differentially reinforce incompatible/alternative behaviors that will lead to or increase the opportunities for the learner to acquire new or other useful skills. When no behaviors meeting those criteria can be identified, the practitioner can select an alternative behavior that can be easily taught or consider using a DRO procedure. Figure 2 shows some examples of incompatible or alternative behaviors (Webber & Scheuermann, 1991).

3 DRA is a primary component of functional communication training (FCT), an intervention in which an alternative behavior is selected to serve the same communicative function as the problem behavior (e.g., saying "Break please" results in the same reinforcement as aggression or tantrumming did previously). FCT is described in detail in the chapter entitled "Antecedent Interventions."

Select Reinforcers That Are Powerful and Can Be Delivered Consistently

Perhaps the greatest threat to the effectiveness of any reinforcement-based intervention is the use of stimulus changes following behaviors that the practitioner only assumes function as reinforcement. Providing as consequences for the incompatible/alternative behavior stimulus changes identified by stimulus preference assessment and reinforcer assessment or functional behavior assessments will increase the effectiveness of DRI/DRA. In addition, reinforcers should be selected that are relevant to the establishing operations that naturally exist in the treatment setting or that can be created (e.g., by deprivation prior to treatment sessions).

The same consequence that is maintaining the problem behavior prior to intervention is often the most effective reinforcer for the alternative or incompatible behavior (Dixon et al., 2001; Wilder et al., 2001). After discovering that the problem behaviors emitted by different students in a classroom were maintained by different reinforcers, Durand, Crimmins, Caufield, and Taylor (1989) effectively used individually selected reinforcers to differentially reinforce appropriate alternative behaviors for each student, and the inappropriate behaviors decreased in frequency.

The effectiveness of any intervention involving differential reinforcement depends on the practitioner's ability to consistently deliver and withhold stimulus changes that currently function as reinforcement. Therefore, in addition to their potential effectiveness as reinforcers, the stimulus changes that the practitioner selects must be ones he can deliver immediately and consistently when the alternative/incompatible behavior occurs and withhold following instances of the problem behavior.

Figure 1 Number of appropriate and inappropriate verbal utterances by an adult male during DRI and baseline conditions. Adapted from "Functional Analysis and Treatment of Inappropriate Verbal Behavior" by M. R. Dixon, H. Benedict, & T. Larson (2001), Journal of Applied Behavior Analysis, 34, p. 362. Copyright 2001 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.

The magnitude of the reinforcer used in a differential reinforcement intervention is probably less important than its consistent delivery and control. Lerman, Kelley, Vorndran, Kuhn, and LaRue (2002) provided positive reinforcement (access to toys) and negative reinforcement (escape from demands) in magnitudes ranging from 20 seconds to 300 seconds. They found that reinforcement magnitude affected responding only minimally during treatment and had little effect on maintenance after treatment was discontinued.

Reinforce Incompatible/Alternative Behavior Immediately and Consistently

Initially, a practitioner should use a continuous (CRF) schedule of reinforcement for the incompatible or alternative behavior, and then transition to an intermittent schedule. The reinforcer should be presented consistently and immediately following each occurrence of the incompatible or alternative behavior. After firmly establishing the incompatible or alternative behavior, the practitioner should gradually thin the reinforcement schedule.

Withhold Reinforcement for the Problem Behavior

The effectiveness of differential reinforcement as an intervention for problem behavior depends on the incompatible or alternative behavior yielding a higher rate of reinforcement than the problem behavior. Maximizing the difference between rates of reinforcement obtained by the two response classes entails withholding all reinforcement for the problem behavior (i.e., extinction schedule).

Ideally, the alternative behavior would always be reinforced (at least initially) and the problem behavior would never be reinforced. As Vollmer and colleagues (1999) noted, "Perfect implementation of differential reinforcement entails providing reinforcers as immediately as possible after an appropriate behavior occurs (e.g., within 5 s). Treatment effects may degrade as the delay to reinforcement increases, especially if inappropriate behavior is occasionally reinforced" (p. 21). However, practitioners must often implement DRI/DRA procedures in less than optimal conditions, in which some occurrences of the alternative/incompatible behavior are not followed by a reinforcer and the problem behavior sometimes inadvertently produces reinforcement.

Results from a study by Vollmer and colleagues (1999) suggest that even when such treatment "mistakes" occur, differential reinforcement may still be effective. These researchers compared a "full implementation" of DRA, in which 100% of instances of alternative behavior were reinforced (CRF) and 0% of instances of aberrant behavior were reinforced (extinction), with various levels of "partial implementation." For example, with a 25/75 implementation schedule, only one in every four instances of appropriate behavior was reinforced, whereas a reinforcer followed three of every four instances of inappropriate behavior. As expected, full implementation of differential reinforcement produced the greatest effects, with inappropriate behavior being "virtually replaced by appropriate behavior, and lower levels of implementation eventually reducing treatment efficacy if the schedule of reinforcement favored inappropriate behavior" (p. 20). Figure 3 shows the results for Rachel, a 17-year-old girl with profound mental retardation who engaged in self-injurious behavior (e.g., head hitting and hand biting) and aggression toward others (e.g., scratching, hitting, hair pulling).

Figure 2 Examples of behaviors selected as incompatible/alternative behaviors when implementing DRI/DRA for common classroom behavior problems.

Talking back: Positive response such as "Yes, sir" or "OK" or "I understand"; or acceptable questions such as "May I ask you a question about that?" or "May I tell you my side?"
Cursing: Acceptable exclamations such as "darn," "shucks."
Being off-task: Any on-task behavior: looking at book, writing, looking at teacher, etc.
Being out of seat: Sitting in seat (bottom on chair, with body in upright position).
Noncompliance: Following directions within _____ seconds (time limit will depend upon age of student); following directions by second time direction is given.
Talking out: Raising hand and waiting quietly to be called on.
Turning in messy papers: No marks other than answers.
Hitting, pinching, kicking, pushing/shoving: Using verbal expression of anger; pounding fist into hand; sitting or standing next to other students without touching them.
Tardiness: Being in seat when bell rings (or by desired time).
Self-injurious or self-stimulatory behaviors: Sitting with hands on desk or in lap; hands not touching any part of body; head up and not touching anything (desk, shoulder, etc.).
Inappropriate use of materials: Holding/using materials appropriately (e.g., writing only on appropriate paper, etc.).

Adapted from "Accentuate the Positive . . . Eliminate the Negative" by J. Webber and B. Scheuermann, 1991, Teaching Exceptional Children, 24(1), p. 15. Copyright 1991 by Council for Exceptional Children. Used by permission.

Figure 3 Number of appropriate and inappropriate responses per minute (upper panel) and allocation of appropriate and inappropriate behavior (lower panel) by a 17-year-old student with disabilities during full (100/0) and partial levels of implementation of DRA. Adapted from "Evaluating Treatment Challenges with Differential Reinforcement of Alternative Behavior" by T. R. Vollmer, H. S. Roane, J. E. Ringdahl, and B. A. Marcus, 1999, Journal of Applied Behavior Analysis, 32, p. 17. Copyright 1999 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.

An important and perhaps surprising finding of the Vollmer and colleagues (1999) study was that partial implementation was effective under certain conditions. During partial implementation, the participants' behavior showed a "disproportional tendency toward appropriate behavior" (p. 20) if they had prior exposure to full implementation. The researchers concluded that this finding suggests the possibility of intentionally thinning implementation levels prior to extending differential reinforcement interventions to settings in which maintaining treatment fidelity will be difficult. Recognizing that treatment effects might erode over time, they recommended that full implementation booster sessions be conducted periodically to reestablish the predominance of appropriate behavior.

Combine DRI/DRA with Other Procedures

Considering that the DRI/DRA interventions do not specifically provide consequences for the problem behavior, a practitioner will seldom apply DRI/DRA (or DRL) as a single intervention if the problem behavior is destructive, dangerous to the learner or to others, or if it interferes with health and safety. In these situations, the practitioner might combine DRI/DRA with other reductive procedures (e.g., response blocking, time-out, stimulus fading) to produce a more potent intervention (e.g., Patel, Piazza, Kelly, Ochsner, & Santana, 2001).

For example, Ringdahl and colleagues (2002) compared the effects of DRA with and without instructional fading on the frequency of problem behaviors of an 8-year-old girl, Kristina, who had been diagnosed with autism and functioned in the moderate range of mental retardation. Kristina's problem behaviors during academic tasks (destruction: throwing, tearing, or breaking work materials; aggression: hitting, kicking, biting; and self-injurious behavior: biting herself) had become so severe that she had been admitted to a hospital day treatment program for help. Because a previous functional analysis

Differential Reinforcement

Figure 4 Problem behavior by an 8-year-old student with autism and developmental disabilities (left y axis) and instructions given per minute (right y axis) under DRA only and DRA implemented with instructional fading. From "Differential Reinforcement with and without Instructional Fading" by J. E. Ringdahl, K. Kitsukawa, M. S. Andelman, N. Call, L. C. Winborn, A. Barretto, & G. K. Reed, 2002, Journal of Applied Behavior Analysis, 35, p. 293. Copyright 2002 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.

revealed that Kristina's problem behavior was maintained by escape from instructions to complete tasks, the DRA procedure in both conditions consisted of giving Kristina a 1-minute break from the task and access to leisure materials contingent on independent completion of the task (e.g., counting items, matching cards with the same colors or shapes) without exhibiting problem behavior. In the DRA without instructional fading condition, Kristina was given an instruction to complete a work task at a rate of approximately one instruction every other minute. In the DRA with instructional fading, Kristina was given no instructions during the first three sessions, then one instruction every 15 minutes. The rate of instructions was gradually increased until it equaled the rate in the DRA without fading condition. Problem behavior was high at the outset of the DRA without instructional fading condition but decreased across sessions to a mean of 0.2 responses per minute over the final three sessions (see Figure 4). During DRA with instructional fading, problem behavior was low from the outset and occurred during only two sessions (Sessions 9 and 14). Over the final three sessions of DRA with instructional fading, Kristina exhibited no problem behavior and the instruction rate was equal to the DRA without instructional fading (0.5 instructions per minute).

Differential Reinforcement of Other Behavior

A practitioner using differential reinforcement of other behavior (DRO) delivers a reinforcer whenever the problem behavior has not occurred during or at specific times (see momentary DRO later in this chapter). As described by Reynolds (1961), DRO provides "reinforcement for not responding" (p. 59). Because reinforcement is contingent on the absence or omission of target behavior, DRO is sometimes called differential reinforcement of zero responding or omission training (e.g., Weiher & Harman, 1975).4

4 Because instances of target problem behavior in DRO and DRL (to be discussed later in this chapter) often postpone reinforcement, some behavior analysts have suggested that DRO and DRL be conceptualized as negative punishment procedures rather than reinforcement techniques. For example, Van Houten and Rolider (1990) consider DRO and DRL variations of a procedure they called contingent reinforcement postponement (CRP).

The delivery of reinforcement with DRO is determined by a combination of how the omission requirement is implemented and scheduled. The omission requirement can make reinforcement contingent on the problem behavior not occurring either (a) throughout an entire interval of time (interval DRO), or (b) at specific moments of time (momentary DRO). With an interval DRO, reinforcement is delivered if no occurrences of the problem behavior were observed throughout the entire interval. Any instance of the target behavior resets the interval, thereby postponing reinforcement. With a momentary DRO procedure, reinforcement is contingent on the absence of the problem behavior at specific points in time.

The researcher or practitioner can determine whether the omission requirement has been met (i.e., at the ends of intervals or at specific moments in time) according to a fixed or variable schedule. The combinations of these two factors—an interval or momentary omission requirement implemented on either a fixed or a variable schedule—yield the four basic DRO arrangements shown in Figure 5 (Lindberg, Iwata, Kahng, & DeLeon, 1999).

Figure 5 Four basic variations of DRO that can be created by altering the schedule (fixed or variable) and omission requirement (interval or momentary): Fixed-Interval DRO (FI-DRO), Variable-Interval DRO (VI-DRO), Fixed-Momentary DRO (FM-DRO), and Variable-Momentary DRO (VM-DRO). Adapted from "DRO Contingencies: An Analysis of Variable-Momentary Schedules" by J. S. Lindberg, B. A. Iwata, S. W. Kahng, & I. G. DeLeon, 1999, Journal of Applied Behavior Analysis, 32, p. 125. Copyright 1999 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.

The following sections introduce procedures for applying the fixed-interval DRO, variable-interval DRO, fixed-momentary DRO, and variable-momentary DRO schedules. Most researchers and practitioners use the fixed-interval DRO as an intervention for problem behaviors. However, a growing number of researchers and practitioners are exploring applications of variable-interval DRO and momentary DRO schedules.

Interval DRO

Fixed-Interval DRO (FI-DRO)

Most applications of interval DRO apply the omission requirement at the end of successive time intervals of equal duration. To apply a fixed-interval DRO (FI-DRO) procedure, a practitioner (a) establishes an interval of time; (b) delivers reinforcement at the end of that interval if the problem behavior did not occur during the interval; and (c) upon any occurrence of the problem behavior, immediately resets the timer to begin a new interval.5 For example, Allen, Gottselig, and Boylan (1982) applied an FI-DRO procedure as a group contingency to decrease the disruptive classroom behaviors of third-grade students.6 A kitchen timer was set to 5-minute intervals and continued to run as long as no disruptive behaviors occurred. If any student engaged in disruptive behavior at any time during the 5-minute interval, the timer was reset and a new 5-minute interval began. When the timer signaled the end of a 5-minute interval without disruptive behavior, the class was awarded 1 minute of free time, which they accumulated and used at the end of the class period.
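The three-step FI-DRO procedure can be sketched as a simple resetting timer. The following Python sketch is illustrative only (the class and parameter names are ours, not from the studies cited), and it assumes observations arrive in discrete time slices:

```python
# A minimal sketch of FI-DRO logic: a timer runs toward a fixed interval;
# any problem behavior resets it, and reinforcement is delivered only when
# a full interval passes behavior-free (as with the kitchen timer above).

class FixedIntervalDRO:
    def __init__(self, interval_s):
        self.interval_s = interval_s          # e.g., 300 s (5 minutes)
        self.elapsed = 0
        self.reinforcers_delivered = 0

    def tick(self, seconds, problem_behavior_occurred):
        """Advance the timer by one observed slice; return True if a
        reinforcer is delivered at the end of this slice."""
        if problem_behavior_occurred:
            self.elapsed = 0                  # any occurrence restarts the interval
            return False
        self.elapsed += seconds
        if self.elapsed >= self.interval_s:
            self.elapsed = 0                  # interval met: deliver reinforcement
            self.reinforcers_delivered += 1
            return True
        return False

dro = FixedIntervalDRO(interval_s=300)
print(dro.tick(200, problem_behavior_occurred=False))  # False: interval not yet met
print(dro.tick(150, problem_behavior_occurred=True))   # False: timer resets to 0
print(dro.tick(300, problem_behavior_occurred=False))  # True: 300 s behavior-free
```

The reset in step (c) is what distinguishes interval DRO from a simple fixed-time schedule: every occurrence of the problem behavior postpones reinforcement by a full interval.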

The DRO interval and treatment session length can be increased gradually as the person's behavior improves. Cowdery, Iwata, and Pace (1990) used FI-DRO to help

5 A DRO contingency can also be applied using data from permanent product measurement. For example, Alberto and Troutman (2006) described a procedure in which a teacher provided reinforcement each time a student submitted a paper that contained no doodles. If the papers were basically of the same length or type, the procedure would be a variation of the fixed-interval DRO.
6 Group contingencies are described in the chapter entitled "Contingency Contracting, Token Economy, and Group Contingencies."

Jerry, a 9-year-old who had never attended school and spent most of his time hospitalized because he scratched and rubbed his skin so often and hard that he had open lesions all over his body. During treatment, the experimenter left the room and watched Jerry through an observation window. Cowdery and colleagues initially made praise and token reinforcement contingent on 2-minute "scratch-free" intervals, which were determined by baseline assessment. If Jerry scratched during the interval, the experimenter entered the room and told him she regretted he had not earned a penny and asked him to try again. Jerry's SIB immediately decreased to zero when DRO was first implemented and remained low as the DRO interval and session length were gradually increased (from three 2-minute intervals to three 4-minute intervals). Following a brief return to baseline conditions in which Jerry's scratching quickly increased, DRO was reinstated with five intervals per session and session length was increased in 1-minute increments. Jerry could now earn a total of 10 tokens per session: a token for each interval in which he did not scratch plus a 5-token bonus if he refrained from scratching for all five intervals. Jerry's SIB gradually reduced to zero as session length and DRO intervals were extended (see Figure 6). Session 61 consisted of a single 15-minute DRO interval, which was followed by 25- and 30-minute intervals.

Variable-Interval DRO (VI-DRO)

When reinforcement is delivered contingent on the absence of the targeted problem behavior during intervals of varying and unpredictable durations, a variable-interval DRO (VI-DRO) schedule is in effect. For example, on a VI-DRO 10-second schedule, reinforcement would be delivered contingent on the omission of the behavior throughout intervals of varying duration that average 10 seconds (e.g., a random sequence of 2-second, 5-second, 8-second, 15-second, and 20-second intervals).
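One way to produce such a sequence is to shuffle a pool of intervals whose mean equals the nominal schedule value. A minimal sketch (the function name and the seeded shuffle are our assumptions, using the text's example pool):

```python
# Sketch: generate an unpredictable VI-DRO 10-s interval sequence by
# shuffling a pool of durations whose mean is the schedule's nominal value.

import random

def vi_dro_sequence(interval_pool_s, seed=None):
    """Return the pool of interval durations in a randomized order."""
    sequence = list(interval_pool_s)
    random.Random(seed).shuffle(sequence)
    return sequence

pool = [2, 5, 8, 15, 20]       # the text's example intervals; mean = 10 s
print(sum(pool) / len(pool))   # 10.0
print(vi_dro_sequence(pool, seed=42))
```

Whatever the order, the learner cannot predict when the next omission check will come, which is the defining property of the variable schedule.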

Chiang, Iwata, and Dorsey (1979) used a VI-DRO intervention to decrease the inappropriate behaviors during bus rides to and from school of a 10-year-old student

Figure 6 Percentage of intervals of SIB by Jerry during baseline and DRO and duration of DRO sessions (right y axis). Other data points during the first baseline phase show results of a functional analysis confirming that Jerry's SIB occurred most often when he was left alone. From "Effects and Side Effects of DRO as Treatment for Self-injurious Behavior" by G. Cowdery, B. A. Iwata, and G. M. Pace, 1990, Journal of Applied Behavior Analysis, 23, 497–506. Copyright 1990 by the Society for the Experimental Analysis of Behavior, Inc. Used by permission.

with developmental disabilities. The boy's disruptive behaviors included aggression (e.g., slapping, poking, hitting, kicking), being out of seat, stereotypic behaviors, and inappropriate vocalizations (e.g., screaming, yelling). Instead of basing the DRO intervals on elapsed time, the researchers used an interesting procedure they described as a "distance-based" DRO schedule. The driver divided the bus route into sections designated by landmarks such as stop signs and traffic lights and mounted a four-digit hand counter on the dashboard of the bus. The counter was within reach of the driver and within view of the student. At each predetermined landmark the bus driver praised the student's behavior and added one point to the hand counter if no disruptive behaviors had been emitted during the DRO interval. Upon arrival at home or school the driver recorded the number of points earned on a card and gave the card to the boy's foster father or teacher. The student exchanged the points earned for rewards (e.g., access to toys, home and school privileges, small snacks).

Chiang and colleagues used a two-tier multiple baseline across settings design to evaluate the VI-DRO intervention. During baseline, disruptive behavior ranged from 20 to 100% with an average of 66.2% for the afternoon rides, and from 0 to 92% with an average of 48.5% in the mornings. When the VI-DRO procedure was implemented on the afternoon rides only, disruptive behavior decreased immediately to an average of 5.1% (range of 0 to 40%) on the afternoon rides but remained unchanged during the morning rides. When the intervention was also applied in the mornings, all disruptive behavior was eliminated.

Progar and colleagues (2001) implemented VI-DRO as one component of a conjunctive schedule of reinforcement to reduce the aggressive behaviors of Milty, a 14-year-old boy with autism. Reinforcement is delivered on a conjunctive schedule when the requirements of both schedule components have been met; in this case, a fixed-ratio (FR) 3 schedule for task completion and a variable-interval 148-second schedule for the absence of aggression. Milty received an edible reinforcer each time he completed three components of a task (e.g., making his bed, vacuuming, straightening objects in his room) in the absence of aggression for the duration of the variable interval. An occurrence of aggression prior to completing the FR 3 requirement reset both components of the conjunctive schedule. This intervention produced a substantial reduction of aggressive behavior.
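The conjunctive contingency can be expressed as two counters that must both reach criterion before a reinforcer is delivered, with aggression resetting both. This sketch is illustrative (the class and method names are ours), and it simplifies the variable interval to a single fixed duration equal to the 148-second mean:

```python
# Sketch of a conjunctive FR 3 + interval-DRO contingency: reinforcement
# requires BOTH three completed task components AND an elapsed interval
# with no aggression; any aggression resets both components.

class ConjunctiveFRDRO:
    def __init__(self, fr_requirement, interval_s):
        self.fr_requirement = fr_requirement  # e.g., 3 task components
        self.interval_s = interval_s          # simplified: fixed 148 s
        self.tasks_done = 0
        self.elapsed = 0

    def record_task(self):
        self.tasks_done += 1
        return self._check()

    def record_time(self, seconds):
        self.elapsed += seconds
        return self._check()

    def record_aggression(self):
        # aggression resets BOTH schedule components
        self.tasks_done = 0
        self.elapsed = 0

    def _check(self):
        # reinforcer delivered only when both requirements are met
        if self.tasks_done >= self.fr_requirement and self.elapsed >= self.interval_s:
            self.tasks_done = 0
            self.elapsed = 0
            return True
        return False

sched = ConjunctiveFRDRO(fr_requirement=3, interval_s=148)
print(sched.record_time(148))   # False: interval elapsed, but FR 3 not yet met
sched.record_task(); sched.record_task()
print(sched.record_task())      # True: both requirements now satisfied
```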

Momentary DRO

Fixed-momentary DRO (FM-DRO) and variable-momentary DRO (VM-DRO) schedules use the same procedures as interval DRO (FI-DRO, VI-DRO) except that reinforcement is contingent on the absence of the problem behavior only when each interval ends, rather than throughout the entire interval as with the whole-interval DRO.
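The contrast with interval DRO can be sketched as sampling the behavior stream only at interval boundaries. The helper below is illustrative (names are ours, not a procedure from the studies cited):

```python
# Sketch of the momentary DRO check: reinforcement is earned at each check
# time only if behavior is absent at that exact moment -- regardless of
# what happened earlier in the interval.

def momentary_dro_check(behavior_seconds, check_times_s):
    """behavior_seconds: set of moments (s) at which problem behavior occurred.
    Returns one True/False reinforcement decision per scheduled check."""
    return [t not in behavior_seconds for t in check_times_s]

# FM-DRO 10 s: checks at 10, 20, 30 s; behavior occurred at 4 s and 20 s
print(momentary_dro_check({4, 20}, [10, 20, 30]))  # [True, False, True]
```

Note that the behavior at 4 s costs nothing under the momentary rule, whereas under interval DRO it would have reset the first interval.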

Lindberg, Iwata, Kahng, and DeLeon (1999) used a VM-DRO intervention to help Bridget, a 50-year-old woman with severe developmental disabilities who often banged her head and hit herself on the head and body. A functional analysis indicated that Bridget's self-injurious

Figure 7 Responses per minute of self-injurious behavior (SIB) during baseline and treatment conditions. From "DRO Contingencies: An Analysis of Variable-Momentary Schedules" by J. S. Lindberg, B. A. Iwata, S. W. Kahng, & I. G. DeLeon, 1999, Journal of Applied Behavior Analysis, 32, p. 131. Copyright 1999 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

behaviors (SIB) were maintained by social-positive reinforcement. During the VM-DRO intervention, her SIB was placed on extinction, and Bridget received 3 to 5 seconds of attention from the therapist if she was not banging her head or hitting herself when each interval ended. Bridget did not have to refrain from SIB throughout the interval, only at the end of the interval. As shown in Figure 7, when a VM-DRO 15-second schedule was implemented for five sessions following baseline, Bridget's SIB decreased abruptly from baseline levels to almost zero. Following a return to baseline in which Bridget's SIB increased, the researchers returned to a VM-DRO treatment condition of 11 seconds, which was later thinned to 22 seconds, and then to the maximum target interval of 300 seconds.

Guidelines for Using DRO

Interval DRO has been used more widely than momentary DRO. Some researchers have found that interval DRO is more effective than momentary DRO for suppressing problem behavior, and that momentary DRO might be most useful in maintaining reduced levels of problem behavior produced by interval DRO (Barton, Brulle, & Repp, 1986; Repp, Barton, & Brulle, 1983). Lindberg and colleagues (1999) noted two potential advantages of VM-DRO schedules over FI-DRO schedules. First, the VM-DRO schedule appears more practical because the practitioner does not need to monitor the participant's behavior at all times. Second, data obtained by these researchers showed that participants obtained higher overall rates of reinforcement with VM-DRO than they did with FI-DRO.

In addition to the importance of selecting potent reinforcers, we recommend the following guidelines for the effective use of DRO.

Recognize the Limitations of DRO

Although DRO is often positive and highly effective for reducing problem behaviors, it is not without shortcomings. With interval DRO, reinforcement is delivered contingent only on the absence of the problem behavior during the interval, even though another inappropriate behavior might have occurred during that time. For instance, let us assume that a 20-second interval DRO is to be implemented to reduce the facial tics of an adolescent with Tourette's syndrome. Reinforcement will be delivered at the end of each 20-second interval containing no facial tics. However, if the adolescent engages in cursing behavior at any time during the interval or at the end of the interval, reinforcement will still be delivered. It is possible that reinforcement delivered contingent on the absence of facial tics will occur in close temporal proximity to cursing, thereby inadvertently strengthening another inappropriate behavior. In such cases, the length of the DRO interval should be shortened and/or the definition of the problem behavior expanded to include the other undesirable behaviors (e.g., reinforcement contingent on the absence of facial tics and cursing).

With momentary DRO, reinforcement is delivered contingent on the problem behavior not occurring at the end of each interval, even though the inappropriate behavior might have occurred throughout most of an interval. Continuing the previous example, on an FM-DRO 20-second schedule, reinforcement will be delivered at each successive 20th second if facial tics are not occurring at those precise moments, even if tics occurred for 50%, 75%, or even 95% of the interval. In such circumstances, the practitioner should use interval DRO and reduce the length of the DRO interval.

DRO may not always be successful, and practitioners must be alert to their data to make informed decisions. For instance, Linscheid, Iwata, Ricketts, Williams, and Griffin (1990) found that the systematic application of neither DRO nor DRI eliminated the chronic self-injurious behavior of five people with developmental disabilities. A more intrusive punishment procedure eventually had to be instituted to eliminate the behavior.

Set Initial DRO Intervals That Assure Frequent Reinforcement

Practitioners should establish an initial DRO time interval that ensures that the learner's current level of behavior will contact reinforcement when the DRO contingency is applied. For example, Cowdery and colleagues (1990) initially made praise and token reinforcement contingent on 2-minute "scratch-free" intervals because that was "the longest we had previously seen Jerry refrain from scratching while left alone" (p. 501). Beginning with an interval that is equal to or slightly less than the mean baseline interresponse time (IRT) usually will make an effective initial DRO interval. To calculate a mean IRT, the practitioner divides the total duration of all baseline measurements by the total number of responses recorded during baseline. For example, if a total of 90 responses were recorded during 30 minutes of baseline measurement, the mean IRT would be 20 seconds (i.e., 1,800 seconds ÷ 90 responses). Using these baseline data as a guide, the practitioner could set the initial DRO interval at 20 seconds or less.
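The IRT arithmetic above can be captured in a one-line helper (the function name is ours; the numbers are the text's example):

```python
# Sketch: mean interresponse time (IRT) used to set the initial DRO interval.

def mean_irt(total_baseline_seconds, total_responses):
    """Total baseline observation time divided by total responses recorded."""
    return total_baseline_seconds / total_responses

# the text's example: 90 responses during 30 minutes of baseline
initial_interval = mean_irt(30 * 60, 90)
print(initial_interval)  # 20.0 -> set the initial DRO interval at 20 s or less
```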

Do Not Inadvertently Reinforce Other Undesirable Behaviors

Reinforcement on a "pure" DRO schedule is contingent on a very general response class that is defined only by the absence of the targeted problem behavior. Consequently, a "pure" DRO is typically used as a treatment of choice for serious behavior problems that occur at very high rates by people whose current repertoires provide few, if any, other behaviors that might function as alternative or incompatible behaviors, and for whom just about anything else they might do is less of a problem than the target behavior.

Because DRO does not require that any certain behaviors be emitted for reinforcement, whatever the person is doing when reinforcement is delivered is likely to occur more often in the future. Therefore, practitioners using DRO must be careful not to inadvertently strengthen other undesirable behaviors. When using DRO, the practitioner should deliver reinforcement at the intervals or moments in time specified by the schedule contingent on the absence of the problem behavior and the absence of any other significant inappropriate behaviors.

Gradually Increase the DRO Interval

The practitioner can increase the DRO interval after the initial DRO interval effectively controls the problem behavior (i.e., thinning the reinforcement schedule). The DRO schedule should be thinned by increasing the interval through a series of initially small and gradually increasing increments.

Poling and Ryan (1982) suggested three procedures for increasing the duration of the DRO interval.

1. Increase the DRO interval by a constant duration of time. For example, a practitioner could, at each opportunity for an interval increase, increase the DRO interval by 15 seconds.

2. Increase intervals proportionately. For example, a practitioner, at each opportunity to change the interval, could increase the DRO interval by 10%.

3. Change the DRO interval each session based on the learner's performance. For example, a practitioner could set the DRO interval for each session equal to the mean IRT from the preceding session.
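The three thinning rules above can be sketched as small functions (the function names, and the 60-second starting interval in the usage lines, are illustrative; the increments come from the text):

```python
# Sketch of Poling and Ryan's (1982) three DRO interval-thinning rules.

def thin_constant(interval_s, increment_s=15):
    """Rule 1: add a constant duration (e.g., 15 s) at each increase."""
    return interval_s + increment_s

def thin_proportional(interval_s, proportion=0.10):
    """Rule 2: grow the interval proportionately (e.g., by 10%)."""
    return interval_s * (1 + proportion)

def thin_by_performance(session_seconds, responses_last_session):
    """Rule 3: next session's interval = mean IRT of the preceding session."""
    return session_seconds / responses_last_session

print(thin_constant(60))                  # 75
print(round(thin_proportional(60), 1))    # 66.0
print(thin_by_performance(600, 12))       # 50.0
```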

If problem behavior worsens when a larger DRO interval is introduced, the practitioner can decrease the duration of the interval to a level that again controls the problem behavior. The DRO interval can then be extended in smaller, more gradual increments after previously obtained reductions in problem behavior have been reestablished.

Extend the Application of DRO to Other Settings and Times of Day

When the frequency of the problem behavior is reduced substantially in the treatment setting, the DRO intervention can be introduced during other activities and times in the person's natural environment. Having teachers, parents, or other caregivers begin to deliver reinforcement on the DRO schedule will help extend the effects. For example, after Jerry's SIB was under good control during treatment sessions (see Figure 6), Cowdery and colleagues (1990) began to implement the DRO intervention at other times and places during the day. At first, various staff members would praise Jerry and give him tokens for refraining from scratching during single 30-minute DRO intervals during leisure activities, instructional sessions, or free time. Then they added additional 30-minute DRO intervals until the DRO contingency was extended to all of Jerry's waking hours. Jerry's SIB decreased sufficiently to allow him to be discharged to his home for the first time in two years. Jerry's parents were taught how to implement the DRO procedure at home.

Combine DRO with Other Procedures

DRO can be used as a single intervention. However, as with DRI/DRA, including DRO in a treatment package with other behavior-reduction procedures often yields more efficient and effective behavior change. De Zubicaray and Clair (1998) used a combination of DRO, DRI, and a restitution (punishment) procedure to reduce the physical and verbal abuse and physical aggression of a 46-year-old woman with mental retardation living in a residential facility for people with chronic psychiatric disorders. Further, the results of three experiments by Rolider and Van Houten (1984) showed that an intervention consisting of DRO plus reprimands was more effective than DRO alone in decreasing a variety of problem behaviors by children (e.g., physical abuse by a 4-year-old girl toward her baby sister, thumb sucking by a 12-year-old boy, and bedtime tantrums).

DRO can also be added as a supplement to an intervention that has produced insufficient results. McCord, Iwata, Galensky, Ellingson, and Thomson (2001) added DRO to a stimulus fading intervention that was "having limited success" in decreasing the problem behaviors of Sarah, a 41-year-old woman with severe mental retardation and visual impairment. Her problem behaviors included severe self-injurious behavior, property destruction, and aggression evoked by noise. While gradually increasing the noise volume by 2 dB when DRO was in effect, "a therapist delivered a preferred edible item (half a cheese puff) to Sarah following each 6-s interval in which problem behavior was not observed. If problem behavior occurred, the food was withheld and the DRO interval was reset" (p. 455). Each time Sarah completed three consecutive 1-minute sessions with no occurrences of the problem behavior, the researchers increased the DRO interval by 2 seconds. This schedule thinning continued until Sarah could complete the session with no occurrences of the problem behavior. The therapist then delivered the edible item only at the end of the session. The addition of DRO to the intervention produced an immediate decrease in Sarah's problem behavior, which remained at or near zero in the treatment setting for more than 40 sessions. Probes conducted in noisy conditions after treatment ended showed that the treatment effects had maintained and generalized to Sarah's home.

Differential Reinforcement of Low Rates of Responding

In laboratory research, Ferster and Skinner (1957) found that delivering a reinforcer following a response that had been preceded by increasingly longer intervals of time without a response reduced the overall rate of responding. When reinforcement is applied in this manner as an intervention to reduce the occurrences of a target behavior, the procedure is called differential reinforcement of low rates (DRL). Because reinforcement with a DRL schedule is delivered following an occurrence of the target behavior, as contrasted with DRO in which reinforcement is contingent on the complete absence of the behavior, behavior analysts use DRL to decrease the rate of a behavior that occurs too frequently, but not to eliminate the behavior entirely. A practitioner may identify a certain behavior as a problem behavior not because of the form of that behavior, but because the behavior occurs too often. For example, a student raising his hand or coming to the teacher's desk to ask for help during independent seatwork is a good thing, but it becomes a problem behavior if it occurs excessively.

Some learners may emit a behavior less frequently when told to do the behavior less often, or when given rules stating an appropriate rate of response. When instruction alone does not diminish occurrences of the problem behavior, the practitioner may need to use a consequence-based intervention. DRL offers practitioners an intervention for diminishing problem behaviors that is stronger than instruction alone, but still falls short of more restrictive consequences (e.g., punishment). Deitz (1977) named and described three DRL procedures: full-session DRL, interval DRL, and spaced-responding DRL.

Full-Session DRL

In a full-session DRL procedure, reinforcement is delivered at the end of an instructional or treatment session if, during the entire session, the target behavior occurred at a number equal to or below a predetermined criterion. If the number of responses exceeds the specified limit during the session, reinforcement is withheld. For example, a teacher applying a full-session DRL with a criterion limit of four disruptive acts per class period would provide reinforcement at the end of the class period contingent on four or fewer disruptive acts. Full-session DRL is an effective, efficient, and easy-to-apply intervention for problem behaviors in education and treatment settings.
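The full-session decision rule reduces to a single comparison at the end of the session. An illustrative helper (the function name is ours; the criterion of four comes from the text's example):

```python
# Sketch of the full-session DRL rule: reinforce at session's end only if
# the session's total count is at or below the criterion limit.

def full_session_drl(response_count, criterion):
    """True if reinforcement is earned for the whole session."""
    return response_count <= criterion

# the text's example: criterion limit of four disruptive acts per class period
print(full_session_drl(4, 4))  # True: four or fewer -> reinforcer delivered
print(full_session_drl(5, 4))  # False: limit exceeded -> reinforcement withheld
```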

Figure 8 Rate of talk-outs during baseline and treatment phases for a student with developmental disabilities. From "Decreasing Classroom Misbehavior Through the Use of DRL Schedules of Reinforcement" by S. M. Deitz and A. C. Repp, 1973, Journal of Applied Behavior Analysis, 6, p. 458. Copyright 1973 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Deitz and Repp (1973) demonstrated the efficacy and manageability of full-session DRL for diminishing classroom misbehavior by decreasing the talk-outs of an 11-year-old boy with developmental disabilities. During 10 days of baseline, the student averaged 5.7 talk-outs per 50-minute session. With the introduction of full-session DRL, the student was allowed 5 minutes for play at the end of the day contingent on 3 or fewer talk-outs during a 50-minute session. Talk-outs during the DRL condition decreased to an average of 0.93 per 50 minutes. A return to baseline slightly increased the talk-outs to an average of 1.5 per session (see Figure 8).

Interval DRL

To apply an interval DRL schedule of reinforcement, the practitioner divides a total session into a series of equal intervals of time and provides reinforcement at the end of each interval in which the number of occurrences of the problem behavior during that interval is equal to or below a criterion limit. The practitioner removes the opportunity for reinforcement and begins a new interval if the learner exceeds the criterion number of responses during the time interval. For example, if a DRL criterion limit is four problem behaviors per hour, the practitioner delivers a reinforcer at the end of each 15 minutes contingent on the occurrence of no more than one problem behavior. If the participant emits a second problem behavior during the interval, the practitioner immediately begins a new 15-minute interval. Beginning a new interval postpones the opportunity for reinforcement.
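The reset-and-postpone logic of interval DRL can be sketched as follows (the class and method names are ours; the 15-minute interval and one-response limit come from the text's example):

```python
# Sketch of interval DRL: reinforce at each interval's end if the count is
# within the limit; exceeding the limit immediately starts a new interval,
# postponing the opportunity for reinforcement.

class IntervalDRL:
    def __init__(self, interval_s, limit):
        self.interval_s = interval_s   # e.g., 900 s (15 minutes)
        self.limit = limit             # e.g., no more than 1 response
        self.count = 0
        self.elapsed = 0

    def record_response(self):
        self.count += 1
        if self.count > self.limit:
            # criterion exceeded: begin a new interval right away
            self.count = 0
            self.elapsed = 0

    def record_time(self, seconds):
        self.elapsed += seconds
        if self.elapsed >= self.interval_s:
            earned = self.count <= self.limit
            self.count = 0
            self.elapsed = 0
            return earned              # True -> deliver the reinforcer
        return None                    # interval still in progress

drl = IntervalDRL(interval_s=900, limit=1)
drl.record_response()          # one response: still within the limit
print(drl.record_time(900))    # True: reinforcer at the interval's end
```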

7 Because reinforcement in a spaced-responding DRL immediately follows an instance of the target behavior, it is the applied variation of DRL that most closely resembles the DRL schedules of reinforcement described by Ferster and Skinner (1957).

Deitz (1977) initially defined the interval DRL schedule of reinforcement using a criterion of one response or no responses per interval. However, Deitz and Repp (1983) reported programming an interval DRL schedule with a criterion above one per interval: "A response which occurs an average of 14 times per hour could be limited to three occurrences per 15-minute interval. The response limit of three could then be lowered to two responses per interval, and then to one response per interval" (p. 37).

Deitz and colleagues (1978) used an interval DRL schedule of reinforcement to reduce the disruptive behaviors of a student with learning disabilities. The 7-year-old student had several difficult classroom misbehaviors (e.g., running, shoving, pushing, hitting, throwing objects). The student received a sheet of paper ruled into 15 blocks, with each block representing a 2-minute interval. A star was placed in a block each time the student completed 2 minutes with 1 or no misbehaviors. Each star permitted the student to spend 1 minute on the playground with the teacher. If the student emitted 2 misbehaviors during the interval, the teacher immediately started a new 2-minute interval.

Effective application of interval DRL (as well as the spaced-responding DRL procedure described in the following section) requires continuous monitoring of problem behavior, careful timing, and frequent reinforcement. Without the help of an assistant, many practitioners may have difficulty applying the interval DRL procedure in group settings. Interval and spaced-responding DRL procedures are quite reasonable for one-on-one instruction or when competent assistance is available.

Spaced-Responding DRL

Using a spaced-responding DRL schedule of reinforcement, the practitioner delivers a reinforcer following an occurrence of a response that is separated by at least a minimum amount of time from the previous response.7

Interresponse time (IRT) is the technical term for the duration of time between two responses. IRT and rate of responding are inversely related: the longer the IRT, the lower the overall rate of responding; shorter IRTs correlate with higher response rates. When reinforcement is contingent on increasingly longer IRTs, response rate will decrease.
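The IRT contingency can be expressed compactly in code. This is a hypothetical sketch: the function name is illustrative, and treating the very first response (which has no prior IRT) as reinforceable is an assumption.

```python
def spaced_responding_drl(response_times, min_irt):
    """Return indices of responses that produce reinforcement under a
    spaced-responding DRL schedule: a response is reinforced only when
    its IRT (time since the previous response) is at least `min_irt`.
    """
    reinforced, last = [], None
    for i, t in enumerate(sorted(response_times)):
        if last is None or (t - last) >= min_irt:
            reinforced.append(i)
        last = t              # every response resets the IRT clock
    return reinforced
```

Note that a too-early response is not reinforced but still resets the clock, which is what makes longer pauses, and hence lower rates, the reinforced pattern.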

Differential Reinforcement

Favell, McGimsey, and Jones (1980) used a spaced-responding DRL schedule of reinforcement and response prompts to decrease the rapid eating of four persons with profound developmental disabilities. At the start of treatment, reinforcement was contingent on short independent pauses (IRTs) between bites of food, and then gradually required longer and longer pauses between bites. The researchers also manually prompted a separation between bites of food and faded out those response prompts when a minimum of 5-second pauses occurred independently between approximately 75% of all bites. Finally, Favell and colleagues gradually thinned food reinforcement and praise. The frequency of eating was decreased from a pretreatment baseline average of 10 to 12 bites per 30 seconds to 3 to 4 bites per 30 seconds during the spaced-responding DRL condition.

Most behavior-reduction procedures that manipulate consequences have the potential to reduce behavior to zero occurrences. Spaced-responding DRL is unlikely to do that. For that reason, spaced-responding DRL becomes an important intervention for diminishing behavior: A spaced-responding DRL contingency tells learners that their behavior is acceptable, but they should do it less often. For example, a teacher could use a spaced-responding DRL schedule to diminish a student's problem behavior of asking too many questions. The questions occurred so frequently that they interfered with classroom learning and teaching. To intervene, the teacher could respond to the student's question only if the student had not asked a question during the preceding 5 minutes. This spaced-responding DRL intervention could decrease, yet not eliminate, asking questions.

Singh, Dawson, and Manning (1981) used a spaced-responding DRL intervention to reduce the stereotypic responding (e.g., repetitive body movements, rocking) of three teenage girls with profound mental retardation. During the first phase of the spaced-responding DRL intervention, the therapist praised each girl whenever she emitted a stereotypic response at least 12 seconds after the last response. One of the experimenters timed the intervals and used a system of automated lights to signal the therapist when reinforcement was available. After the DRL 12-second IRT resulted in an abrupt decrease in stereotypic behavior by all three girls (see Figure 9), Singh and colleagues systematically increased the IRT criterion to 30, 60, and then 180 seconds. The spaced-responding DRL procedure not only produced substantial reductions in stereotypic responding in all three subjects but also had the concomitant effect of increasing appropriate behavior (e.g., smiling, talking, playing).

Guidelines for Using DRL

Several factors influence the effectiveness of the three DRL schedules in diminishing problem behaviors. The following guidelines address those factors.

Recognize the Limitations of DRL

If a practitioner needs to reduce an inappropriate behavior quickly, DRL would not be the method of first choice. DRL is slow. Reducing the inappropriate behavior to appropriate levels may take more time than the practitioner can afford. Further, DRL would not be advisable for use with self-injurious, violent, or potentially dangerous behaviors. Finally, from a practical standpoint, using DRL means that the practitioner must focus on the inappropriate behavior. A teacher who is not cautious, for example, may give too much attention to the inappropriate behavior, inadvertently reinforcing it.

Choose the Most Appropriate DRL Procedure

Full-session, interval, and spaced-responding DRL schedules provide different levels of reinforcement for learners. Of the three DRL procedures, only spaced-responding DRL delivers reinforcement immediately following the occurrence of a specific response; a response must be emitted after a minimum IRT has elapsed before reinforcement is available. Practitioners use spaced-responding DRL to reduce the occurrences of a behavior while maintaining that behavior at a lower rate.

With full-session and interval DRL, a response does not need to occur for the participant to receive the reinforcer. Practitioners may apply full-session or interval DRL when it is acceptable for the rate of the problem behavior to reach zero, or as an initial step toward the goal of eliminating the behavior.

Spaced-responding and interval DRL usually produce reinforcement at a higher rate than full-session DRL does. Arranging frequent contact with the reinforcement contingency is especially appropriate, and most often necessary, for learners with severe problem behaviors.

Use Baseline Data to Guide the Selection of the Initial Response or IRT Limits

Practitioners can use the mean number of responses emitted during baseline sessions, or slightly lower than that average, as the initial full-session DRL criterion. For example, 8, 13, 10, 7, and 12 responses recorded per session over five baseline sessions produce a mean of 10 responses. Therefore, 8 to 10 responses per session makes an appropriate initial full-session DRL criterion.

Figure 9 Effects of spaced-responding DRL on stereotypic responding of three teenage girls with profound mental retardation. (Vertical axis: percent of intervals; horizontal axis: sessions.) From "Effects of Spaced Responding DRL on the Stereotyped Behavior of Profoundly Retarded Persons" by N. N. Singh, M. J. Dawson, and P. Manning, 1981, Journal of Applied Behavior Analysis, 38, p. 524. Copyright 1981 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Interval DRL and spaced-responding DRL initial time criteria can be set at the baseline mean or slightly lower. For example, 1 response per 15 minutes makes an acceptable initial interval DRL criterion as calculated from a baseline mean of 4 responses per 60-minute session. With the same baseline data (i.e., 4 responses per 60-minute session), it appears reasonable to use 15 minutes as the initial IRT criterion for a spaced-responding DRL schedule. That is, a response will produce reinforcement only if it is separated from the prior response by a minimum of 15 minutes.
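The worked examples above can be reproduced with a small helper. This is a hypothetical sketch: the function name, the choice to set the full-session limit at the mean rather than slightly below it, and the rounding are illustrative assumptions.

```python
def initial_drl_criteria(baseline_counts, session_min=60.0):
    """Derive starting DRL criteria from baseline session counts.

    Returns (full_session_limit, interval_minutes, irt_minutes): the
    full-session criterion is set at the baseline mean, and the session
    is divided so the baseline mean corresponds to about one response per
    interval; that same duration serves as the initial IRT criterion for
    spaced-responding DRL.
    """
    mean = sum(baseline_counts) / len(baseline_counts)
    full_session_limit = int(mean)          # the mean, or slightly lower
    minutes_per_response = session_min / mean
    return full_session_limit, minutes_per_response, minutes_per_response
```

The text's baseline of 8, 13, 10, 7, and 12 responses gives a full-session criterion of 10, and a baseline mean of 4 responses per 60-minute session gives 15-minute interval and IRT criteria.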

Gradually Thin the DRL Schedule

Practitioners should gradually thin the DRL schedule to achieve the desired final rate of responding. Practitioners commonly use three procedures for thinning the initial DRL time criterion.

1. With full-session DRL, the practitioner can set a new DRL criterion using the participant's current DRL performances. Another option is to set a new DRL criterion at slightly less than the mean number of responses emitted during recent DRL sessions.

2. With interval DRL, the practitioner can gradually decrease the number of responses per interval if the current criterion is more than one response per interval, or gradually increase the duration of the criterion interval if the current criterion is one response per interval.

3. With spaced-responding DRL, the practitioner can adjust the IRT criterion based on the mean IRT of recent sessions, or slightly less than that average. For example, Wright and Vollmer (2002) set the IRT for the DRL component of a treatment package that successfully reduced the rapid eating of a 17-year-old girl at the mean IRT of the previous five sessions. The researchers did not exceed an IRT of 15 seconds because it was not necessary to reduce the girl's eating below a rate of four bites per minute.

Practitioners who successfully thin DRL schedules of reinforcement make only gradual, but systematic, changes in the time and response criteria associated with the full-session, interval, and spaced-responding DRL variations.

Two possible decision rules for thinning the DRLschedule are:

Rule 1: Practitioners may want to change the DRL criterion whenever the learner meets or exceeds the criterion during three consecutive sessions.

Rule 2: Practitioners may want to change the DRL criterion whenever the learner receives reinforcement for at least 90% of the opportunities during three consecutive sessions.
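The two decision rules can be sketched as a single function. The function and parameter names are illustrative, not from the text; the rule labels and the values (three consecutive sessions, 90% of opportunities) come from the guidelines above.

```python
def should_thin(sessions, rule="consecutive_meets", n=3, pct=0.90):
    """Decide whether to tighten the DRL criterion.

    For "consecutive_meets" (Rule 1), `sessions` is a list of booleans,
    True when the learner met or bettered the criterion that session.
    For "reinforcement_pct" (Rule 2), it is a list of per-session
    proportions of reinforcement opportunities that ended in reinforcement.
    """
    recent = sessions[-n:]
    if len(recent) < n:
        return False
    if rule == "consecutive_meets":      # Rule 1
        return all(recent)
    if rule == "reinforcement_pct":      # Rule 2
        return all(p >= pct for p in recent)
    raise ValueError(f"unknown rule: {rule}")
```

Either rule keeps criterion changes gradual and systematic: the schedule is thinned only after sustained success at the current criterion.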

Provide Feedback to the Learner

The effectiveness of a DRL procedure can be enhanced by feedback to help the learner monitor her rate of responding. Full-session, interval, and spaced-responding DRL procedures provide different levels of feedback for participants. The most accurate feedback comes with spaced-responding DRL because reinforcement follows each response that meets the IRT criterion. When a response occurs that does not meet the IRT criterion, reinforcement is withheld, the time interval is immediately reset, and a new interval begins. This process provides learners with immediate feedback on responding per interval.

Interval DRL also provides a high level of feedback, although less than that of spaced-responding DRL. Interval DRL provides the learner with two types of feedback. The first problem behavior does not provide feedback. The second response, however, resets the time interval, providing a consequence for the problem behavior. Reinforcement occurs at the end of the interval when one or no problem behaviors occurred during the interval. These two types of feedback improve the effectiveness of the interval DRL intervention (Deitz et al., 1978).

Applied behavior analysts can arrange full-session DRL with or without feedback. The usual arrangement does not provide feedback concerning the moment-to-moment accumulation of responses. Deitz (1977) stated that with full-session DRL, learners respond only to the DRL criterion: If learners exceed the criterion, they lose the opportunity for reinforcement. Once the learner loses the opportunity for reinforcement, he could then emit high rates of misbehavior without consequence. When the schedule is arranged without moment-to-moment feedback, learners usually stay well below the DRL limit. Full-session DRL without moment-to-moment feedback may not be as effective for learners with severe problem behaviors as spaced-responding and interval DRL. The effectiveness of full-session DRL relies heavily on an initial verbal description of the contingencies of reinforcement (Deitz, 1977).

Summary

Basic Description of Differential Reinforcement

1. When used as a reductive procedure for problem behavior, differential reinforcement consists of two components: (a) providing reinforcement contingent on either the occurrence of a behavior other than the problem behavior or the problem behavior occurring at a reduced rate, and (b) withholding reinforcement as much as possible for the problem behavior.

Differential Reinforcement of Incompatible Behavior and Differential Reinforcement of Alternative Behavior

2. Differential reinforcement of an incompatible or alternative behavior can be conceptualized as a schedule of reinforcement in which two concurrent operants—the inappropriate behavior targeted for reduction and the appropriate behavior selected—receive reinforcement at different rates.

3. DRI and DRA have the dual effect of weakening the problem behavior while simultaneously strengthening acceptable behaviors that are either incompatible with or an alternative to the targeted problem behaviors.

4. With DRI, reinforcement is delivered for a behavior topographically incompatible with the target problem behavior and withheld following instances of the problem behavior.

5. With DRA, reinforcement is delivered for a behavior that serves as a desirable alternative to the target problem behavior and withheld following instances of the problem behavior.

6. When using DRI/DRA, practitioners should do the following:

• Select incompatible or alternative behaviors that are present in the learner's repertoire, require equal or less effort than the problem behavior, are being emitted prior to intervention with sufficient frequency to provide opportunities for reinforcement, and are likely to produce reinforcement when the intervention ends.

• Select potent reinforcers that can be delivered when the alternative/incompatible behavior occurs and withheld following instances of the problem behavior. The same consequence that has been maintaining the problem

behavior prior to intervention is often the most effective reinforcer for DRI/DRA.

• Reinforce the alternative/incompatible behavior on a continuous reinforcement schedule initially and then gradually thin the schedule of reinforcement.

• Maximize the difference between rates of reinforcement for the alternative/incompatible behavior and the problem behavior by putting the problem behavior on an extinction schedule.

• Combine DRI/DRA with other reductive procedures to produce a more potent intervention.

Differential Reinforcement of Other Behavior

7. With DRO, reinforcement is contingent on the absence of the problem behavior during specific intervals of time (i.e., interval DRO) or at specific moments in time (i.e., momentary DRO).

8. On an interval DRO schedule, reinforcement is delivered at the end of specific intervals of time if the problem behavior did not occur during the interval.

9. On a momentary DRO schedule, reinforcement is delivered at specific moments in time if the problem behavior is not occurring at those times.

10. Availability of reinforcement with interval and momentary DRO procedures can occur on fixed- or variable-time schedules.

11. When using DRO, practitioners should do the following:

• Establish an initial DRO time interval that ensures that the learner's current level of behavior will produce frequent reinforcement when the DRO contingency is applied.

• Be careful not to inadvertently reinforce other inappropriate behaviors.

• Deliver reinforcement at the intervals or moments in time specified by the DRO schedule contingent on the absence of the problem behavior and the absence of any other significant inappropriate behaviors.

• Gradually increase the DRO interval based on decreases in the problem behavior.

• Extend DRO to other settings and times of day after the problem behavior is substantially reduced in the treatment setting.

• Combine DRO with other reductive procedures.

Differential Reinforcement of Low Rates of Responding

12. DRL schedules produce low, consistent rates of responding.

13. Reinforcement on a full-session DRL schedule is delivered when responding during an entire instructional or treatment session is equal to or below a criterion limit.

14. On an interval DRL schedule, the total session is divided into equal intervals and reinforcement is provided at the end of each interval in which the number of responses during the interval is equal to or below a criterion limit.

15. Reinforcement on a spaced-responding DRL schedule follows each occurrence of the target behavior that is separated from the previous response by a minimum interresponse time (IRT).

16. When using DRL, practitioners should do the following:

• Not use DRL if a problem behavior needs to be reduced quickly.

• Not use DRL with self-injurious or other violent behaviors.

• Select the most appropriate DRL schedule: full-session or interval DRL when it is acceptable for the rate of the problem behavior to reach zero, or as an initial step toward the goal of eliminating the behavior; spaced-responding DRL for reducing the rate of a behavior to be maintained in the learner's repertoire.

• Use baseline data to guide the selection of the initial response or IRT limits.

• Gradually thin the DRL schedule to achieve the desired final rate of responding.

• Provide feedback to help the learner monitor the rate of responding.


Key Terms

antecedent intervention
behavioral momentum
fixed-time schedule (FT)
functional communication training (FCT)
high-probability (high-p) request sequence
noncontingent reinforcement (NCR)
variable-time schedule (VT)

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List,© Third Edition

Content Area 9: Behavior Change Procedures

9-1 Use antecedent-based interventions, such as: contextual or ecological variables, establishing operations, and discriminative stimuli.

9-5 Use response-independent (time-based) schedules of reinforcement.

9-23 Use behavioral momentum.

9-26 Use language acquisition/communication training procedures.

© 2006 The Behavior Analyst Certification Board, Inc.,® (BACB®) all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

Antecedent Interventions


From Chapter 23 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


Applied behavior analysts traditionally emphasized the three-term contingency: how consequences affect behavior, and how differential consequences produce stimulus discrimination (SD) and stimulus control. Rarely did applied behavior analysts address how an antecedent event itself affected behavior. However, this situation changed following the publication of two greatly influential articles: one on establishing operations (Michael, 1982) and the other on functional analysis (Iwata, Dorsey, Slifer, Bauman, & Richman, 1982/1994). The convergence of motivating operations (MOs) and functional behavior assessments allowed applied behavior analysts to conceptually align applied research on the effects of antecedent conditions other than stimulus control (e.g., SDs) to basic principles of behavior. This convergence advanced our understanding of how antecedents affect behavior, a historically underemphasized area of applied behavior analysis.

Practitioners have long used antecedents effectively to develop desirable behaviors, to diminish problem behaviors, and to design environments that select adaptive behaviors in social, academic, leisure, and work environments. Arguably, teachers may use antecedent interventions more frequently than consequence arrangements (e.g., reinforcement, punishment, extinction) to change behavior. The long-established practice of arranging antecedent conditions for behavior change adds support to an increasing experimental database of desirable behavioral outcomes that are functionally related to changing antecedent conditions (e.g., Wilder & Carr, 1998). Table 1 shows common behavior challenges and the antecedent interventions that practitioners have used to address them.

Defining Antecedent Interventions

Conceptual Understanding of Antecedent Interventions

Some textbooks and journal articles classify all antecedent-based behavior change strategies under a single term such as antecedent procedures, antecedent control, antecedent manipulations, or antecedent interventions. Although economical, using the same term to identify interventions based on stimulus control (SDs) and interventions involving motivating operations (MOs) can lead to confusion about, or failure to recognize, the different functions of these antecedent events. SDs evoke behavior because they have been correlated with increased availability of reinforcement. The evocative function of MOs, however, is independent of the differential availability of effective reinforcement. MOs (e.g., an establishing operation) increase the current frequency of certain types of behavior even when an effective reinforcer is not available. Applied behavior analysts should be cognizant of the different factors underlying the evocative functions of SDs and MOs.

In addition to improved conceptual clarity and consistency, understanding the different reasons for the evocative functions of SDs and MOs has important applied implications. Antecedent treatments involving stimulus control must include manipulating consequent

Table 1 Examples of Behavior Challenges and Antecedent Interventions That Might Address Them

• Challenging behavior situation: Disruptive or noncompliant behaviors associated with doing homework or taking a bath.
Antecedent intervention: Provide a choice: "Do you want to do your homework first, or take a bath first?"

• Challenging behavior situation: Poor socialization and communication skills among persons with developmental disabilities eating at a cafeteria-style-arranged dinner table.
Antecedent intervention: Switch the seating arrangement to a family-style configuration.

• Challenging behavior situation: A student misbehaves by disrupting other students.
Antecedent intervention: Move the misbehaving student's desk closer to the teacher and away from the peers bothered by the student's misbehavior.

• Challenging behavior situation: An infant during the creeping and crawling stage can encounter many potentially dangerous events.
Antecedent intervention: Attach gates to stairways, plug electrical outlets, put locks on cabinet doors, remove table lamps.

• Challenging behavior situation: A student causes disruptions when requested to complete a math worksheet with 25 problems.
Antecedent intervention: Present five sets of five problems each.

• Challenging behavior situation: Some students misbehave when entering the resource room. Long transition times are clearly correlated with the misbehavior and lead to many difficult problem behaviors.
Antecedent intervention: The teacher pins an index card for each student on the bulletin board. Each card contains a personalized question. When entering the resource room, students take their cards from the bulletin board, go to their desks, and write a response to the question on the card.


events, changing the differential availability of reinforcement in the presence and absence of the SD. Behavior change strategies based on motivating operations must change antecedent events. Understanding these differences may improve the development of more effective and efficient behavior change efforts involving antecedent events.

Classifying Functions of Antecedent Stimuli

Contingency Dependent

A contingency-dependent antecedent event is dependent on the consequences of behavior for developing evocative and abative effects. All stimulus control functions are contingency dependent. For example, in the presence of 2 + 2 = ?, a student responds 4, not because of the stimulus 2 + 2 = ? alone, but because of a past reinforcement history for saying 4, including perhaps a history of non-reinforcement for making any other response except 4. This chapter uses the term antecedent control to identify contingency-dependent antecedent events.

Contingency Independent

A contingency-independent antecedent event is not dependent on the consequences of behavior for developing evocative and abative effects. The antecedent event itself affects behavior–consequence relations. The effects of motivating operations are contingency independent. For example, sleep deprivation can influence the occurrences of problem behaviors in the absence of a history of pairing sleep deprivation with reinforcement or punishment of those behaviors. This chapter uses the term antecedent intervention to identify behavior change tactics based on contingency-independent antecedent events.1

Antecedent Intervention

Abolishing Operations

Applied behavior analysts have used several antecedent interventions, singularly or in treatment packages, to decrease the effectiveness of reinforcers that maintain problem behaviors (i.e., abolishing operations). Table 2 provides examples of antecedent interventions that used abolishing operations to decrease the effectiveness of reinforcers maintaining the problem behaviors, with a corresponding reduction of those behaviors.

Temporary Effects

Smith and Iwata (1997) reminded us that the effects of MOs are temporary. Antecedent interventions by themselves will not produce permanent improvements in behavior. However, while using an antecedent intervention to diminish a problem behavior, a teacher or therapist can simultaneously apply procedures such as extinction to reduce the problem behavior and differential reinforcement of alternative behaviors to compete with the problem behavior. Because of the temporary effects of MOs, antecedent interventions most often serve as one component

Table 2 Examples of Antecedent Interventions Using Abolishing Operations

• Provide corrective prompts as an antecedent event. Example: Antecedent corrective academic prompts reduced destructive behavior to zero (Ebanks & Fisher, 2003).

• Provide presession exposure to stimuli that function as reinforcers. Example: A father–son playtime preceding compliance sessions improved the son's compliance with the father's requests (Ducharme & Rushford, 2001).

• Provide free access to leisure activities. Example: Manipulation of leisure items effectively competed with SIB maintained by automatic reinforcement (Lindberg et al., 2003).

• Reduce noise levels. Example: Reducing noise levels decreased stereotypical covering of the ears with the hands (Tang et al., 2002).

• Change levels of social proximity. Example: Low levels of distant proximity reduced aggressive behaviors (Oliver et al., 2001).

• Offer choices. Example: Escape-maintained problem behaviors decreased when students had opportunities to choose among tasks (Romaniuk et al., 2002).

• Increase response effort. Example: Increasing response effort for pica produced reductions in pica (Piazza et al., 2002).


1 Because the chapter entitled "Stimulus Control" addressed contingency-dependent antecedent events (i.e., stimulus control), this chapter focuses on contingency-independent antecedent events (i.e., MOs).


2 This chapter uses the phrase presenting stimuli with known reinforcing properties to describe the delivery of noncontingent reinforcers. Interpreting the NCR procedure as presenting a reinforcer, or presenting a reinforcer noncontingently on a fixed-time or variable-time schedule, is technically inconsistent with the functional definition of reinforcement (Poling & Normand, 1999). Reinforcement requires a response–reinforcer relation. We use the term NCR in this chapter to describe the time-based procedures for reducing problem behaviors because the discipline of applied behavior analysis has continued its use, and the term NCR serves a useful descriptive purpose.

of a treatment package (e.g., combining an antecedent intervention with extinction, differential reinforcement of alternative behavior, or other procedures). These kinds of treatments may produce maintaining effects.

Three antecedent interventions with established experimental results are noncontingent reinforcement, high-probability request sequence, and functional communication training. The remainder of this chapter will elaborate on these three antecedent interventions and present definitions and guidelines for their effective use.

Noncontingent Reinforcement

Noncontingent reinforcement (NCR) is an antecedent intervention in which stimuli with known reinforcing properties are delivered on a fixed-time (FT) or variable-time (VT) schedule independent of the learner's behavior (Vollmer, Iwata, Zarcone, Smith, & Mazaleski, 1993).2
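The distinction between FT and VT delivery can be sketched by generating the two kinds of delivery times. This is an illustrative sketch, not from the text; the function names are hypothetical, and drawing VT intervals from an exponential distribution is one common modeling choice, not a requirement of the procedure.

```python
import random

def ft_schedule(interval_s, session_s):
    """Fixed-time (FT): deliver the stimulus every `interval_s` seconds,
    independent of the learner's behavior."""
    times, t = [], interval_s
    while t <= session_s:
        times.append(t)
        t += interval_s
    return times

def vt_schedule(mean_interval_s, session_s, rng=None):
    """Variable-time (VT): delivery intervals vary around a mean, still
    delivered independent of behavior."""
    rng = rng or random.Random(0)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(1.0 / mean_interval_s)
        if t > session_s:
            return times
        times.append(t)
```

Both schedules are time based: no response is required for delivery, which is what makes them "noncontingent" in the sense discussed in footnote 2.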

Noncontingent reinforcement may effectively diminish problem behaviors because the reinforcers that maintain the problem behavior are available freely and frequently. This enriched environment with positive stimuli may function as an abolishing operation (AO), reducing the motivation to engage in the problem behavior.

NCR uses three distinct procedures that identify and deliver stimuli with known reinforcing properties: (a) positive reinforcement (i.e., social mediation), (b) negative reinforcement (i.e., escape), and (c) automatic reinforcement (i.e., without social mediation). NCR provides an important and effective intervention for problem behaviors. It is a common treatment for persons with developmental disabilities.

NCR with Positive Reinforcement

Kahng, Iwata, Thompson, and Hanley (2000) provided an excellent example of applying NCR with positive reinforcement. A functional analysis showed that social-positive reinforcement maintained self-injurious behavior (SIB) or aggression in three adults with developmental disabilities. During baseline, each occurrence of SIB or aggression produced attention for two of the adults, and a small bit of food for the third adult. During the initial sessions of NCR, the adults received attention or small bits of food on an initial fixed-time (FT) schedule (e.g., 5 seconds). Later, the schedule was thinned to a terminal criterion of 300 seconds. Figure 1 presents the baseline and NCR performances of the three adults and shows that the NCR procedure effectively decreased occurrences of SIB and aggression.

Figure 1 Number of SIB or aggressive responses per minute during baseline and NCR by three adults with developmental disabilities (Julia, Susan, and Matt); an extinction condition is also shown. From "A Comparison of Procedures for Programming Noncontingent Reinforcement Schedules" by S. W. Kahng, B. A. Iwata, I. G. DeLeon, & M. D. Wallace, 2000, Journal of Applied Behavior Analysis, 33, p. 426. Copyright 2000 by the Society for the Experimental Analysis of Behavior. Reproduced by permission.


NCR with Negative Reinforcement

Kodak, Miltenberger, and Romaniuk (2003) analyzed the effects of NCR escape on the instructional task compliance and problem behaviors of Andy and John, 4-year-old boys with autism. Andy's task was to point to cards with teacher-specified pictures, words, or letters on them. John used a marker to trace each letter of written words. Problem behaviors included resisting prompts, throwing materials, and hitting. During baseline, the therapist gave an instruction for task engagement and, contingent on problem behavior following the instruction, removed the task materials and turned away from the child for 10 seconds. During the NCR escape condition, the therapist used an initial FT 10-second schedule for escape, meaning that the student had a break from the instructional requests every 10 seconds of the session. The initial 10-second FT schedule was thinned each time the boy achieved a criterion for two consecutive sessions: 10 seconds to 20 seconds, to 30 seconds, to 1 minute, to 1.5 minutes, and finally to a terminal criterion of 2 minutes. The NCR escape procedure increased compliance and decreased problem behaviors.
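The thinning progression described in that study can be sketched as follows. The step values come from the passage above; the function name, the boolean session representation, and holding at the terminal value are illustrative assumptions.

```python
# FT escape values, in seconds, following the progression described
# above (10 s through a terminal criterion of 2 minutes).
STEPS_S = [10, 20, 30, 60, 90, 120]

def thin_ft_schedule(session_met_criterion):
    """Return the FT value (seconds) in effect for each session, advancing
    one step whenever the criterion is met for two consecutive sessions;
    the schedule holds at the terminal value once it is reached."""
    idx, streak, in_effect = 0, 0, []
    for met in session_met_criterion:
        in_effect.append(STEPS_S[idx])
        streak = streak + 1 if met else 0
        if streak == 2 and idx < len(STEPS_S) - 1:
            idx, streak = idx + 1, 0
    return in_effect
```

A session that misses criterion resets the consecutive-session count, so the schedule only thins after sustained performance at the current FT value.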

NCR with Automatic Reinforcement

Lindberg, Iwata, Roscoe, Worsdell, and Hanley (2003) used NCR as a treatment to decrease the self-injurious behavior (SIB) of two women with profound mental retardation. A functional analysis documented that automatic reinforcement maintained their SIB. The NCR procedure provided Julie and Laura with free access to a variety of home-based, highly preferred leisure items (e.g., beads, string) that they could manipulate throughout the day. Figure 2 shows that NCR object manipulation of preferred leisure items effectively diminished SIB, and the effects were maintained up to a year later. This experiment is important because it showed that NCR object manipulation could compete with automatic reinforcement to reduce the occurrence of SIB.

Using NCR Effectively

Enhancing Effectiveness

The following procedural recommendations identify three key elements for enhancing the effectiveness of NCR. (a) The amount and quality of stimuli with known reinforcing properties influence the effectiveness of NCR. (b) Most treatments include extinction with NCR interventions. (c) Reinforcer preferences can change during intervention; that is, the NCR stimuli may not continue competing with the reinforcers that maintain the problem behavior. DeLeon, Anders, Rodriguez-Catter, and Neider (2000) recommended periodically using a variety of available stimuli with the NCR intervention to reduce problems of changing preferences.

Figure 2 Levels of SIB (responses per minute) and object manipulation (percentage of intervals) exhibited by Julie and Laura during observations at home while NCR was implemented daily, across baseline and noncontingent reinforcement conditions, with follow-up probes at 3 and 12 months. From “Treatment Efficacy of Noncontingent Reinforcement during Brief and Extended Application” by J. S. Lindberg, B. A. Iwata, E. M. Roscoe, A. S. Worsdell, & G. P. Hanley, 2003, Journal of Applied Behavior Analysis, 36, p. 14. Copyright 2003 by the Society for the Experimental Analysis of Behavior. Reproduced by permission.

Antecedent Interventions


Functional Behavior Assessment

The effectiveness of NCR depends on the correct identification of the positive, negative, or automatic reinforcers maintaining the problem behavior. Advances in functional behavior assessment have greatly improved the effectiveness of NCR by facilitating the identification of maintaining contingencies of reinforcement (Iwata et al., 1982/1994).³

Emphasizing NCR

Applied behavior analysts can enhance the effectiveness of an NCR intervention by delivering stimuli with known reinforcing properties at a higher rate than the rate of reinforcement in the non-NCR condition. For example, Ringdahl, Vollmer, Borrero, and Connell (2001) found that NCR was ineffective when the baseline condition and the NCR condition contained a similar amount of reinforcer delivery. NCR was effective, however, when the NCR schedule was denser (i.e., continuous reinforcement) than the baseline schedule. Applied behavior analysts can use the rates of reinforcement during baseline to establish an initial NCR schedule that ensures a discrepancy between baseline and NCR conditions.

Ringdahl and colleagues (2001) suggested three procedures for emphasizing reinforcement during the NCR intervention: (a) increase the delivery of stimuli with known reinforcing properties, (b) use an obviously different schedule of reinforcement at treatment onset (e.g., continuous reinforcement), and (c) combine differential reinforcement of other behavior (DRO) with the NCR treatment package. DRO will decrease the adventitious reinforcement of the problem behavior from the time-based NCR schedule.

Time-Based NCR Schedules

Most applications of NCR use a fixed-time (FT) schedule for the delivery of stimuli with known reinforcing properties: the interval of time between presentations remains the same from delivery to delivery. When applied behavior analysts program the NCR time interval to vary from delivery to delivery, it is called a variable-time (VT) schedule. For example, an NCR VT 10-second schedule means that, on the average, stimuli with known reinforcing properties are presented every 10 seconds. This VT schedule could use time intervals such as 5, 7, 10, 12, and 15 seconds, arranged to occur in random sequence. Even though most applications of NCR procedures have used FT schedules, VT schedules can be effective also (Carr, Kellum, & Chong, 2001).
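A VT schedule of this kind can be sketched in a few lines of Python. This is an illustrative sketch only; the function name and the cycle structure are ours, not from the source. Cycling through the full interval set in random order keeps the programmed mean fixed while the delivery-to-delivery spacing varies.

```python
import random

# Interval values from the text; their mean (9.8 s) approximates
# the nominal VT 10-second value.
INTERVALS = [5, 7, 10, 12, 15]

def vt_schedule(values, cycles):
    """Arrange the interval set in random sequence, repeating the set
    for the requested number of cycles so every value is used equally
    often and the programmed mean stays fixed."""
    schedule = []
    for _ in range(cycles):
        block = list(values)
        random.shuffle(block)   # random sequence within each cycle
        schedule.extend(block)
    return schedule
```

An FT schedule is the degenerate case in which the interval set contains a single value.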

Setting the initial NCR time schedule is an important aspect of the NCR procedure. The initial schedule can have an impact on the effectiveness of the intervention (Kahng, Iwata, DeLeon, & Wallace, 2000). Applied behavior analysts consistently recommend an initial dense FT or VT schedule (e.g., Van Camp, Lerman, Kelley, Contrucci, & Vorndran, 2000). The therapist can set a dense time value (e.g., 4 seconds) arbitrarily. Usually, however, it is more effective to set the initial time value based on the number of occurrences of the problem behavior, which will ensure frequent contact with the NCR stimuli.

The following procedure can be used to determine an initial NCR schedule: Divide the total duration of all baseline sessions by the total number of occurrences of the problem behavior recorded during baseline, and set the initial interval at or slightly below the quotient. For example, if the participant emitted 300 aggressive acts during 5 days of baseline, and each baseline session was 10 minutes in duration, then 3,000 seconds divided by 300 responses produces a quotient of 10 seconds. Accordingly, these baseline data suggest an initial FT interval of 7 to 10 seconds.
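The quotient rule above can be expressed directly. A minimal sketch; the function name is ours, not from the source.

```python
def initial_ncr_interval(total_baseline_seconds, total_responses):
    """Mean seconds per response during baseline; the initial FT or VT
    interval is set at or slightly below this quotient."""
    return total_baseline_seconds / total_responses

# Worked example from the text: 5 baseline sessions of 10 minutes each
# and 300 aggressive acts -> 3,000 s / 300 responses = 10 s.
quotient = initial_ncr_interval(5 * 10 * 60, 300)
```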

Thinning Time-Based Schedules

Applied behavior analysts use a dense FT or VT schedule to begin the NCR procedure. They thin the schedule by adding small time increments to the NCR interval. However, thinning a time-based schedule is best begun only after the initial NCR interval has produced a reduction in the problem behavior.

Applied behavior analysts have used three procedures to thin NCR schedules: (a) constant time increase, (b) proportional time increase, and (c) session-to-session time increase or decrease (Hanley, Iwata, & Thompson, 2001; Van Camp et al., 2000).

Constant Time Increase. A therapist can increase the FT or VT schedule intervals by a constant duration of time, and decrease the amount of time that the learner has access to the NCR stimuli by a constant amount of time. For example, a therapist can increase the schedule interval by 7 seconds at each opportunity, and each time decrease access to the stimuli by 3 seconds.

Proportional Time Increase. A therapist can increase the FT or VT schedule intervals proportionately, meaning that each time the schedule interval is increased by the same proportion of time. For example, each interval is increased by 50% (e.g., 60 seconds = initial FT schedule; first schedule increase = 90 seconds [50% of 60 = 30]; second increase = 135 seconds [50% of 90 = 45]).

Table 3 Possible Advantages and Disadvantages of NCR

Advantages

NCR is easier to apply than other positive reductive techniques, which require monitoring student behavior for the contingent delivery of the reinforcer (Kahng, Iwata, DeLeon, & Wallace, 2000).

NCR helps create a positive learning environment, which is always desirable during treatment.

A package treatment that includes NCR with extinction may reduce extinction-induced response bursts (Van Camp et al., 2000).

Chance pairings of appropriate behavior and NCR delivery of stimuli with known reinforcing properties could strengthen and maintain those desirable behaviors (Roscoe, Iwata, & Goh, 1998).

Disadvantages

Free access to NCR stimuli may reduce motivation to engage in adaptive behavior.

Chance pairings of problem behavior and NCR delivery of stimuli with known reinforcing properties could strengthen the problem behavior (Van Camp et al., 2000).

NCR escape (i.e., negative reinforcement) can disrupt the instructional process.

³The chapter entitled “Functional Behavior Assessment” provides a detailed description of functional behavior assessment.

Session-to-Session Time Increase or Decrease. A therapist can use the learner’s performance to change the schedule interval on a session-to-session basis. For example, at the end of a session, the therapist establishes a new NCR time interval for the next session by dividing the duration of the session by the number of problem behaviors that occurred in that session and using that quotient as the next session’s FT interval.

A therapist will decrease the interval if the problem behavior starts to worsen during schedule thinning. The duration of the NCR interval can be increased again after control of the problem behavior has been reestablished, but in more gradual increments.
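The three thinning procedures can be sketched as simple update rules. The function names and default values are ours, chosen to match the worked examples in the text, and the session-to-session rule follows the same seconds-per-response quotient used to set the initial schedule.

```python
def thin_constant(interval, step=7):
    """Constant time increase: lengthen the NCR interval by a fixed
    number of seconds at each opportunity."""
    return interval + step

def thin_proportional(interval, proportion=0.5):
    """Proportional time increase: lengthen the interval by a fixed
    proportion (a 50% increase takes 60 s to 90 s to 135 s)."""
    return interval * (1 + proportion)

def next_session_interval(session_seconds, problem_behaviors):
    """Session-to-session adjustment: mean seconds per problem behavior
    in the last session becomes the next session's FT interval."""
    return session_seconds / problem_behaviors
```

In practice the next step is taken only after the learner meets a stability criterion at the current interval, and a worsening trend reverses the last step.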

Setting Terminal Criteria

Applied behavior analysts usually select an arbitrary terminal criterion for NCR schedule thinning. Kahng and colleagues (2000) reported that a terminal criterion of a 5-minute FT schedule has been used commonly in applied research, and that it seems to be a practical and effective criterion. In addition, they reported that research has not established an advantage for a terminal criterion of a 5-minute FT schedule over denser schedules (e.g., 3 minutes) or thinner schedules (e.g., 10 minutes).

Considerations for Using NCR

NCR is an effective intervention. In addition to its effectiveness, it has several advantages, as well as some disadvantages. Table 3 lists the advantages and disadvantages of using NCR.

High-Probability Request Sequence

When using a high-probability (high-p) request sequence, the teacher presents a series of easy-to-follow requests for which the participant has a history of compliance (i.e., high-p requests); when the learner complies with several such high-p requests in sequence, the teacher immediately gives the target request (i.e., the low-p request). The behavioral effects of the high-probability request sequence suggest the abative effects of an abolishing operation (AO) by (a) reducing the value of reinforcement for noncompliance to the low-probability requests (i.e., reducing the value of escape from requests), and (b) reducing the aggression and self-injury often associated with low-p requests.

To apply the high-p request sequence, the teacher or therapist selects two to five short tasks with which the learner has a history of compliance. These short tasks provide the responses for the high-p requests. The teacher or therapist presents the high-p request sequence immediately before requesting the target task, the low-p request. Sprague and Horner (1990) used the following dressing anecdote to explain the high-p procedure:

Typical Instruction

Teacher: “Put on the shirt.” (low-p request)

Student: Avoids the hard task by throwing a tantrum

Typical Instruction with High-p Request Sequence

Teacher: “Give me five.” (high-p request)

Student: Slaps teacher’s outstretched hand

Teacher: “All right, nice job! Now, take this ball and put it in your pocket.” (high-p request)

Student: Puts the ball in pocket


Teacher: “Great! That is right! Now, put on the shirt.” (low-p request)

Student: Puts on shirt with assistance

Student compliance provides opportunities for the development of many important behaviors. Noncompliance, however, is a prevalent problem among persons with developmental disabilities and behavior disorders. The high-p request sequence provides a nonaversive procedure for improving compliance by diminishing escape-maintained problem behaviors. The high-p request sequence may decrease excessive slowness in responding to requests and in the time used for completing tasks (Mace et al., 1988).

Engelmann and Colvin (1983) provided one of the first formal descriptions of the high-p request sequence in their compliance training procedure for managing severe behavior problems. They used the term hard task to identify the procedure of giving three to five easy requests immediately before requesting compliance with a difficult task. Applied behavior analysts have used several labels to identify this intervention, including interspersed requests (Horner, Day, Sprague, O’Brien, & Heathfield, 1991), pretask requests (Singer, Singer, & Horner, 1987), and behavioral momentum (Mace & Belfiore, 1990). Currently, most applied behavior analysts identify this antecedent intervention with the label high-p request sequence.

Killu, Sainato, Davis, Ospelt, and Paul (1998) evaluated the effects of the high-p request sequence on compliant responding to low-p requests and occurrences of problem behaviors by three preschool children with developmental delays. They used a compliance criterion of 80% or higher for selecting high-p requests for two children, and a 60% criterion for the third child. Compliance of less than 40% was used to select the low-p requests.

The request sequence began with the experimenter or a trainer presenting three to five high-p requests. When a child complied with at least three consecutive high-p requests, a low-p request was immediately presented. Praise was provided immediately following each compliant response. Figure 3 shows the children’s performances before, during, and after the high-p sequence. The sequence, delivered by two different trainers, increased compliant responding to the low-p requests of the three children, and compliant responding maintained across time and settings.

Figure 3 Number of compliant responses to low-probability requests delivered by the investigator and second trainer across sessions and conditions (baseline, high-probability request sequence, maintenance, and follow-up) for John, Barry, and William. Participants were given 10 low-p requests each session. Dashed lines indicate student absences. From “Effects of High-Probability Request Sequences on Preschoolers’ Compliance and Disruptive Behavior” by K. Killu, D. M. Sainato, C. A. Davis, H. Ospelt, & J. N. Paul, 1998, Journal of Behavioral Education, 8, p. 358. Used by permission.

Figure 4 Considerations for using the high-p request sequence.

1. Do not use the high-p request sequence just after an occurrence of the problem behavior. The student might learn that responding to a low-p request with the problem behavior will produce a series of easier requests.

2. Present the high-p request sequence at the beginning of and throughout the instructional period to reduce the possibility of problem behaviors producing reinforcement (Horner et al., 1991).

3. Teachers might knowingly or unknowingly let instruction drift from low-p requests to only high-p requests, selecting easy tasks to avoid the escape-motivated aggression and self-injury associated with low-p requests (Horner et al., 1991).
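The decision logic of the sequence (present high-p requests, praise each compliance, and deliver the low-p request immediately after a criterion run of consecutive compliances) can be sketched as follows. The function and its inputs are hypothetical stand-ins for live observation of the learner, not part of the published procedure.

```python
def high_p_sequence(requests, complied, criterion=3):
    """Scan the learner's responses to a series of high-p requests and
    report whether `criterion` consecutive compliances occurred, which
    is the signal to present the low-p request immediately.

    `complied` is a parallel list of booleans standing in for the
    learner's observed responses; praise follows each compliance.
    """
    streak = 0
    for request, did_comply in zip(requests, complied):
        if did_comply:
            streak += 1              # deliver praise here
            if streak >= criterion:
                return True          # present the low-p request now
        else:
            streak = 0               # a refusal resets the streak
    return False                     # criterion not met; begin a new sequence
```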

Using the High-p Request Sequence Effectively

Selecting from Current Repertoire

The tasks selected for the high-p request sequence should be in the learner’s current repertoire, occur with regularity of compliance, and have a very short duration of occurrence. Ardoin, Martens, and Wolfe (1999) selected high-p requests by (a) creating a list of requests that corresponded to student compliance, (b) presenting each request on the list for five separate sessions, and (c) selecting as high-p requests only those tasks with which the student complied 100% of the time.

Mace (1996) reported that the effectiveness of the high-p sequence apparently increases as the number of high-p requests increases. A high-p request sequence with five requests may be more effective than a sequence with two requests, but the increase in effectiveness may have a trade-off in efficiency. For example, if the same, or nearly the same, effectiveness can be obtained with two or three high-p requests as with five or six, a teacher might select the shorter sequence because it is more efficient. When participants consistently comply with the low-p requests, the trainer should gradually reduce the number of high-p requests.

Presenting Requests Rapidly

The high-p requests should be presented in rapid succession, with short interrequest intervals. The first low-p request should immediately follow the reinforcer for high-p compliance (Davis & Reichle, 1996).

Acknowledging Compliance

The learner’s compliance should be acknowledged immediately. Notice how the teacher in the previous dressing example acknowledged and praised the student’s compliance before presenting the next request (“All right, nice job!”).

Using Potent Reinforcers

Individuals may emit aggression and self-injurious behaviors to escape from the demands of the low-p requests. Mace and Belfiore (1990) cautioned that social praise may not increase compliance if motivation for escape behavior is high. Therefore, high-quality positive stimuli delivered immediately following compliance will increase the effectiveness of the high-p intervention (Mace, 1996). Figure 4 lists considerations for the application of the high-p request sequence.

Functional Communication Training

Functional communication training (FCT) establishes an appropriate communicative behavior to compete with problem behaviors evoked by an establishing operation (EO). Rather than changing EOs, functional communication training develops alternative behaviors that are sensitive to the EOs. This is in contrast to the noncontingent reinforcement and high-probability request sequence interventions, which alter the effects of EOs.

Functional communication training is an application of differential reinforcement of alternative behavior (DRA) because the intervention develops an alternative communicative response as an antecedent to diminish the problem behavior (Fisher, Kuhn, & Thompson, 1998). The alternative communicative response produces the reinforcer that has maintained the problem behavior, making the communicative response functionally equivalent to the problem behavior (Durand & Carr, 1992). The alternative communicative responses can take many forms, such as vocalizations, signs, communication boards, word or picture cards, vocal output systems, or gestures (Brown et al., 2000; Shirley, Iwata, Kahng, Mazaleski, & Lerman, 1997).


Figure 5 Percentage of intervals of challenging behavior by each of the five participants (Matt at store, Allison at mall, Mike at movies, Ron at store, and David at library) in baseline and FCT in community settings. The hatched bars show the percentage of intervals of unprompted communication by each student. From “Functional Communication Training Using Assistive Devices: Recruiting Natural Communities of Reinforcement” by V. M. Durand, 1999, Journal of Applied Behavior Analysis, 32, p. 260. Copyright 1999 by the Society for the Experimental Analysis of Behavior. Reproduced by permission.

Carr and Durand (1985) defined functional communication training as a two-step process: (a) completing a functional behavior assessment to identify the stimuli with known reinforcing properties that maintain the problem behavior, and (b) using those stimuli as reinforcers to develop an alternative behavior to replace the problem behavior. FCT provides an effective treatment for many problem behaviors maintained by social attention.

FCT-based interventions typically involve several behavior change tactics in addition to teaching the alternative communicative response. For example, applied behavior analysts often use a combination of response prompting, time-out, physical restraint, response blocking, redirection, and extinction with the problem behavior.

Durand (1999) used FCT in school and community settings to reduce the problem behaviors of five students with severe disabilities. Durand first completed functional behavior assessments to identify the objects and activities maintaining the problem behaviors. Following the functional assessments, the students learned to use a communication device that produced digitized speech, with which they could request the objects and activities identified during the functional behavior assessments. The five students reduced the occurrences of their problem behaviors in a school setting and in a community setting in which they used digitized speech to communicate with persons in the community. Figure 5 presents data on the percentage of intervals of problem behaviors occurring in community settings for each student. These data are socially significant because they show the importance of teaching skills that recruit reinforcement in natural settings, thereby promoting generalization and maintenance of intervention effects.

Effective Use of FCT

Dense Schedule of Reinforcement

The alternative communicative response should produce the reinforcers that maintain the problem behavior on a continuous schedule of reinforcement during the early stages of communication training.

Decreased Use of Verbal Prompts

While teaching the alternative communicative response, verbal prompts such as “look” or “watch me” are used often. After the communicative response is firmly established, the trainer should gradually reduce the verbal prompts and, if possible, eliminate them altogether to remove any prompt dependence associated with the intervention (Miltenberger, Fuqua, & Woods, 1998).

Behavior Reduction Procedures

The effectiveness of FCT is likely to be enhanced if the treatment package includes extinction for the problem behavior and other behavior-reduction procedures such as time-out (Shirley et al., 1997).

Schedule Thinning

Thinning the reinforcement schedule for a firmly established communicative response is an important part of the FCT treatment package. The time-based procedures described earlier for thinning NCR schedules (constant time increase, proportional time increase, and session-to-session time increase or decrease) are not appropriate for schedule thinning with the alternative communicative response. They are incompatible with the methods used to differentially reinforce the alternative communicative behavior because the FCT intervention does not alter the EO that evokes the problem behavior. The alternative communicative behavior must remain sensitive to the evocative function of the EO to compete with the problem behavior. For example, consider a child with developmental disabilities who has a history of engaging in self-stimulatory behavior when presented with difficult tasks. A therapist teaches the child to ask for assistance with the difficult tasks (i.e., the alternative communicative behavior), which reduces the self-stimulatory behavior. After asking for assistance is firmly established, therapists and caregivers would not want to decrease giving assistance with the tasks when asked (i.e., thinning the reinforcement schedule). A decrease in giving assistance has the potential to produce recovery of the self-stimulatory behavior by breaking the contingency between the alternative communicative behavior and its reinforcer.

Table 4 Possible Advantages and Disadvantages of Functional Communication Training

Advantages

Excellent chance for generalization and maintenance of the alternative communicative response, because the communicative response often functions to recruit reinforcement from significant others (Fisher et al., 1998).

May have high social validity; participants report preferences for FCT over other procedures to diminish behavior (Hanley, Piazza, Fisher, Contrucci, & Maglieri, 1997).

When the alternative communicative behavior and the problem behavior have the same schedule of reinforcement (e.g., FR 1), an FCT intervention may be effective without using extinction (Worsdell, Iwata, Hanley, Thompson, & Kahng, 2000).

Disadvantages

The FCT treatment package usually includes extinction, which may produce undesirable side effects.

The extinction procedure is very difficult to use consistently, allowing for intermittent reinforcement of problem behaviors.

Participants may emit inappropriately high rates of the alternative communicative response to recruit reinforcement (Fisher et al., 1998).

Recruitment of reinforcement can occur at times that are inconvenient or impossible for the caregiver (Fisher et al., 1998).

The fact that FCT leaves intact the environment that evoked the problem behavior may limit its overall effectiveness (McGill, 1999).

Hanley and colleagues (2001) recommended a procedure for schedule thinning that used a dense fixed-interval (FI) schedule of reinforcement (e.g., FI 2 seconds, FI 3 seconds) during the initial teaching of the alternative communication response. Once the communicative response was established, they suggested gradually thinning the FI schedule. This procedure, in contrast to time-based procedures, maintained the contingency between responding and reinforcement. They cautioned that thinning the FI schedule during FCT interventions could produce undesirably high rates of the alternative communicative response that could disrupt home or classroom settings. Hanley and colleagues further suggested the use of picture cues and external “clocks” that announce when reinforcement is available as a possible way to control the undesirably high rate of the communicative response. Table 4 summarizes the advantages and disadvantages of using FCT.
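The key design point, that an FI arrangement preserves the response-reinforcer contingency that time-based (FT/VT) delivery abandons, can be illustrated with a small sketch. The function name and example values are ours, not from the study.

```python
def fi_reinforced_responses(response_times, interval):
    """Under a fixed-interval (FI) schedule, the first communicative
    response after each interval elapses produces the reinforcer, so
    reinforcement stays contingent on responding; a time-based (FT)
    schedule would instead present stimuli regardless of behavior."""
    reinforced = []
    next_available = interval      # reinforcement first available here
    for t in sorted(response_times):
        if t >= next_available:
            reinforced.append(t)   # this response produces the reinforcer
            next_available = t + interval
    return reinforced
```

If no communicative responses occur, nothing is delivered, which is exactly the contingency that time-based schedules would break.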

Summary

Defining Antecedent Interventions

1. The term antecedent refers to the temporal relation of stimuli or events coming before an occurrence of behavior.

2. The convergence of research on motivating operations (MOs) and functional behavior assessments has allowed applied behavior analysts to conceptually align applied research on the effects of antecedent conditions other than stimulus control (e.g., SDs) with basic principles of behavior.

3. Functions of antecedent stimuli or events can be classified as contingency dependent (stimulus control) or contingency independent (MO).

4. Conceptually, this chapter uses the term antecedent intervention to identify behavior-change tactics based on contingency-independent antecedent stimuli.

5. Applied behavior analysts have used several antecedent interventions, singularly or in treatment packages, to decrease the effectiveness of reinforcers that maintain problem behaviors (i.e., abolishing operations).


Antecedent Intervention

6. Because of the temporary effects of MOs, antecedent interventions most often serve as one component of a multicomponent treatment package (i.e., in which antecedent interventions are paired with extinction, differential reinforcement, or other procedures).

Noncontingent Reinforcement

7. Noncontingent reinforcement (NCR) consists of presenting stimuli with known reinforcing properties on a fixed-time (FT) or variable-time (VT) schedule independent of the participant’s behavior.

8. NCR uses three distinct procedures that identify and deliver stimuli with known reinforcing properties: (a) positive reinforcement (i.e., social mediation), (b) negative reinforcement (i.e., escape), and (c) automatic reinforcement (i.e., without social mediation).

9. An NCR-enriched environment may function as an abolishing operation (AO), reducing the motivation to engage in the problem behavior.

High-Probability Request Sequence

10. The behavioral effects of the high-probability (high-p) request sequence suggest the abative effects of an AO by (a) reducing the potency of reinforcement for noncompliance to the low-probability requests (i.e., reducing the value of escape from requests), and (b) reducing the aggression and self-injury often associated with low-p requests.

11. To apply the high-p request sequence, the teacher or therapist selects two to five short tasks with which the learner has a history of compliance. These short tasks provide the responses for the high-p requests. The teacher or therapist presents the high-p request sequence immediately before requesting the target task, the low-p request.

12. Applied behavior analysts have used several labels to identify this intervention, including interspersed requests, pretask requests, and behavioral momentum.

Functional Communication Training

13. Functional communication training (FCT) is a form of differential reinforcement of alternative behavior (DRA) because the intervention develops an alternative communicative response as an antecedent to diminish the problem behavior.

14. Functional communication training is an antecedent intervention package for establishing an appropriate communicative behavior to compete with problem behaviors evoked by an establishing operation (EO). Rather than altering the value of EOs, as do NCR and the high-p request sequence, functional communication training develops alternative behaviors that are sensitive to the EOs that maintain the problem behavior.

15. The alternative communicative response produces the reinforcer that has maintained the problem behavior, making the communicative response functionally equivalent to the problem behavior.


Functional Behavior Assessment

Key Terms

conditional probability
contingency reversal
descriptive functional behavior assessment
functional analysis
functional behavior assessment (FBA)
functionally equivalent
indirect functional assessment

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List,© Third Edition

Content Area 4: Behavioral Assessment

4-1 State the primary characteristics of and rationale for conducting a descriptive assessment.

4-2 Gather descriptive data.

(a) Select various methods.

(b) Use various methods.

4-3 Organize and interpret descriptive data.

(a) Select various methods.

(b) Use various methods.

4-4 State the primary characteristics of and rationale for conducting a functional analysis as a form of assessment.

4-5 Conduct functional analyses.

(a) Select various methods.

(b) Use various methods.

4-6 Organize and interpret functional analysis data.

(a) Select various methods.

(b) Use various methods.

© 2006 The Behavior Analyst Certification Board, Inc.,® (BACB®) all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

This chapter was written by Nancy A. Neef and Stephanie M. Peterson. From Chapter 24 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


When it is time to wash hands before lunch, one child turns the handles of the faucet and places her hands under the running water, but another child screams and tantrums. Why? Consistent with the scientific precept of determinism, those behaviors are lawfully related to other events in the environment. Functional behavior assessment (FBA) enables hypotheses about the relations among specific types of environmental events and behaviors. Specifically, FBA is designed to obtain information about the purposes (functions) a behavior serves for a person. This chapter describes the basis for FBA, its role in the intervention and prevention of behavior difficulties, and alternative approaches to functional assessment.

Functions of Behavior

Evidence from decades of research indicates that both desirable and undesirable behaviors, whether washing hands or screaming and tantrumming, are learned and maintained through interaction with the social and physical environment. These behavior–environment interactions are described as positive or negative reinforcement contingencies. Behaviors can be strengthened by either “getting something” or “getting out of something.”

FBA is used to identify the type and source of reinforcement for challenging behaviors as the basis for intervention efforts designed to decrease the occurrence of those behaviors. FBA can be thought of as a reinforcer assessment of sorts: it identifies the reinforcers currently maintaining problem behavior. Those reinforcers might be positive or negative social reinforcers provided by someone who interacts with the person, or automatic reinforcers produced directly by the behavior itself. The idea behind FBA is that if these reinforcement contingencies can be identified, interventions can be designed to decrease problem behavior and increase adaptive behavior by altering the contingencies. FBA fosters proactive, positive interventions for problem behavior. Although reinforcement contingencies are discussed in other chapters, a brief review of their role in FBA is warranted.

Positive Reinforcement

Social Positive Reinforcement (Attention)

Problem behavior often results in immediate attention from others, such as head turns; surprised facial expressions; reprimands; attempts to soothe, counsel, or distract; and so on. These reactions can serve to positively reinforce problem behavior (even if inadvertently), and the problem behavior is then more likely to occur in similar circumstances. Problem behavior maintained by positive reinforcement in the form of reactions from others can often occur in situations in which attention is otherwise infrequent, whether because the person does not have a repertoire to gain attention in desirable ways, or because others in the environment are typically otherwise occupied.

Tangible Reinforcement

Many behaviors result in access to reinforcing materials or other stimuli. Just as pressing a button on the television remote changes the channel to a desired television show, problem behaviors can produce reinforcing outcomes. A child may cry and tantrum until a favorite television show is turned on; stealing another child's candy produces access to the item taken. Problem behaviors may develop when they consistently produce a desired item or event. This often occurs because providing the item temporarily stops the problem behavior (e.g., a tantrum), although it can have the inadvertent effect of making the problem behavior more probable in the future under similar circumstances.

Automatic Positive Reinforcement

Some behaviors do not depend on the action of others to provide an outcome; some behaviors directly produce their own reinforcement. For example, thumb sucking might be reinforced by physical stimulation of either the hand or the mouth. A behavior is assumed to be maintained by automatic reinforcement only after social reinforcers have been ruled out (e.g., when the behavior occurs even when the individual is alone).

Negative Reinforcement

Social Negative Reinforcement (Escape)

Many behaviors are learned as a result of their effectiveness in terminating or postponing aversive events. Hanging up the phone terminates interactions with a telemarketer; completing a task or chore terminates requests from others to complete it or the demands associated with the task itself. Problem behaviors can be maintained in the same way. Behaviors such as aggression, self-injurious behavior (SIB), and bizarre speech may terminate or avoid unwanted interactions with others. For example, noncompliance postpones engagement in a nonpreferred activity, and disruptive classroom behavior often results in the student being sent out of the classroom, thereby allowing escape from instructional tasks or teacher demands. All of these behaviors can be strengthened by negative reinforcement to the extent that they serve to allow the individual to escape or avoid difficult or unpleasant tasks, activities, or interactions.

Automatic Negative Reinforcement

Aversive stimulation, such as a physically painful or uncomfortable condition, is a motivating operation that makes its termination reinforcing. Behaviors that directly terminate aversive stimulation are therefore maintained by negative reinforcement that is an automatic outcome of the response. Automatic negative reinforcement can account for behaviors that are either appropriate or harmful. For example, putting calamine lotion on a poison ivy rash can be negatively reinforced by alleviation of itching, but intense or prolonged scratching that breaks the skin can be negatively reinforced in the same manner. Some forms of SIB may serve to distract from other sources of pain, which may account for their correlation with specific medical conditions (e.g., DeLissovoy, 1963).

Function versus Topography

Several points can be made from the previous discussion of the sources of reinforcement for behavior. It is important to recognize that environmental influences do not make distinctions between desirable and undesirable topographies of behavior; the same reinforcement contingencies that account for desirable behavior can also account for undesirable behavior. For example, the child who washes and dries her hands before lunch has probably received praise for doing so. A child who frequently engages in tantrums in the same situation may have received attention (in the form of reprimands). Both forms of attention have the potential to reinforce the respective behaviors.

Likewise, the same topography of behavior can serve different functions for different individuals. For example, tantrums may be maintained by positive reinforcement in the form of attention for one child, and by negative reinforcement in the form of escape for another child (e.g., Kennedy, Meyer, Knowles, & Shukla, 2000).

Because behaviors that look quite different can serve the same function, and behavior of the same form can serve different functions under different conditions, the topography, or form, of a behavior often reveals little useful information about the conditions that account for it. Identifying the conditions that account for a behavior (its function), on the other hand, suggests what conditions need to be altered to change the behavior. Assessment of the function of a behavior can therefore yield useful information with respect to intervention strategies that are likely to be effective.

Role of Functional Behavior Assessment in Intervention and Prevention

FBA and Intervention

If the cause-and-effect relation between environmental events and a behavior can be determined, that relation can be altered, thereby diminishing subsequent occurrences of a problem behavior. Interventions based on FBA can consist of at least three strategic approaches: altering antecedent variables, altering consequence variables, and teaching alternative behaviors.

Altering Antecedent Variables

FBA can identify antecedents that might be altered so the problem behavior is less likely to occur. Altering the antecedents for problem behavior can change and/or eliminate either (a) the motivating operation for problem behavior or (b) the discriminative stimuli that trigger problem behavior. For example, the motivating operation for tantrums when a child is asked to wash her hands before lunch could be modified by changing the characteristics associated with lunch so that the avoidance of particular events is no longer reinforcing (e.g., initially reducing table-setting demands, altering seating arrangements to minimize taunts from a sibling or peer, reducing snacks before lunch and offering more preferred foods during lunch). Alternatively, if the FBA shows that running water is the discriminative stimulus that triggers problem behavior when a child is asked to wash her hands, the child might be given waterless antibacterial hand gel instead. In this case, the discriminative stimulus for problem behavior has been removed, thereby decreasing problem behavior.

Altering Consequence Variables

FBA can also identify a source of reinforcement to be eliminated for the problem behavior. For example, an FBA that indicates tantrums are maintained by social negative reinforcement (avoidance or escape) suggests a variety of treatment options, which, by altering that relation, are likely to be effective, such as the following:

1. The problem behavior can be placed on extinction by ensuring that the reinforcer (e.g., avoidance of lunch) is no longer delivered following problem behavior (tantrums).

2. The schedule might be modified so that hand washing follows (and thereby provides escape from) an event that is less preferred.


Teaching Alternative Behaviors

FBA can also identify the source of reinforcement to be provided for appropriate replacement behaviors. Alternative appropriate behaviors that serve the same function (i.e., produce the same reinforcer) as tantrums could be taught. For example, the student might be taught to touch a card communicating "later" after washing his hands to produce a delay in being seated at the lunch table.

FBA and Default Technologies

Interventions based on an FBA are more likely to be effective than those selected arbitrarily (e.g., Ervin et al., 2001; Iwata et al., 1994b). Understanding why a behavior occurs (its function) often suggests how it can be changed for the better. On the other hand, premature efforts to treat problem behavior before seeking an understanding of the purposes it serves for a person can be inefficient, ineffective, and even harmful. For example, suppose that a time-out procedure is implemented in an attempt to attenuate the problem of the child who consistently tantrums when she is asked to wash her hands before lunch. The child is removed from the hand-washing activity to a chair in the corner of the room. It may be, however, that the events that typically follow washing hands (those associated with lunchtime, such as the demands of setting the table or interactions with others) are aversive for the child; tantrums have effectively served to allow the child to avoid those events. In this case, the intervention would be ineffective because it has done nothing to alter the relation between tantrums and the consequence of postponing the aversive events associated with lunch. In fact, the intervention may exacerbate the problem if it produces a desired outcome for the child. If stopping the hand-washing activity and having the child sit on a chair as "time-out" for tantrumming enables the child to avoid the aversive lunchtime events—or to escape them altogether—tantrums may be more likely under similar circumstances in the future. When the time-out intervention proves unsuccessful, other interventions might be attempted. Without understanding the function that the problem behavior serves, however, the effectiveness of those interventions cannot be predicted.

At best, a trial-and-error process of evaluating arbitrarily selected interventions can be lengthy and inefficient. At worst, such an approach may cause the problem behavior to become more frequent or severe. As a result, caregivers might resort to increasingly intrusive, coercive, or punishment-based interventions, which are often referred to as default technologies.

FBA can decrease reliance on default technologies and contribute to more effective interventions in several ways. When FBAs are conducted, reinforcement-based interventions are more likely to be implemented than are interventions that include a punishment component (Pelios, Morren, Tesch, & Axelrod, 1999). In addition, the effects of interventions based on FBAs are likely to be more durable than those that do not take the function of problem behavior into account. If contrived contingencies are superimposed on unknown contingencies that are maintaining the behavior, their continuation is often necessary to maintain improvements in the behavior. If those superimposed contingencies are discontinued, the behavior will continue to be influenced by the unchanged operative contingencies.

FBA and Prevention

By furthering understanding of the conditions under which certain behaviors occur, FBA can also contribute to the prevention of difficulties. Although problem behavior may be suppressed without regard to its function by using punishment procedures, additional behaviors not subject to the punishment contingencies may emerge because the motivating operations for problem behavior remain. For example, contingent loss of privileges might eliminate tantrums that occur whenever a child is asked to wash her hands, but it will not eliminate avoidance as a reinforcer or the conditions that establish it as a reinforcer. Thus, other behaviors that result in avoidance may develop, such as aggression, property destruction, or running away. These unintended effects are less likely to occur with interventions that address (rather than override or compete with) the reinforcing functions of problem behavior.

On a broader scale, the accumulation of FBA data may further assist in prevention efforts by identifying the conditions that pose risks for the future development of problem behaviors. Preventive efforts can then focus on those conditions. For example, based on data from 152 analyses of the reinforcing functions of self-injurious behavior (SIB), Iwata and colleagues (1994b) found that escape from task demands or other aversive stimuli accounted for the behavior in the largest proportion of cases. The authors speculated that this outcome might have been an unintended result of a move toward providing more aggressive treatment. For example, if a child tantrums when she is required to wash her hands, the teacher might assume that she does not know how to wash her hands. The teacher might decide to replace playtime with a period of intensive instruction on hygiene. Rather than decreasing problem behavior, such interventions may exacerbate it. The data reported by Iwata and colleagues (1994b) suggest that preventive efforts should be directed toward modifying instructional environments (such as providing more frequent reinforcement for desirable behavior, opportunities for breaks, or means to request and obtain help with difficult tasks) so that they are less likely to serve as sources of aversive stimulation (motivating operations) for escape.

Overview of FBA Methods

FBA methods can be classified into three types: (a) functional (experimental) analysis, (b) descriptive assessment, and (c) indirect assessment. The methods can be ordered on a continuum with respect to considerations such as ease of use and the type and precision of information they yield. Selecting the method or combination of methods that will best suit a particular situation requires consideration of each method's advantages and limitations. We discuss functional analysis first because descriptive and indirect methods of functional assessment developed as an outgrowth of functional analysis. As noted later, functional analysis is the only FBA method that allows practitioners to confirm hypotheses regarding functional relations between problem behavior and environmental events.

Functional (Experimental) Analysis

Basic Procedure

In a functional analysis, antecedents and consequences representing those in the person's natural environment are arranged so that their separate effects on problem behavior can be observed and measured. This type of assessment is often referred to as an analog because antecedents and consequences similar to those occurring in the natural routines are presented in a systematic manner, but the analysis is not conducted in the context of naturally occurring routines. Analog conditions are often used because they allow the behavior analyst to better control the environmental variables that may be related to the problem behavior than can be accomplished in naturally occurring situations. Analogs refer to the arrangement of variables rather than the setting in which assessment occurs. Research has found that functional analyses conducted in natural environments (e.g., classroom settings) often yield the same (and, in some cases, clearer) results compared to those conducted in simulated settings (Noell, VanDerHeyden, Gatti, & Whitmarsh, 2001).

Functional analyses typically comprise four conditions: three test conditions—contingent attention, contingent escape, and alone—and a control condition, in which problem behavior is expected to be low because reinforcement is freely available and no demands are placed on the individual (see Table 1). Each test condition contains a motivating operation (MO) and a potential source of reinforcement for problem behavior. The conditions are presented systematically one at a time and in an alternating sequence to identify which conditions predictably result in problem behavior. Occurrences of problem behavior are recorded during each session. Sessions are repeated to determine the extent to which problem behavior consistently occurs more often under one or more conditions relative to another.

Interpreting Functional Analyses

The function that problem behavior serves for a person can be determined by visually inspecting a graph of the results of the analysis to identify the condition(s) under which high rates of the behavior occurred. A graph for each potential behavioral function is shown in Figure 1. Problem behavior is expected to be low in the play condition because no motivating operations for problem behavior are present.

Table 1 Motivating Operations and Reinforcement Contingencies for Typical Control and Test Conditions of a Functional Analysis

Play (control)
  Antecedent conditions (motivating operation): Preferred activities are continuously available, social attention is provided, and no demands are placed on the person.
  Consequences for problem behavior: Problem behavior is ignored or neutrally redirected.

Contingent attention
  Antecedent conditions (motivating operation): Attention is diverted or withheld from the person.
  Consequences for problem behavior: Attention in the form of mild reprimands or soothing statements (e.g., "Don't do that. You'll hurt someone.").

Contingent escape
  Antecedent conditions (motivating operation): Task demands are delivered continuously using a three-step prompting procedure (e.g., [1] "You need to fold the towel." [2] Model folding the towel. [3] Provide hand-over-hand assistance to fold the towel.).
  Consequences for problem behavior: A break from the task is provided by removing task materials and stopping prompts to complete the task.

Alone
  Antecedent conditions (motivating operation): Low level of environmental stimulation (i.e., therapist, task materials, and play materials are absent).
  Consequences for problem behavior: Problem behavior is ignored or neutrally redirected.


Figure 1 Data patterns typical of each behavioral function during a functional analysis. Each of the four panels plots number of problem behaviors per minute across sessions for the free play, contingent attention, contingent escape, and alone conditions: positive reinforcement (attention) function (top left); negative reinforcement (escape) function (top right); automatic function (bottom left); and an undifferentiated pattern, i.e., inconclusive results with a possible automatic function (bottom right).

Elevated problem behavior in the contingent attention condition suggests that problem behavior is maintained by social positive reinforcement (see graph in top left of Figure 1). Elevated problem behavior in the contingent escape condition suggests that problem behavior is maintained by negative reinforcement (graph in top right of Figure 1). Elevated problem behavior in the alone condition suggests that problem behavior is maintained by automatic reinforcement (graph in bottom left of Figure 1). Further analysis is needed to determine whether the source of the automatic reinforcement is positive or negative. Problem behavior may be maintained by multiple sources of reinforcement. For example, if problem behavior is elevated in the contingent attention and contingent escape conditions, it is most likely maintained by both positive and negative reinforcement.

If problem behavior occurs frequently in all conditions (including the play condition), or is variable across conditions, responding is considered undifferentiated (see graph in bottom right of Figure 1). Such results are inconclusive, but can also occur with behavior that is maintained by automatic reinforcement.
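The visual-inspection logic described above can be approximated in code. The sketch below is only an illustration of the decision rule, not a clinical standard: the condition names, example rates, and the elevation margin are assumptions for demonstration, and real analyses rely on visual inspection of repeated sessions.

```python
# Illustrative sketch (not a clinical standard): infer likely function(s)
# from mean problem behaviors per minute in each functional analysis
# condition. A test condition is "elevated" if its rate is at least
# `margin` times the control (play) rate; the margin value is assumed.
def likely_functions(rates, control="play", margin=2.0):
    control_rate = rates[control]
    elevated = [c for c, r in rates.items()
                if c != control and r >= margin * max(control_rate, 0.5)]
    if not elevated:
        # High or variable responding everywhere relative to control.
        return ["undifferentiated"]
    mapping = {"contingent attention": "social positive reinforcement",
               "contingent escape": "social negative reinforcement",
               "alone": "automatic reinforcement"}
    return [mapping[c] for c in elevated]

# Hypothetical session means: only the attention condition is elevated.
rates = {"play": 1.0, "contingent attention": 12.5,
         "contingent escape": 1.8, "alone": 1.5}
print(likely_functions(rates))  # ['social positive reinforcement']
```

Rates elevated in more than one test condition would return multiple functions, mirroring the multiple-sources case described above.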

Functional analysis has been replicated and extended in hundreds of studies, thereby demonstrating its generality as an approach to the assessment and treatment of a wide range of behavior difficulties. (See the 1994 special issue of the Journal of Applied Behavior Analysis for a sample of such applications.)

Advantages of Functional Analysis

The primary advantage of functional analysis is its ability to yield a clear demonstration of the variable(s) that relate to the occurrence of a problem behavior. In fact, functional (experimental) analyses serve as the standard of scientific evidence by which other assessment alternatives are evaluated, and represent the method most often used in research on the assessment and treatment of problem behavior (Arndorfer & Miltenberger, 1993). Because functional analyses allow valid conclusions concerning the variables that maintain problem behavior, they have enabled the development of effective reinforcement-based treatments and less reliance on punishment procedures (Ervin et al., 2001; Iwata et al., 1994b; Pelios et al., 1999).

Limitations of Functional Analysis

One limitation of functional analysis is that the assessment process may temporarily strengthen or increase the undesirable behavior to unacceptable levels, or possibly result in the behavior acquiring new functions. Second, although little is known about the acceptability of functional analysis procedures to practitioners (Ervin et al., 2001), the deliberate arrangement of conditions that set the occasion for, or potentially reinforce, problem behavior can be counterintuitive to persons who do not understand its purpose (or that such conditions are analogs for what occurs in the natural routine). Third, some behaviors (e.g., those that, albeit serious, occur infrequently) may not be amenable to functional analyses. Fourth, functional analyses that are conducted in contrived settings might not detect the variable that accounts for the occurrence of the problem behavior in the natural environment, particularly if the behavior is controlled by idiosyncratic variables that are not represented in the functional analysis conditions (e.g., Noell et al., 2001). Finally, the time, effort, and professional expertise required to conduct and interpret functional analyses have frequently been cited as obstacles to their widespread use in practice (e.g., Spreat & Connelly, 1996; the 2001 Volume 2 issue of School Psychology Review).

Of course, untreated or ineffectively treated problem behaviors also consume a great deal of time and effort (without a constructive long-term outcome), and implementation of effective treatments (based on an understanding of the variables that maintain them) is likely to require skills similar to those involved in conducting functional analyses. These concerns have led to research on ways to enhance the practical use of functional analyses, such as methods of practitioner training (Iwata et al., 2000; Pindiprolu, Peterson, Rule, & Lignugaris/Kraft, 2003), abbreviated assessments (Northup et al., 1991), and the development of alternative methods of FBA described in the following sections.

Descriptive Functional Behavior Assessment

As with functional analyses, descriptive functional behavior assessment encompasses direct observation of behavior; unlike functional analyses, however, observations are made under naturally occurring conditions. Thus, descriptive assessments involve observation of the problem behavior in relation to events that are not arranged in a systematic manner. Descriptive assessments have roots in the early stages of applied behavior analysis; Bijou, Peterson, and Ault (1968) initially described a method for objectively defining, observing, and coding behavior and contiguous environmental events. This method has subsequently been used to identify events that may be correlated with the target behavior. Events that are shown to have a high degree of correlation with the target behavior may suggest hypotheses about behavioral function. We describe three variations of descriptive analysis: ABC (antecedent-behavior-consequence) continuous recording, ABC narrative recording, and scatterplots.

ABC Continuous Recording

With ABC continuous recording, an observer records occurrences of the targeted problem behaviors and selected environmental events in the natural routine during a period of time. Codes for recording specific antecedents, problem behaviors, and consequences can be developed based on information obtained from a functional assessment interview or ABC narrative recording (described later). For example, following an interview and observations using narrative recording, Lalli, Browder, Mace, and Brown (1993) developed stimulus and response codes to record the occurrence or nonoccurrence of antecedent events (e.g., one-to-one instruction, group instruction) and subsequent events (attention, tangible reinforcement, escape) for problem behavior during classroom activities.

With ABC continuous recording, the occurrence of a specified event is marked on the data sheet (using partial interval, momentary time sampling, or frequency recording) (see Figure 2). The targeted environmental events (antecedents and consequences) are recorded whenever they occur, regardless of whether problem behavior occurred with them. Recording data in this manner may reveal events that occur in close temporal relation with the target behavior. For example, descriptive data may show that tantrums (behavior) often occur when a student is given an instruction to wash her hands (antecedent); the data may also show that tantrums are typically followed by the removal of task demands. A possible hypothesis in this situation is that disruptions are motivated by academic demands and are maintained by escape from those demands (negative reinforcement).

Advantages of ABC Continuous Recording. Descriptive assessments based on continuous recording use precise measures (similar to functional analyses), and in some cases the correlations may reflect causal relations (e.g., Sasso et al., 1992).


Figure 2 Sample data collection form for ABC continuous recording.

ABC Recording Form
Observer: R. Van Norman
Date: January 25, 2006
Time begin: 9:30 A.M. Time end: 10:15 A.M.

The full form lists every antecedent, behavior, and consequence code as checkboxes for each recorded event. Antecedent codes: Task prompt/instruction; Attention diverted; Social interaction; Engaged in preferred activity; Preferred activity removed; Alone (no attention/no activities). Behavior codes: Tantrum; Aggression. Consequence codes: Social attention; Reprimand; Task demand; Access to preferred item; Task removed; Attention diverted. The items checked for each recorded event were:

Event 1: Attention diverted — Tantrum — Reprimand
Event 2: Task prompt/instruction — Tantrum — Task removed
Event 3: Task prompt/instruction — Tantrum — Task removed
Event 4: Attention diverted — Tantrum — Attention diverted
Event 5: Task prompt/instruction — (no behavior or consequence checked)
Event 6: Task prompt/instruction — Tantrum — Task removed
Event 7: Attention diverted — (no behavior or consequence checked)
Event 8: Task prompt/instruction — Tantrum — Task removed
Event 9: Social interaction — Tantrum — Reprimand
Event 10: Task prompt/instruction — Tantrum — Task removed
Event 11: Social interaction — Tantrum — Attention diverted

Source: Recording form developed by Renée Van Norman. Used by permission.

Because the assessments are conducted in the context in which the problem behaviors occur, they are likely to provide useful information for designing a subsequent functional analysis if that proves necessary. In addition, they do not require disruption to the person's routine.

Limitations of ABC Continuous Recording. Although descriptive analyses of this type may show a correlation between particular events and the problem behavior, such correlations can be difficult to detect in many situations. This is especially likely if the influential antecedents and consequences do not reliably precede and follow the behavior. In such cases, it may be necessary to analyze descriptive data by calculating conditional probabilities. A conditional probability is the likelihood that a target problem behavior will occur in a given circumstance. Given the example provided in Figure 2, the conditional probability of tantrums is computed by calculating (a) the proportion of occurrences of tantrums that were preceded by the antecedent of an instruction and (b) the proportion of occurrences of tantrums for which task removal was the consequence. In this example, 9 instances of tantrums were recorded, and 6 of those occurrences were followed by task removal. Thus, the conditional probability that tantrums occur in the presence of task demands and are followed by escape is .66. The closer the conditional probability is to 1.0, the stronger the hypothesis that escape is a variable maintaining problem behavior.
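That calculation can be sketched directly from ABC continuous-recording data. In the sketch below, each recorded event is a hypothetical (antecedent, behavior, consequence) tuple; the event codes and counts are illustrative and do not reproduce the Figure 2 data.

```python
# Hypothetical ABC continuous-recording entries, one tuple per event:
# (antecedent, behavior, consequence). None means nothing was checked.
records = [
    ("task prompt", "tantrum", "task removed"),
    ("task prompt", "tantrum", "task removed"),
    ("attention diverted", "tantrum", "reprimand"),
    ("task prompt", "tantrum", "task removed"),
    ("task prompt", None, None),
    ("social interaction", "tantrum", "attention"),
    ("task prompt", "tantrum", "task removed"),
]

def conditional_probability(records, behavior, consequence):
    """Proportion of occurrences of `behavior` followed by `consequence`."""
    occurrences = [r for r in records if r[1] == behavior]
    if not occurrences:
        return 0.0
    followed = [r for r in occurrences if r[2] == consequence]
    return len(followed) / len(occurrences)

p = conditional_probability(records, "tantrum", "task removed")
print(round(p, 2))  # 4 of 6 tantrums followed by task removal -> 0.67
```

The same function can be reused with an antecedent index to compute the proportion of tantrums preceded by an instruction, the other proportion described above.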

However, conditional probabilities can be misleading. If a behavior is maintained by intermittent reinforcement, the behavior might occur often even though it is not followed consistently by a particular consequence. For example, the teacher might send the student to time-out only when tantrums are so frequent or severe that they are intolerable. In this case, only a small proportion of tantrums would be followed by that consequence, and the conditional probability would be low. One possibility, therefore, is that a functional relation that exists (e.g., tantrums negatively reinforced by escape) will not be detected. Furthermore, the child's current behavior intervention plan might require three repetitions of the instruction and an attempt to provide physical assistance before time-out is implemented. In that situation, the conditional probability of tantrums being followed by attention would be high. A descriptive analysis might, therefore, suggest a functional relation (e.g., tantrums positively reinforced by attention) that does not exist. Perhaps for these reasons, studies that have used conditional probability calculations to examine the extent to which descriptive methods lead to the same hypotheses as functional analyses have generally found low agreement (e.g., Lerman & Iwata, 1993; Noell et al., 2001).

ABC Narrative Recording

ABC narrative recording is a form of descriptive assessment that differs from continuous recording in that (a) data are collected only when behaviors of interest are observed, and (b) the recording is open-ended (any events that immediately precede and follow the target behavior are noted) (see Figure 3). Because data are recorded only when the target behavior occurs, narrative recording may be less time-consuming than continuous recording. On the other hand, narrative recording has several disadvantages in addition to those described earlier.

Limitations of Narrative Recording. Because narrative recording data are seldom reported in published research, their utility in identifying behavioral function has not been established. However, ABC narrative recording might suggest functional relations that do not exist because antecedent and consequent events are recorded only in relation to the target behavior; it would not be evident from ABC data if particular events occurred just as often in the absence of the target behavior. For example, ABC data might erroneously indicate a correlation between peer attention and disruption even though peer attention also occurred frequently when the student was not disruptive.

Another potential limitation of ABC narrative recording concerns its accuracy. Unless observers receive adequate training, they may report inferred states or subjective impressions (e.g., “felt embarrassed,” “was frustrated”) instead of describing observable events in objective terms. In addition, given the likelihood that a number of environmental events occur in close temporal proximity to one another, discriminating the events that occasion a behavior can be difficult. ABC narrative recording may be best suited as a means of gathering preliminary information to inform continuous recording or functional analyses.

Scatterplots

Scatterplot recording is a procedure for recording the extent to which a target behavior occurs more often at particular times than others (Symons, McDonald, & Wehby, 1998; Touchette, MacDonald, & Langer, 1985). Specifically, scatterplots involve dividing the day into blocks of time (e.g., a series of 30-minute segments). For every time segment, an observer uses different symbols on an observation form to indicate whether the target problem behavior occurred a lot, some, or not at all. After data have been collected over a series of days, they are analyzed for patterns (specific time periods that are typically associated with problem behavior). If a recurring response pattern is identified, the temporal distributions in the behavior can be examined for a relation to particular environmental events. For example, a time period in which the behavior occurs often might be correlated with increased demands, low attention, certain activities, or the presence of a particular person. If so, changes can be made on that basis.
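The tallying step described above can be sketched as follows. The time blocks, ratings, and number of days here are hypothetical, invented purely to show how recurring "hot spots" fall out of scatterplot data.

```python
# Illustrative sketch (not a procedure from the text): tallying scatterplot
# ratings to find time periods associated with problem behavior. Each day's
# record rates every 30-minute block with one of the text's three levels:
# "none", "some", or "a lot".
from collections import Counter

# Hypothetical ratings for three school days, keyed by time block.
days = [
    {"9:00": "none", "9:30": "a lot", "10:00": "some"},
    {"9:00": "none", "9:30": "a lot", "10:00": "none"},
    {"9:00": "some", "9:30": "a lot", "10:00": "none"},
]

# Count how often each block was rated "a lot" across days.
hot_spots = Counter()
for day in days:
    for block, rating in day.items():
        if rating == "a lot":
            hot_spots[block] += 1

print(hot_spots.most_common(1))  # [('9:30', 3)]
```

A block rated "a lot" on every day, like 9:30 here, is the kind of recurring pattern that would then be examined for correlated environmental events (demands, low attention, a particular activity or person).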

Advantages of Scatterplots. The primary advantage of scatterplots is that they identify time periods during which the problem behavior occurs. Such information can be useful in pinpointing periods of the day when more focused ABC assessments might be conducted to obtain additional information regarding the function of the problem behavior.

Limitations of Scatterplots. Although scatterplots are often used in practice, little is known about their utility. It is unclear whether temporal patterns are routinely evident (Kahng et al., 1998). Another problem is that obtaining accurate data with scatterplots may be difficult (Kahng et al., 1998). In addition, the subjective nature of the ratings of how often the behavior occurs (e.g., “a lot” versus “some”) can contribute to difficulties with interpretation (standards for these values might differ across teachers or raters).

Indirect Functional Behavior Assessment

Indirect functional assessment methods use structured interviews, checklists, rating scales, or questionnaires to obtain information from persons who are familiar with the person exhibiting the problem behavior (e.g., teachers, parents, caregivers, and/or the individual him- or herself) to identify possible conditions or events in the natural environment that correlate with the problem behavior. Such procedures are referred to as “indirect” because they do not involve direct observation of the behavior, but rather solicit information based on others’ recollections of the behavior.


Behavioral Interviews

Interviews are used routinely in assessment. The goal of a behavioral interview is to obtain clear and objective information about the problem behaviors, antecedents, and consequences. This might include clarifying descriptions of the behavior; when (times), where (settings, activities, events), with whom, and how often it occurs; what typically precedes the behavior (antecedents); what the child and others typically do immediately following the behavior (consequences); and what steps have previously been taken to address the problem, and with what result. Similar information might be solicited about desirable behavior (or the conditions under which undesirable behavior does not occur) to identify patterns or conditions that predict appropriate versus problem behavior. Information can also be obtained about the child’s apparent preferences (e.g., favorite items or activities), skills, and means of communicating. A skillful interviewer poses questions in a way that evokes specific, complete, and factual responses about events, with minimal interpretation or inferences.

Lists of interview questions have been published, and they provide a consistent, structured format for obtaining information through either an interview or questionnaire format. For example, the Functional Assessment Interview (O’Neill et al., 1997) has 11 sections, which include description of the form (topography) of the behavior, general factors that might affect the behavior (medications, staffing patterns, daily schedule), antecedents and outcomes of the behavior, functional behavior repertoires, communication skills, potential reinforcers, and treatment history.

A form of the Functional Assessment Interview for students who can serve as their own informants is also available (Kern, Dunlap, Clarke, & Childs, 1995; O’Neill et al., 1997). Questions address the behavior(s) that cause trouble for the student at school, a description of the student’s class schedule and its relation to problem behavior, ratings of the intensity of behaviors on a scale of 1 to 6 across class periods and times of day, aspects of the situation related to the behavior (e.g., difficult, boring, or unclear material; peer teasing; teacher reprimands), other events that might affect the behavior (e.g., lack of sleep, conflicts), consequences (what occurs when the individual engages in the behavior), possible behavior alternatives, and potential strategies for a support plan.

Two other questionnaires are the Behavioral Diagnosis and Treatment Information Form (Bailey & Pyles, 1989) and the Stimulus Control Checklist (Rolider & Van Houten, 1993), which also address questions about the conditions under which the behavior occurs or doesn’t occur, and how often. In addition, they include questions about physiological factors that might affect the behavior.

Behavior Rating Scales

Behavior rating scales designed for functional assessment ask informants to estimate the extent to which behavior occurs under specified conditions, using a Likert scale (e.g., never, seldom, usually, always). Hypotheses about the function of a behavior are based on the scores associated with each condition. Those conditions that are assigned the highest cumulative or average rating are hypothesized to be related to the problem behavior. For example, if an informant states that problem behavior always occurs when demands are placed on a child, a negative reinforcement hypothesis might be made. Features of several behavior rating scales are summarized in Table 2.
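The scoring logic just described, averaging ratings per hypothesized function and picking the highest, can be sketched as follows. The items, numeric scores, and function names below are hypothetical; no actual published scale's items or scoring rules are reproduced here.

```python
# Sketch under assumed names and scores (not an actual published scale):
# average the Likert ratings associated with each hypothesized function,
# then take the highest-scoring function as the working hypothesis.

# Hypothetical 0-6 ratings (0 = never, 6 = always) grouped by function.
ratings = {
    "attention": [5, 6, 4, 5],
    "escape":    [2, 1, 3, 2],
    "tangible":  [1, 0, 2, 1],
}

averages = {fn: sum(scores) / len(scores) for fn, scores in ratings.items()}
hypothesized = max(averages, key=averages.get)
print(hypothesized)  # attention
```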

Advantages of Indirect FBA

Some indirect assessment methods can provide a useful source of information in guiding subsequent, more objective assessments, and contribute to the development of hypotheses about variables that might occasion or maintain the behaviors of concern. Because indirect forms of FBA do not require direct observation of problem behavior, many people view them as convenient.

Limitations of Indirect FBA

A major limitation of indirect FBA is that informants may not have accurate and unbiased recall of behavior and the conditions under which it occurred, or be able to report such recollections in a way that conforms to the requirements of the question. Perhaps for these reasons, little research exists to support the reliability of the information obtained from indirect assessment methods. The Motivation Assessment Scale (MAS) is one of the few behavior rating scales that has been evaluated for its technical adequacy. Several studies have evaluated the interrater agreement of the MAS, and almost all of them have found it to be low (Arndorfer, Miltenberger, Woster, Rotvedt, & Gaffaney, 1994; Barton-Arwood, Wehby, Gunter, & Lane, 2003; Conroy, Fox, Bucklin, & Good, 1996; Crawford, Brockel, Schauss, & Miltenberger, 1992; Newton & Sturmey, 1991; Sigafoos, Kerr, & Roberts, 1994; Zarcone, Rodgers, & Iwata, 1991). Barton-Arwood and colleagues (2003) also evaluated the technical adequacy of the Problem Behavior Questionnaire (PBQ). They reported variable intrarater agreement and questionable stability of the identified behavioral function with both the PBQ and MAS as applied to students with emotional or behavioral disorders. Because of the lack of empirical data to support their validity and interrater agreement, indirect methods of FBA are not recommended as the principal means of identifying the functions of behaviors. These scales can, however, provide information useful for forming initial hypotheses, which can later be tested.
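To make "interrater agreement" concrete, here is the simplest possible version of the idea: the proportion of items two raters score identically. The studies cited above used their own statistics and data; the ratings below are invented for illustration only.

```python
# Hedged sketch: one simple index of interrater agreement on a rating scale,
# the proportion of items scored identically by two raters. The published MAS
# evaluations used their own agreement statistics; these ratings are invented.
rater_a = [6, 5, 2, 1, 4, 3, 0, 2]
rater_b = [6, 2, 2, 4, 1, 3, 5, 0]

matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
agreement = matches / len(rater_a)
print(agreement)  # 0.375
```

An agreement value this far below 1.0 illustrates why low interrater agreement undermines confidence in the function a scale identifies.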


Table 2 Overview of Behavior Rating Scales Used to Assess Possible Functions of Problem Behavior

Motivation Assessment Scale (MAS) (Durand & Crimmins, 1992)
  Functions assessed: Sensory reinforcement, escape, attention, and tangible reinforcement
  Format and number of items: 16 questions (4 for each of 4 functions); 7-point scale from always to never
  Example item and possible function: Does the behavior seem to occur in response to your talking to other persons in the room? (attention)

Motivation Analysis Rating Scale (MARS) (Wieseler, Hanson, Chamberlain, & Thompson, 1985)
  Functions assessed: Sensory reinforcement, escape, and attention
  Format and number of items: 6 statements (2 for each of 3 functions); 4-point scale from always to never
  Example item and possible function: The behavior stops occurring shortly after you stop working or making demands on this person. (escape)

Problem Behavior Questionnaire (PBQ) (Lewis, Scott, & Sugai, 1994)
  Functions assessed: Peer attention, teacher attention, escape/avoid peer attention, escape/avoid teacher attention, and assessment of setting events
  Format and number of items: Questions; 7-point range
  Example item and possible function: When the problem behavior occurs, do peers verbally respond to or laugh at the student? (peer attention)

Functional Analysis Screening Tool (FAST) (Iwata & DeLeon, 1996)
  Functions assessed: Social reinforcement (attention, preferred items), social reinforcement (escape), automatic reinforcement by sensory stimulation, automatic reinforcement by pain attenuation
  Format and number of items: Yes or no as to whether statements are descriptive
  Example item and possible function: When the behavior occurs, do you usually try to calm the person down or distract the person with preferred activities (leisure items, snacks, etc.)? (social reinforcement, attention, preferred items)

Questions About Behavioral Function (QABF) (Paclawskyj, Matson, Rush, Smalls, & Vollmer, 2000)
  Functions assessed: Attention, escape, nonsocial, physical, tangible
  Format and number of items: Statements; 4-point range
  Example item and possible function: Participant engages in the behavior to try to get a reaction from you. (attention)

Conducting a Functional Behavior Assessment

Given the strengths and limitations of the different FBA procedures, FBA can best be viewed as a four-step process:

1. Gather information via indirect and descriptive assessment.

2. Interpret information from indirect and descriptive assessment and formulate hypotheses about the purpose of problem behavior.

3. Test hypotheses using functional analysis.

4. Develop intervention options based on the function of problem behavior.

Gathering Information

It is often helpful to begin the FBA process by conducting functional assessment interviews with the person’s teacher, parent, caregiver, and/or others who work closely with the person. The interview can be helpful in preparing the evaluator to conduct direct observations by identifying and defining the target problem behaviors, identifying and defining potential antecedents and consequences that may be observed, and gaining an overall picture of the problem behavior as well as the strengths of the person. The interview can also help determine if other assessments are warranted before a more extensive FBA is conducted. For example, if the interview reveals that the person has chronic ear infections that are currently untreated, a medical evaluation should be conducted before further behavioral assessment takes place.

In many cases conducting an interview with the person who has problem behavior can be helpful if he has the language skills to understand and respond to interview questions. Sometimes the person has useful insights regarding why he displays problem behavior in specific contexts (Kern et al., 1995).

At this point, conducting direct observations of the problem behavior within the natural routine is useful. Such observations help confirm or disconfirm the information obtained through the interviews. If it is not clear when the problem behavior occurs most often, a scatterplot analysis may be useful to determine when further behavioral observations should be conducted. When the problematic time periods have been determined, the behavior analyst can conduct ABC assessments. Information obtained from the interviews is helpful in guiding the ABC assessment because the evaluator should already have clear definitions of the target behavior(s), antecedents, and consequences. However, the behavior analyst must also be watchful for additional, unexpected antecedents and consequences that might present themselves in the natural environment. Teachers or caregivers sometimes overlook or are unaware of specific stimuli triggering or following problem behavior.

Interpreting Information and Formulating Hypotheses

Results from indirect assessments should be analyzed for patterns of behavior and environmental events so that hypotheses regarding the function of the problem behavior can be made. If problem behavior occurs most frequently when low levels of attention are available and problem behavior frequently produces attention, a hypothesis that attention maintains the problem behavior is appropriate. If the problem behavior occurs most frequently in high-demand situations and often produces a reprieve from the task (e.g., through time-out, suspension, or another form of task delay), then a hypothesis that escape maintains the problem behavior is appropriate. If problem behavior occurs in an unpredictable pattern or at high rates across the school day, a hypothesis that the behavior is maintained by automatic reinforcement may be appropriate. In reviewing assessment results and considering possible hypotheses, behavior analysts should remember that behaviors may serve multiple functions and different topographies of problem behavior may serve different functions.

Hypothesis statements should be written in ABC format. Specifically, the hypothesis statement should state the antecedent(s) hypothesized to trigger the problem behavior, the topography of problem behavior, and the maintaining consequence. For example:

Hypothesized function: Escape from hand washing and/or lunch.
Antecedent: When Tonisha is prompted to wash her hands in preparation for lunch, . . .
Behavior: she screams and tantrums, which is followed by . . .
Consequence: termination of hand washing and lunch by being sent to time-out.

Writing hypothesis statements in this manner is useful because it requires the behavior analyst to focus on potential avenues for intervention: modifying the antecedent and/or modifying the reinforcement contingencies (which may involve teaching a new behavior and altering what behaviors are reinforced or placed on extinction).
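The four-part structure of an ABC hypothesis statement can be modeled as a small record type. This is only a sketch of the format described above; the class and field names are my own, while the example values come from the Tonisha hypothesis in the text.

```python
# Minimal sketch of the ABC hypothesis-statement format: one field per part
# of the statement. The class and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class HypothesisStatement:
    function: str     # hypothesized function of the behavior
    antecedent: str   # event(s) hypothesized to trigger the behavior
    behavior: str     # topography of the problem behavior
    consequence: str  # hypothesized maintaining consequence

tonisha = HypothesisStatement(
    function="Escape from hand washing and/or lunch",
    antecedent="When Tonisha is prompted to wash her hands in preparation for lunch,",
    behavior="she screams and tantrums, which is followed by",
    consequence="termination of hand washing and lunch by being sent to time-out.",
)
print(tonisha.function)  # Escape from hand washing and/or lunch
```

Keeping the four parts as separate fields mirrors the point made above: each field is a distinct avenue for intervention (alter the antecedent, teach a replacement behavior, or change the consequence).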

Testing Hypotheses

After hypotheses have been developed, a functional analysis can be conducted to test them. The functional analysis should always contain a control condition that serves to promote the lowest frequency of problem behavior. For most individuals, this is the play condition, which consists of (a) continuous availability of preferred toys and/or activities, (b) no demands, and (c) continuously available attention. Then, conditions are selected to test specific hypotheses. For example, if the primary hypothesis is that the problem behavior is maintained by escape, then a contingent escape condition should be implemented. Other test conditions may not need to be implemented.

Being selective about the test conditions implemented will help keep the functional analysis as brief as possible; however, no conclusions can be made regarding additional functions of problem behavior if additional test conditions are not implemented. For example, if play and contingent escape are the only conditions tested, and problem behavior occurs most frequently in the contingent escape condition and seldom or never occurs in the play condition, the conclusion that the problem behavior is maintained by escape is supported. However, because a contingent attention condition was not implemented, one could not rule out the possibility that the problem behavior is also maintained by attention.

One way to test all possible hypotheses within a short period of time is to use a brief functional analysis procedure (e.g., Boyagian, DuPaul, Wartel, Handler, Eckert, & McGoey, 2001; Cooper et al., 1992; Derby et al., 1992; Kahng & Iwata, 1999; Northup et al., 1991; Wacker et al., 1990). This technique involves implementing one session each of the control condition and each of the test conditions. If an increase in problem behavior is observed in one of the test conditions, a contingency reversal is implemented to confirm the hypothesis rather than conducting many repetitions of all of the conditions. For example, after conducting one of each of the control and test conditions, assume that Tonisha’s tantrums were elevated in the contingent escape condition (see Figure 3). To confirm this hypothesis, the behavior analyst could reinstate the contingent escape condition, followed by a contingency reversal in which problem behavior no longer produces escape. Rather, another behavior (often a request) is substituted for problem behavior. For example, in Tonisha’s case, she could be prompted to form the manual sign “break,” and contingent on signing “break,” a task break is provided. The contingency reversal is then followed by reinstatement of the contingent escape condition, in which the sign “break” is placed on extinction and the tantrums again produce escape. This process could be repeated for any and all conditions in which the problem behavior was elevated during the initial analysis.


Figure 3 Hypothetical data from a brief functional analysis of Tonisha’s tantrums, plotted as number of responses per minute across seven sessions (Free Play, Alone, Contingent Attention, and Contingent Escape, followed by a contingency reversal consisting of Contingent Escape, Escape for Signing “Break,” and Contingent Escape). The closed data points represent tantrums, and the open data points represent manual signs for breaks. The first four sessions represent the brief functional analysis, while Sessions 5 through 7 represent the contingency reversal.

Kahng and Iwata (1999) found that brief functional analysis identified the function of problem behavior accurately (i.e., it yielded results similar to extended functional analyses) in 66% of the cases. Brief functional analyses are useful when time is very limited. However, given that brief functional analyses do not match the results of extended functional analyses at least a third of the time, it is probably wise to conduct an extended functional analysis whenever circumstances permit. In addition, when brief functional analyses are inconclusive, it is recommended that a more extended functional analysis be conducted (Vollmer, Marcus, Ringdahl, & Roane, 1995).

Developing Interventions

When an FBA has been completed, an intervention that matches the function of problem behavior can be developed. Interventions can take many forms. Although FBA does not identify which interventions will be effective in treating problem behavior, it does identify antecedents that may trigger problem behavior, potential behavioral deficits that should be remedied, and reinforcement contingencies that can be altered, as described earlier in this chapter. FBA also identifies powerful reinforcers that can be used as part of the intervention package. The intervention should be functionally equivalent to problem behavior. That is, if problem behavior serves an escape function, then the intervention should provide escape (e.g., in the form of breaks from task demands) for a more appropriate response or involve altering task demands in a fashion that makes escape less reinforcing.

One effective way to design interventions is to review confirmed hypotheses to determine how the ABC contingency can be altered to promote more positive behavior. For example, consider the hypothesis developed for Tonisha:

Hypothesized function: Escape from hand washing and/or lunch.
Antecedent: When Tonisha is prompted to wash her hands in preparation for lunch, . . .
Behavior: she screams and tantrums, which is followed by . . .
Consequence: termination of hand washing and lunch by being sent to time-out.

The antecedent could be altered by changing the time of day when Tonisha is asked to wash her hands (so that it does not precede lunch, thereby decreasing the motivation for escape-motivated tantrums).

Hypothesized function: Escape from hand washing and/or lunch.
Antecedent: Tonisha is prompted to wash her hands before recess (rather than in preparation for lunch).
Behavior: N/A (Problem behavior is avoided.)
Consequence: N/A (The consequence is irrelevant because problem behavior did not occur.)


The behavior could be altered by teaching Tonisha a new behavior (e.g., signing “break”) that results in the same outcome (escape from lunch).

Hypothesized function: Escape from hand washing and/or lunch.
Antecedent: When Tonisha is prompted to wash her hands in preparation for lunch, . . .
Behavior: Tonisha is prompted to sign “break” (instead of screaming and tantruming), which is followed by . . .
Consequence: termination of hand washing and lunch.

Or, the consequences could be altered. For example, the reinforcer for problem behavior could be withheld so that problem behavior is extinguished.

Hypothesized function: Escape from hand washing and/or lunch.
Antecedent: When Tonisha is prompted to wash her hands in preparation for lunch, . . .
Behavior: she screams and tantrums, which is followed by . . .
Hypothesized consequence: continued presentation of hand-washing and lunch activities (instead of termination of hand washing and lunch by being sent to time-out).

An intervention can also consist of several different components. For example, Tonisha could be taught a replacement behavior (signing “break”), which results in breaks from lunch, while tantrums are simultaneously placed on escape extinction.

FBA can also help identify interventions that are likely to be ineffective or that may worsen the problem behavior. Interventions involving time-out, in- or out-of-school suspension, or planned ignoring are contraindicated for problem behaviors maintained by escape. Interventions involving reprimands, discussion, or counseling are contraindicated for problem behaviors maintained by attention.

A final word about intervention: When an intervention has been developed, FBA is not “done.” Assessment is an ongoing practice that continues while intervention is implemented and supports continued monitoring of intervention effectiveness. The functions of behavior are not static; they are dynamic and change over time. An intervention may lose its effectiveness because the function of problem behavior changes (Lerman, Iwata, Smith, Zarcone, & Vollmer, 1994). In such cases, additional functional analyses may need to be conducted to revise the intervention.

Case Examples Illustrating the FBA Process

FBA is a highly idiosyncratic process. It is unusual for any two FBAs to be exactly the same because each person presents with a unique set of skills and behaviors, as well as a unique history of reinforcement. FBA requires a thorough understanding of behavioral principles to extract the relevant information from interviews and ABC assessments, to form relevant hypotheses, and to test those hypotheses. Beyond these skills, a solid understanding of behavioral interventions (e.g., differential reinforcement procedures, schedules of reinforcement, and tactics for promoting maintenance and generalization) is needed to match effective treatments to the function of challenging behavior. This can seem like a daunting process. In an attempt to demonstrate the application of FBA across the idiosyncratic differences in people, we present four case examples.

Brian—Multiple Functions of Problem Behavior

Gathering Information

Brian was 13 years old and diagnosed with pervasive developmental delay, oppositional defiant disorder, and attention-deficit/hyperactivity disorder. He had moderate delays in cognitive and adaptive skills. Brian displayed several problem behaviors, including aggression, property destruction, and tantrums. Brian’s aggression had resulted in several of his teachers having bruises, and his property destruction and tantrums frequently disrupted the daily activities of the classroom.

A Functional Assessment Interview (O’Neill et al., 1997) was conducted with Brian’s teacher, Ms. Baker, who reported that Brian’s problem behavior occurred most frequently when he was asked to perform a task that required any kind of physical labor (e.g., shredding papers) and occurred least during leisure activities. However, Ms. Baker reported that Brian often engaged in problem behavior when he was asked to leave a preferred activity. She noted that Brian used complex speech (sentences), although he often used verbal threats (e.g., curse words) and/or aggression, property destruction, and tantrums to communicate his wants and needs.

Because Brian was verbal, a Student-Assisted Functional Assessment Interview (Kern et al., 1995) was also conducted. In this interview, Brian reported that he found his math work too difficult but that writing and using a calculator were too easy. He reported that he sometimes received help from his teachers when he asked for it, that sometimes teachers and staff noticed when he was doing a good job, and that he sometimes received rewards for doing good work. Brian indicated that his work periods were always too long, especially those that consisted of shredding papers. Brian reported that he had the fewest problems in school when he was allowed to answer the phone (his classroom job), when he was completing math problems, and when he was playing with his Gameboy. He stated that he felt he had the most problems at school when he was outside playing with the other students because they often teased him, called him names, and cursed at him.

An ABC assessment was conducted on two separate occasions. The results of the ABC assessment are shown in Table 3.

Table 3 Results of ABC Assessments for Brian’s Aggression, Property Destruction, and Tantrums

Antecedent: Adult attention diverted to another student; denied access to Nintendo by teacher (i.e., told no when he asked if he could play it)
Behavior: Yelled at teacher, “That’s not fair! Why do you hate me?!”
Consequence: Told to “calm down”

Antecedent: Teacher attending to another student
Behavior: Hit sofa, attempted to leave classroom
Consequence: Given choice of activity and verbal warning to stay in classroom

Antecedent: Teacher attention diverted to another student
Behavior: Yelled, “Stop!” at another student
Consequence: Reprimand from teacher: “Don’t worry, Brian. I will take care of it.”

Antecedent: Story time, teacher attending to other students
Behavior: Laughed loudly
Consequence: Reprimand from teacher: “Stop it!”

Antecedent: Story time, teacher listening to other students
Behavior: Interrupted other students while they were talking: “Hey, it’s my turn. I know what happens next!”
Consequence: Reprimand from teacher: “You need to listen.”

Interpreting Information and Formulating Hypotheses

Based on the interviews and ABC assessments, the function of Brian’s problem behavior was unclear. It was hypothesized that some of Brian’s problem behaviors were maintained by access to adult attention and preferred items. This hypothesis was a result of the ABC assessment, which indicated that many of Brian’s problem behaviors occurred when adult attention was low or when access to preferred items was restricted. Brian’s problem behavior often resulted in access to adult attention or preferred activities. It was also hypothesized that Brian’s problem behavior was maintained by escape because his teacher reported that Brian frequently engaged in problem behavior in the presence of task demands and because Brian reported that some of his work was too hard and work periods lasted too long. Thus, a functional analysis was conducted to test these hypotheses. The hypothesis statements for Brian are summarized in Table 4.

Testing Hypotheses

Next, a functional analysis was completed for Brian. The functional analysis consisted of the same conditions as described previously, with two exceptions. First, an alone condition was not conducted because there was no reason to believe that Brian’s problem behavior served an automatic function. Second, a contingent tangible condition was added because there was reason to believe Brian engaged in problem behavior to gain access to preferred tangibles and activities. This condition was just like the play condition (i.e., Brian had access to adult attention and preferred toys at the beginning of the session), except that intermittently throughout the session, he was told it was time to give his toy to the teacher and to play with something else (which was less desirable). If Brian complied with the request to give the toy to the teacher, he was given a less preferred toy. If he engaged in problem behavior, he was allowed to continue playing with his preferred toys for a brief period of time.

Table 4 Hypothesis Statements for Brian

Hypothesized function: Gain attention from adults and peers
Antecedent: When adult or peer attention is diverted from Brian, . . .
Behavior: he engages in a variety of problem behaviors, which result in . . .
Consequence: attention from adults and peers.

Hypothesized function: Gain access to preferred toys and activities
Antecedent: When Brian’s access to preferred toys and activities is restricted, . . .
Behavior: he engages in a variety of problem behaviors, which result in . . .
Consequence: gaining access to preferred toys and activities.

Hypothesized function: Escape from difficult and/or nonpreferred tasks
Antecedent: When Brian is required to perform difficult or undesirable tasks, . . .
Behavior: he engages in a variety of problem behaviors, which result in . . .
Consequence: the tasks being removed.

Figure 4 Results of Brian’s functional analysis, plotted as frequency of inappropriate behavior per session across the Free Play, Contingent Attention, Contingent Escape, and Contingent Tangible conditions. Inappropriate behavior consists of aggression, property destruction, and tantrums. Functional analysis conducted by Renée Van Norman and Amanda Flaute.

The results of the functional analysis are shown in Figure 4. Notice that problem behavior never occurred in the play condition, but it occurred at high rates in all three of the test conditions (contingent attention, escape, and tangible). These results indicated that Brian’s problem behavior was maintained by escape, attention, and access to preferred items. During the play condition, however, when continuous attention and preferred items were available and no demands were placed on Brian, problem behavior never occurred.

Developing an Intervention

Based on the results of the functional analysis, a multicomponent intervention was implemented. The intervention components changed at different points in time depending on the context. These components, as they relate to the functions of problem behavior, are summarized in Table 5. For example, when Brian was engaged in a work task, it was recommended that he be given frequent opportunities to request breaks. In addition, the time-out intervention that the teacher had been using was discontinued in the work context. During leisure times, when Brian had previously been expected to play alone, the classroom schedule was rearranged so that Brian could play and interact with peers. Brian was also taught to request toys appropriately while playing with peers. In addition, several interventions aimed at increasing teacher attention for appropriate behavior were implemented. Brian was taught how to request teacher attention appropriately, and teachers began to respond to these requests rather than ignore them (as they had previously done). In addition, a self-monitoring plan was established, in which Brian was taught to monitor his own behavior and match his self-recordings to his teachers'. Accurate self-recording resulted in teacher praise and access to preferred activities with the teacher. Brian's teachers also implemented their own plan to increase attention and praise to Brian every 5 minutes as long as he was not engaged in problem behavior during independent work.

Kaitlyn—Attention Function for Problem Behavior

Gathering Information

Kaitlyn was 12 years old and diagnosed with attention-deficit/hyperactivity disorder. She also displayed some fine and gross motor deficits. Kaitlyn attended classes in both her general education sixth-grade classroom and a special education classroom. She frequently displayed off-task behavior, which consisted of being out of her seat, touching others (e.g., playing “footsie” under the desk with peers), making noises, and talking out of turn. A Functional Assessment Interview (O'Neill et al., 1997) was conducted with Kaitlyn's teacher, who stated that, typically, Kaitlyn asked questions constantly when she was given a difficult task. The teacher also stated that Kaitlyn often became confused when her routines were changed, and that she therefore needed a lot of assistance.


Table 5 Summary of Brian's Intervention Components

Intervention Options for Attention Function

Intervention: Teach a new behavior (social attention)
Antecedent: When adult or peer attention is diverted from Brian, . . .
Behavior: he will raise his hand and say, “Excuse me . . .”
Consequence: and adults and peers will provide attention to Brian.

Intervention: Teach a new behavior
Antecedent: When adult or peer attention is diverted from Brian, . . .
Behavior: he will self-monitor his appropriate independent work and match teacher recordings . . .
Consequence: and the teachers will provide him with one-on-one time if he meets a specific criterion.

Intervention: Change the antecedent
Antecedent: During independent work times, adults will provide attention to Brian every 5 minutes . . .
Behavior: to increase the probability that Brian will appropriately work independently, . . .
Consequence: which will increase adult opportunities to praise and attend to appropriate behavior.

Intervention: Change the antecedent
Antecedent: Allow Brian to play with peers during leisure times . . .
Behavior: to increase the probability that Brian will play appropriately, . . .
Consequence: which will increase adult opportunities to praise appropriate behavior and for peers to respond positively.

Intervention Options for Tangible Function

Intervention: Teach a new behavior
Antecedent: When Brian's access to preferred toys and activities is restricted, . . .
Behavior: he will say, “Can I have that back, please?” . . .
Consequence: and the teacher will provide access to preferred toys and activities.

Intervention Options for Escape Function

Intervention: Teach a new behavior
Antecedent: When Brian is required to perform a difficult or undesirable task, . . .
Behavior: he will say, “May I take a break now?” . . .
Consequence: and the teacher will allow Brian to take a break from the task.

Intervention: Change the reinforcement contingency
Antecedent: When Brian is required to perform difficult or undesirable tasks . . .
Behavior: and he engages in a variety of problem behaviors, . . .
Consequence: he will be required to continue working on the task and the time-out intervention will be discontinued.

Because there was only one teacher in the classroom and 25 other students, Kaitlyn received little additional attention, and her teacher hypothesized that she engaged in off-task behavior to gain attention.

The results of an ABC assessment showed that Kaitlyn often received little adult attention in the classroom, except when she displayed off-task behavior. Although off-task behavior occurred when attention was diverted, Kaitlyn was often engaged in demanding tasks simultaneously. Thus, it was unclear which antecedent—the low attention or the high-demand activity—was related to her problem behavior.

Interpreting Information and Formulating Hypotheses

Based on the information obtained from interviews and ABC assessments, it was hypothesized that Kaitlyn's off-task behavior served an attention function. The hypotheses developed for Kaitlyn are summarized in Table 6.

Table 6 Hypotheses Regarding the Function of Kaitlyn's Off-Task Behavior

Hypothesized function: Primary hypothesis—Gain attention from adults
Antecedent: When teacher attention is diverted from Kaitlyn, . . .
Behavior: she engages in off-task behavior, which results in . . .
Consequence: teacher attention (reprimands, task prompts).

Hypothesized function: Alternative hypothesis—Escape from difficult academic tasks
Antecedent: When Kaitlyn is required to work on academic tasks, . . .
Behavior: she engages in off-task behavior, which results in . . .
Consequence: those tasks being removed.

Figure 5 Results of the functional analysis for Kaitlyn's off-task behavior. FP = free play; CA/FP = contingent attention during free play activities; CA/Easy = contingent attention during easy academic activities; CA/Difficult = contingent attention during difficult academic activities. (The graph plots the percentage of 10-second intervals with off-task behavior across sessions 2–20.) Functional analysis conducted by Jessica Frieder, Jill Grunkemeyer, and Jill Hollway.

However, because Kaitlyn was typically observed in academic activities that were paired with low attention situations, it was unclear what role task difficulty played in her off-task behavior. It was unclear whether she engaged in problem behavior to gain attention or because the academic tasks were too difficult. Would Kaitlyn engage in similar levels of problem behavior when play materials were present as compared to easy or difficult academic materials while attention was diverted? It was important to design a functional analysis that would address these questions.

Testing Hypotheses

Kaitlyn's functional analysis demonstrates how functional analysis conditions can be constructed to test many kinds of hypotheses. Her functional analysis consisted of the standard play and escape conditions described previously. However, several different contingent attention conditions were implemented to determine whether demands interacted with low attention conditions to evoke problem behavior.

Three contingent attention conditions were conducted: contingent attention during free play activities (CA/FP), contingent attention during easy academic activities (CA/Easy), and contingent attention during difficult academic activities (CA/Difficult). In all three conditions, attention was diverted from Kaitlyn until she engaged in off-task behavior. Contingent on off-task behavior, a teacher approached her and delivered a mild reprimand (e.g., “Kaitlyn, what are you doing? You're supposed to be doing this activity. I need you to get back to your activity now.”). The conditions differed in the type of activity Kaitlyn was asked to work on. During CA/FP, Kaitlyn was allowed to play with a game of her choice. During CA/Easy, Kaitlyn was required to solve single-digit addition problems. During CA/Difficult, Kaitlyn was required to solve multidigit subtraction problems that required regrouping.

1 The MotivAider is a device that can be set to vibrate at specific intervals of time. The MotivAider can be obtained at www.habitchange.com.

The results of the functional analysis are shown in Figure 5. Little off-task behavior occurred in the free play and escape conditions. Thus, the hypothesis that Kaitlyn engaged in off-task behavior to escape tasks was not supported. Increased off-task behavior was observed in all three of the contingent attention conditions. These data suggested that Kaitlyn's off-task behavior served an attention function regardless of the type of activity in which she was involved.

Developing an Intervention

Kaitlyn was verbal and often requested attention appropriately. Therefore, Kaitlyn was taught to monitor her on- and off-task behavior when she was required to work or play independently. Initially, Kaitlyn was taught to monitor her behavior every 10 seconds. (This was the longest she was able to stay on task without prompting during baseline.) Both she and a classroom assistant wore a vibrating timer (a MotivAider)1 that would not distract the other students. When the timer vibrated, Kaitlyn marked on a self-monitoring sheet whether she was on or off task. She then looked up at the assistant. If Kaitlyn was on task, the assistant gave Kaitlyn a thumbs-up signal and a smile.


Occasionally, the assistant also approached Kaitlyn, gave her a pat on the back, and said, “Way to stay on task!” If she was not on task, the assistant looked away from her. At the end of 10 minutes, the assistant and Kaitlyn compared marks. If Kaitlyn was on task on 50 of 60 checks and she matched the teacher on 57 of 60 checks, she was allowed to engage in a special activity with the assistant for 5 minutes. Over time, the interval length and duration of the observation period were increased until Kaitlyn worked appropriately for the entire period.
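The matching contingency described above is simple arithmetic over the 60 checks in a 10-minute period (one check per 10-second interval). The sketch below makes that arithmetic explicit; it is illustrative only, since the classroom used paper recording sheets, and the function name and data format are hypothetical.

```python
def earns_reward(student_marks, observer_marks,
                 on_task_criterion=50, match_criterion=57):
    """Check the two criteria from Kaitlyn's self-monitoring plan.

    Each mark is True (on task) or False (off task), one per 10-second check.
    The reward is earned only if the observer scored at least
    on_task_criterion checks as on task AND the two records agree on at
    least match_criterion checks.
    """
    on_task = sum(observer_marks)  # checks the assistant scored as on task
    matches = sum(s == o for s, o in zip(student_marks, observer_marks))
    return on_task >= on_task_criterion and matches >= match_criterion
```

For example, a record the assistant scored as on task for 55 of 60 checks, with the two records agreeing on 58 checks, would meet both criteria and earn the 5-minute activity.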

DeShawn—Automatic Function of Problem Behavior

Gathering Information

DeShawn was 10 years old and diagnosed with autism. He had severe developmental delays and was blind. He took Risperdal for behavioral control. DeShawn frequently threw objects across the room, dropped objects and task materials off the table, and knocked objects from tables with a sweeping motion of his arm, which interfered with work on academic tasks. A Functional Assessment Interview (O'Neill et al., 1997) yielded little useful information because the teacher stated that DeShawn's throwing, dropping, and sweeping were unpredictable. She could not identify any antecedents that typically preceded the problem behavior. The intervention in place at the time was to limit DeShawn's access or proximity to objects that he might throw. ABC observations conducted within the natural routine also produced little useful information because the teachers prevented and blocked all occurrences of the problem behavior. Thus, very few occurrences of the problem behavior were observed. However, it was noted that DeShawn was rarely actively involved in lessons. For example, when the teacher read a book to the class, DeShawn was unable to participate because he could not see the pictures. At other times, DeShawn was engaged in individual work tasks, but these tasks did not appear to be meaningful or appropriate for his skill level.

Interpreting Information and Formulating Hypotheses

It was difficult to formulate a hypothesis based on the limited information from the teacher and direct observations. Because DeShawn did not seem actively involved or interested in the classroom activities, it was hypothesized that throwing, dropping, and sweeping were automatically reinforcing. However, other hypotheses, such as attention, escape, or multiple functions, could not be ruled out.

Testing Hypotheses

The functional analysis consisted of play, contingent attention, and contingent escape conditions. An alone condition was not conducted because there was not a room that allowed DeShawn to be observed and supervised covertly. The results of the functional analysis are shown in Figure 6. Throwing, dropping, and sweeping occurred at high and variable rates across all three conditions, demonstrating an undifferentiated pattern. These results were inconclusive, but suggested an automatic reinforcement function for throwing, dropping, and sweeping.

Figure 6 Results of DeShawn's functional analysis. (The graph plots the number of throws, drops, and sweeps across sessions 2–10 under the contingent attention, free play, and contingent escape conditions.) Functional analysis conducted by Susan M. Silvestri, Laura Lacy Rismiller, and Jennie E. Valk.

Figure 7 Results of DeShawn's preference assessment. The bars indicate the number of times each stimulus was selected (chips, tube, putty, twirling toy, Koosh ball, Slinky). The line graph represents the mean number of seconds DeShawn played with each item before throwing it. Preference assessment conducted by Susan M. Silvestri, Laura Lacy Rismiller, and Jennie E. Valk.

Further analysis might have identified more precisely the source of reinforcement (e.g., sweeping motion, sound of objects falling, a clear table surface). Instead, we sought to identify alternative reinforcers that might compete with automatically maintained problem behavior.

We conducted a forced-choice preference assessment (Fisher et al., 1992) to identify highly preferred stimuli. The results of this assessment are shown in Figure 7. DeShawn most frequently chose potato chips. Data were also collected on the number of seconds DeShawn played with each item before he threw it. (He was allowed to have access to each item for 30 seconds, so a data point at 30 seconds indicates that he played with the item the entire time and never threw it.) No data were collected on how long DeShawn “played” with potato chips because he did not play with them; he consumed them. However, it should be noted that DeShawn never threw the potato chips, nor did he throw any other objects in the area when he had access to potato chips. He immediately put them in his mouth and ate them. These two pieces of information indicated that potato chips were highly preferred and might function as a reinforcer that could compete with throwing, dropping, and sweeping. Although the tube that made noise was chosen second most frequently, DeShawn typically threw it after playing with it for only 12 seconds.
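Scoring an assessment like this comes down to tallying how often each item was selected and averaging the pre-throw durations. A minimal sketch of that bookkeeping, assuming a simple record format (the actual assessment was scored by hand; the item names, data shapes, and helper function here are all invented for illustration):

```python
from collections import Counter
from statistics import mean

def summarize_preference(selections, play_durations):
    """Tally a forced-choice preference assessment.

    selections: list of the item chosen on each paired-choice trial.
    play_durations: dict of item -> list of seconds the item was held
    before being thrown (capped at 30, the access time per trial).
    Returns (selection counts, mean pre-throw duration per item).
    """
    counts = Counter(selections)
    mean_secs = {item: mean(secs) for item, secs in play_durations.items()}
    return counts, mean_secs

# Tiny made-up data set: chips chosen most often; 30 s means never thrown.
counts, mean_secs = summarize_preference(
    ["chips", "tube", "chips", "slinky", "chips", "tube"],
    {"tube": [12, 12], "slinky": [30, 30]},
)
```

Combining both measures, as the study did, guards against picking an item that is chosen often but discarded quickly, like the noisy tube.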

The next assessment evaluated whether potato chips served as a reinforcer for pressing a microswitch that said “Chip, please” and whether the use of potato chips would compete with throwing (based on the microswitch assessment procedures of Wacker, Berg, Wiggins, Muldoon, & Cavanaugh, 1985). First, baseline data were collected on the number of appropriate switch presses (pressing the switch once to activate the voice); the number of inappropriate switch presses (repeatedly pressing or banging the switch); the number of attempts to throw the switch; and the number of throws, drops, and sweeps of other objects. During baseline, DeShawn was provided with access to the microswitch; however, neither chips nor the Slinky was delivered for pressing the switch. Next, the voice on the switch said “Chip, please,” and a potato chip was provided contingent on appropriate switch pressing. In the next condition, the voice on the switch said “Slinky, please,” and the Slinky (a nonpreferred toy) was provided contingent on appropriate switch pressing. Finally, the voice on the switch said “Chip, please” again, and a potato chip was provided contingent on appropriate switch pressing to form an ABCB reversal design.

The results of the microswitch assessment are shown in Figure 8. Appropriate microswitch presses increased when potato chips were provided contingently. Interestingly, no throwing, dropping, or sweeping occurred when the potato chip was available. Some inappropriate switch presses also occurred, but appropriate switch presses were more frequent. When microswitch pressing resulted in a nonpreferred toy, appropriate switch pressing decreased and throwing, dropping, and sweeping increased. This suggested that potato chips competed effectively with throwing, dropping, and sweeping.


Figure 8 Results of DeShawn's microswitch assessment. During Baseline (Switch = Nothing), touching the switch did not result in any tangible stimuli. During Switch = Chip, touching the switch resulted in a bite of potato chip. During Switch = Slinky, touching the switch resulted in access to the Slinky for 30 seconds. (The graph plots the frequency of appropriate switch presses and of throwing, dropping, and sweeping across sessions 2–24.) Reinforcer assessment conducted by Renée Van Norman, Amanda Flaute, Susan M. Silvestri, Laura Lacy Rismiller, and Jennie E. Valk.

Developing an Intervention

Based on these assessments, an intervention was developed that involved giving DeShawn small bites of potato chips for participating in classroom activities appropriately. In addition, classroom activities and routines were modified to increase DeShawn's participation, and his curriculum was modified to include instruction on more functional activities.

Lorraine—Multiple Topographies That Serve Multiple Functions

Lorraine was 32 years old and functioned in the moderate range of mental retardation. She had a diagnosis of Down syndrome and bipolar disorder with psychotic symptoms, for which Zoloft and Risperdal were prescribed. She also took Tegretol for seizure control. She was verbal, but her verbal skills were low and her articulation was poor. She communicated through some signs, a simple communication device, gestures, and some words.

Lorraine had resided in a group home for 9 years and attended a sheltered workshop during the day. Lorraine displayed noncompliance, aggression, and SIB in both settings, but the FBA focused on her problem behavior in the group home, where it was more severe and frequent. Noncompliance consisted of Lorraine putting her head down on the table, pulling away from people, or leaving the room when requests were made of her; aggression consisted of kicking others, throwing objects at others, biting others, and squeezing others' arms very hard; SIB consisted of biting her arm, pulling her hair, or pinching her skin.

Gathering Information

Interviews were conducted with Lorraine, her parents, and workshop and group home staff. Lorraine's parents noted that some of her behavior problems had increased when changes in her medication had been made. Workshop staff noted that Lorraine was more likely to have problem behavior at work if many people were around her. Workshop staff had also noted that noncompliance had increased shortly after a dosage change in medication 2 months previously. The group home staff noted that they were most concerned about Lorraine's leaving the group home when she was asked to perform daily chores. Lorraine would often leave the group home and not return until the police had picked her up. Many neighbors had complained because Lorraine would sit on their porches for hours until the police came and removed her.

An ABC assessment was conducted at the workshop and group home to determine whether environmental variables differed across the two settings (e.g., the manner in which tasks were presented, the overall level of attention). At the workshop, Lorraine was engaged in a jewelry assembly task (one she reportedly enjoyed), and she worked well for 2 1/2 hours. She appeared to work better when others paid attention to her and often became off task when she was ignored; however, no problem behavior was observed at work. At the group home, aggression was observed when staff ignored Lorraine. No other problem behavior occurred. No demands were placed on Lorraine in the group home during the ABC observation. Group home staff rarely placed any demands on Lorraine in an attempt to avoid her problem behavior.

Figure 9 Results of Lorraine's functional analysis. (Three panels plot noncompliance, aggression, and SIB as the percentage of 10-second intervals with problem behavior across sessions 2–12, under the contingent escape, free play, and contingent attention conditions.) Functional analysis conducted by Corrine M. Murphy and Tabitha Kirby.

Interpreting Information and Formulating Hypotheses

Some of Lorraine's problem behaviors seemed to be related to a dosage change in her medication. Because Lorraine's physician judged her medication to be at therapeutic levels, a decision was made to analyze the environmental events related to her problem behavior. Observations of problem behavior during the ABC assessment were limited because workshop staff placed minimal demands on Lorraine to avoid problem behaviors. However, Lorraine's noncompliance reportedly occurred when demands were placed on her. Therefore, it was hypothesized that these problem behaviors were maintained by escape from task demands. Aggression occurred during the ABC assessment when Lorraine was ignored. Although SIB was not observed during the ABC assessment, group home staff reported that Lorraine often engaged in SIB during the same situations that evoked aggression. Therefore, it was hypothesized that both aggression and SIB were maintained by attention.

Testing Hypotheses

The functional analysis consisted of free play, contingent attention, and contingent escape conditions (see Figure 9). Because the problem behaviors may have served different functions, each problem behavior was coded and graphed separately. Noncompliance occurred most frequently during the contingent escape condition and rarely occurred during the free play or contingent attention conditions. SIB occurred most frequently during the contingent attention condition and rarely occurred during the free play or contingent escape conditions. These data suggested that noncompliance served an escape function, and SIB served an attention function. As is often the case for low-frequency, high-intensity behaviors, it was difficult to form hypotheses about the function of aggression because the behavior occurred rarely in any of the FBA conditions.
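The percentages graphed in Figure 9 come from 10-second interval recording: each interval is scored once for whether the behavior occurred, and the session score is the share of intervals scored positive. A sketch of that computation, assuming partial-interval scoring (the text labels the axis but does not name the recording variant, and the session length and data format below are invented for illustration):

```python
def percent_intervals(intervals):
    """intervals: booleans, one per 10-second interval, True if the target
    behavior occurred at any point during that interval.
    Returns the percentage of intervals containing the behavior."""
    if not intervals:
        return 0.0
    return 100.0 * sum(intervals) / len(intervals)

# A hypothetical 2-minute session has twelve 10-second intervals; each
# topography (noncompliance, aggression, SIB) is scored and graphed separately.
session = [True, False, True, True, False, False,
           False, True, False, False, False, False]
```

Here `percent_intervals(session)` scores 4 of 12 intervals, about 33%; scoring each topography on its own record is what lets the three panels of a graph like Figure 9 diverge.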


Developing an Intervention

Different interventions were developed for the problem behaviors because results of the FBA suggested that the behaviors served different functions. To address noncompliance, Lorraine was taught to request breaks from difficult tasks. Tasks were broken down into very small steps. Lorraine was presented with only one step of a task at a time. Each time a task request was made, Lorraine was reminded that she could request a break (either by saying “Break, please” or by touching a break card). If she requested a break, the task materials were removed for a brief period of time. Then they were presented again. Also, if Lorraine engaged in noncompliance, she was not allowed to escape the task. Instead she was prompted through one step of the task and then another step of the task was presented. Initially, Lorraine was allowed to completely escape the task if she appropriately requested a break each time the task was presented. Over time, however, she was required to complete increasing amounts of work before a break was allowed.

Intervention for aggression consisted of teaching Lorraine appropriate ways to gain attention (e.g., tapping someone on the arm and saying, “Excuse me”) and teaching group home staff to regularly attend to Lorraine when she made such requests. In addition, because her articulation was so poor, a picture communication book was created to assist Lorraine in having conversations with others. This communication book could be used to clarify words that staff could not understand. Finally, staff were encouraged to ignore Lorraine's SIB when it did occur. In the past, staff had approached Lorraine and stopped her from engaging in SIB when it occurred. The functional analysis demonstrated that this intervention may have increased the occurrence of SIB, so this practice was discontinued.

Summary

Functions of Behavior

1. Many problem behaviors are learned and maintained via positive, negative, and/or automatic reinforcement. In this respect, problem behavior can be said to have a “function” (e.g., to gain access to stimuli or to escape stimuli).

2. The topography, or form, of a behavior often reveals little useful information about the conditions that account for it. Identifying the conditions that account for a behavior (its function) suggests what conditions need to be altered to change the behavior. Assessment of the function of a behavior can therefore yield useful information with respect to intervention strategies that are likely to be effective.

Role of Functional Behavior Assessment in Intervention and Prevention

3. FBA can lead to effective interventions in at least three ways: (a) it can identify antecedent variables that can be altered to prevent problem behavior, (b) it can identify reinforcement contingencies that can be altered so that problem behavior no longer receives reinforcement, and (c) it can help identify reinforcers for alternative replacement behaviors.

4. FBA can decrease reliance on default technologies (increasingly intrusive, coercive, and punishment-based interventions) and contribute to more effective interventions. When FBAs are conducted, reinforcement-based interventions are more likely to be implemented than are interventions that include a punishment component.

Overview of FBA Methods

5. FBA methods can be classified into three types: (a) functional (experimental) analysis, (b) descriptive assessment, and (c) indirect assessment. The methods can be ordered on a continuum with respect to considerations such as ease of use and the type and precision of information they yield.

6. Functional analysis involves systematically manipulating environmental events thought to maintain problem behavior within an experimental design. The primary advantage of functional analysis is its ability to yield a clear demonstration of the variable(s) that relate to the occurrence of a problem behavior. However, this assessment method requires a certain amount of expertise to implement and interpret.

7. Descriptive assessment involves observation of the problem behavior in relation to events that are not arranged in a systematic manner and includes ABC recording (both continuous and narrative) and scatterplots. The primary advantages to these assessment methodologies are that they are easier to do than functional analyses and they represent contingencies that occur within the individual's natural routine. Caution must be exercised when interpreting information from descriptive assessments, however, because they can be biased and unreliable, and they provide correlations (as opposed to functional relations).

8. Indirect functional assessment methods use structured interviews, checklists, rating scales, or questionnaires to obtain information from persons who are familiar with the individual exhibiting the problem behavior (e.g., teachers, parents, caregivers, and/or the individual him- or herself) to identify possible conditions or events in the natural environment that correlate with the problem behavior. Again, these forms of FBA are easy to conduct, but they are limited in their accuracy. As such, they are probably best reserved for hypothesis formulation. Further assessment of these hypotheses is often necessary.

Conducting a Functional Behavior Assessment

9. Given the strengths and limitations of the different FBA procedures, FBA can best be viewed as a four-step process:

a. Gather information via indirect and descriptive assessment.

b. Interpret information from indirect and descriptive assessment and formulate hypotheses about the purpose of problem behavior.

c. Test hypotheses using functional analysis.

d. Develop intervention options based on the function of problem behavior.

10. A brief functional analysis can be used to test hypotheses when time is very limited.

11. When teaching an alternative behavior as a replacement for problem behavior, the replacement behavior should be functionally equivalent to the problem behavior (i.e., it should be reinforced using the same reinforcers that previously maintained the problem behavior).

Case Examples Illustrating the FBA Process

12. A person can display one problem behavior for more than one reason, as shown in Brian's case example. In such cases, intervention may need to consist of multiple components to address each function of problem behavior.

13. Functional analyses can be tailored to test specific and/or idiosyncratic hypotheses, as shown in Kaitlyn's example.

14. Undifferentiated problem behavior during a functional analysis may indicate an automatic reinforcement function and warrants further evaluation, as shown in DeShawn's case example. In such cases, alternative reinforcers can sometimes be identified and used effectively in an intervention to decrease problem behavior and improve adaptive responding.

15. Sometimes a person displays multiple topographies of problem behavior (e.g., self-injury and aggression), with each topography serving a different function, as shown in Lorraine's example. In such cases, a different intervention for each topography of problem behavior is warranted.


Verbal Behavior

Key Terms

audience
autoclitic
automatic punishment
automatic reinforcement
convergent multiple control
copying a text
divergent multiple control
echoic
formal similarity
generic (tact) extension
impure tact
intraverbal
listener
mand
metaphorical (tact) extension
metonymical (tact) extension
multiple control
point-to-point correspondence
private events
solistic (tact) extension
speaker
tact
textual
transcription
verbal behavior
verbal operant

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List, © Third Edition

Content Area 3: Principles, Processes and Concepts.

3-15 Define and provide examples of echoics and imitation.

3-16 Define and provide examples of mands.

3-17 Define and provide examples of tacts.

3-18 Define and provide examples of intraverbals.

Content Area 9: Behavior Change Procedures

9-25 Use language acquisition programs that employ Skinner's analysis of verbal behavior (i.e., echoics, mands, tacts, intraverbals).

9-26 Use language acquisition and communication training procedures.

© 2006 The Behavior Analyst Certification Board, Inc.,® (BACB®) all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

This chapter was written by Mark L. Sundberg.

From Chapter 25 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


Why should applied behavior analysts be concerned with verbal behavior? A review of the definition of applied behavior analysis can provide an answer to this question.

Applied behavior analysis is the science in which tactics derived from the principles of behavior are applied to improve socially significant behavior and experimentation is used to identify the variables responsible for behavior change (p. 20).

Note the point to improve socially significant behavior. The most socially significant aspects of human behavior involve verbal behavior. Language acquisition, social interaction, academics, intelligence, understanding, thinking, problem solving, knowledge, perception, history, science, politics, and religion are all directly relevant to verbal behavior. In addition, many human problems, such as autism, learning disabilities, illiteracy, antisocial behavior, marital conflicts, aggression, and wars, involve verbal behavior. In short, verbal behavior plays a central role in most of the major aspects of a person’s life, and in the laws, conventions, archives, and activities of a society. These topics are the main subject matter of most introductory psychology textbooks, and they are socially significant behaviors that applied behavior analysts need to address. However, the verbal analysis of these topics has just begun, and a substantial amount of work has yet to be accomplished.

Verbal Behavior and Properties of Language

Form and Function of Language

It is important in the study of language to distinguish between the formal and functional properties of language (Skinner, 1957). The formal properties involve the topography (i.e., form, structure) of the verbal response, whereas the functional properties involve the causes of the response. A complete account of language must consider both of these elements.

The field of structural linguistics specializes in the formal description of language. The topography of what is said can be measured by (a) phonemes: the individual speech sounds that comprise a word; (b) morphemes: the units with an individual piece of meaning; (c) lexicon: the total collection of words that make up a given language; (d) syntax: the organization of words, phrases, or clauses in sentences; (e) grammar: the adherence to established conventions of a given language; and (f) semantics: what words mean (Barry, 1998; Owens, 2001).

The formal description of a language can be accomplished also by classifying words as nouns, verbs, prepositions, adjectives, adverbs, pronouns, conjunctions, and articles. Other aspects of a formal description of language include prepositional phrases, clauses, modifiers, gerunds, tense markers, particles, and predicates. Sentences then are made up of the syntactical arrangement of the lexical categories of speech with adherence to the grammatical conventions of a given verbal community. The formal properties of language also include articulation, prosody, intonation, pitch, and emphasis (Barry, 1998).

¹For more detail see Mabry (1994, 1995) and Novak (1994).

Language can be formally classified without the presence of a speaker or any knowledge about why the speaker said what he did. Sentences can be analyzed as grammatical or ungrammatical from a text or from a tape recorder. For example, incorrect use of word tense can be identified easily from a recording of a child saying, “Juice all goned.”

A common misconception about Skinner’s analysis of verbal behavior is that he rejected the formal classifications of language. However, he did not find fault with classifications or descriptions of the response, but rather with the failure to account for the “causes” or functions of the classifications. The analysis of how and why one says words is typically relegated to the field of psychology combined with linguistics; hence the field of psycholinguistics.

Theories of Language

A wide variety of theories of language attempt to identify the causes of language. These theories can be classified into three separate, but often overlapping, views: biological, cognitive, and environmental. The basic orientation of the biological theory is that language is a function of physiological processes and functions. Chomsky (1965), for example, maintained that language is innate to humans.¹ That is, a human’s language abilities are inherited and present at birth.

Perhaps the most widely accepted views of the causes of language are those derived from cognitive psychology (e.g., Bloom, 1970; Piaget, 1952). Proponents of the cognitive approach to language propose that language is controlled by internal processing systems that accept, classify, code, encode, and store verbal information. Spoken and written language is considered to be the structure of thought. Distinguishing between the biological and cognitive views is often difficult; many are mixed (e.g., Pinker, 1994) and invoke cognitive metaphors such as storage and processing as explanations of language behaviors, or interchange the words brain and mind (e.g., Chomsky, 1965).


Development of Verbal Behavior

Skinner began working on a behavioral analysis of language in 1934 as a result of a challenge from Alfred North Whitehead,² which Whitehead made when he was seated next to Skinner at a dinner at the Harvard Society of Fellows. Skinner (1957) described the interaction as follows:

We dropped into a discussion of behaviorism which was then still very much an “ism” and of which I was a zealous devotee. Here was an opportunity which I could not overlook to strike a blow for the cause . . . . Whitehead . . . agreed that science might be successful in accounting for human behavior provided one made an exception of verbal behavior. Here, he insisted, something else must be at work. He brought the discussion to a close with a friendly challenge: “Let me see you,” he said, “account for my behavior as I sit here saying ‘No black scorpion is falling upon this table.’” The next morning I drew up the outline of the present study. (p. 457)

It took Skinner 23 years to fill in the details of his outline, which he published in his book Verbal Behavior (1957). The end result was so significant to Skinner (1978) that he believed Verbal Behavior would prove to be his most important work. However, Skinner’s use of the phrase prove to be 20 years after the book was published indicated that his analysis of verbal behavior had not yet had the impact that he thought it would.

There are several reasons for the slow appreciation of Verbal Behavior. Soon after the book was published, it was met with immediate challenges from the field of linguistics and the emerging field of psycholinguistics. Most notable was a review by Noam Chomsky (1959), a young linguist from MIT who had published his own account of language (Chomsky, 1957) the same year Verbal Behavior was published. Chomsky maintained that Skinner’s analysis was devoid of any value. Chomsky criticized every aspect of the analysis, but more so, he criticized the philosophy of behaviorism in general. However, a reading of Chomsky’s review will reveal to those who comprehend Verbal Behavior that Chomsky, like many scholars, gravely misunderstood Skinner’s radical behaviorism, which provided the philosophical and epistemological foundations for Verbal Behavior (Catania, 1972; MacCorquodale, 1970).

Skinner never responded to Chomsky’s review, and many felt this lack of response was responsible for the widely held conclusion that Chomsky’s review was unanswerable and that Chomsky made valid criticisms. MacCorquodale (1970) pointed out that the reason no one challenged Chomsky’s review was the condescending tone of the review, in addition to the clear misunderstandings of Skinner’s behaviorism.

²Whitehead was perhaps the most prominent philosopher of the time, known best for his landmark three-volume set coauthored with Bertrand Russell titled Principia Mathematica (1910, 1912, 1913).

Skinner was not at all surprised by this reaction from linguists because of their emphasis on the structure of language rather than its function. More recently, however, a favorable review of Skinner’s book from within the field of linguistics was published, recognizing that Skinner has changed the history of linguistics (Andresen, 1991).

Although Skinner anticipated criticism from outside the field of behavior analysis, he probably did not expect the general disinterest and often outspoken negative reaction to Verbal Behavior from within the field. A number of behaviorists have examined this issue and have collectively provided a list of reasons behavior analysts did not immediately embrace Verbal Behavior (e.g., Eshleman, 1991; Michael, 1984; E. Vargas, 1986). Perhaps most troublesome to the behavior analysts of the time was that Verbal Behavior was speculative and did not contain experimental data (Salzinger, 1978).

The lack of research on verbal behavior continued to concern behavior analysts well into the 1980s (e.g., McPherson, Bonem, Green, & Osborne, 1984). However, this situation now appears to be changing, and a number of advances in research and applications directly relate to Verbal Behavior (Eshleman, 2004; Sundberg, 1991, 1998). Many of these advances are published in the journal The Analysis of Verbal Behavior.

Defining Verbal Behavior

Skinner (1957) proposed that language is learned behavior, and that it is acquired, extended, and maintained by the same types of environmental variables and principles that control nonlanguage behavior (e.g., stimulus control, motivating operations, reinforcement, extinction). He defined verbal behavior as behavior that is reinforced through the mediation of another person’s behavior. For example, the verbal response “Open the door” can produce the reinforcer of an open door mediated through the behavior of a listener. This reinforcer is indirectly obtained, but is the same reinforcer that could be obtained nonverbally by opening the door.

Skinner defined verbal behavior by the function of the response, rather than by its form. Thus, any response form can become verbal based on Skinner’s functional definition. For example, the early differential crying of a 2-month-old infant may be verbal, as would other responses such as pointing, clapping for attention, gestures such as waving one’s arm for attention, writing, or typing. In other words, verbal behavior involves a social interaction between a speaker and a listener.

Speaker and Listener

The definition of verbal behavior makes a clear distinction between the behavior of the speaker and that of the listener. Verbal behavior involves social interactions between speakers and listeners, whereby speakers gain access to reinforcement and control their environment through the behavior of listeners. In contrast with most approaches to language, Skinner’s verbal behavior is primarily concerned with the behavior of the speaker. He avoided terms such as expressive language and receptive language because of the implication that these are merely different manifestations of the same underlying cognitive processes.

The listener must learn how to reinforce the speaker’s verbal behavior, meaning that listeners are taught to respond to words and interact with speakers. It is important to teach a child to react appropriately to the verbal stimuli provided by speakers, and to behave verbally as a speaker. These are different functions, however. In some cases learning one type of behavior (i.e., speaker or listener) facilitates learning the other, but this must also be understood in terms of motivating operations, antecedent stimuli, responses, and consequences rather than in terms of learning the meanings of words as a listener and then using the words in various ways as a speaker.

Verbal Behavior: A Technical Term

In searching for what to call the subject matter of his analysis of language, Skinner wanted a term that (a) emphasized the individual speaker, (b) referred to behavior that was selected and maintained by consequences, and (c) was relatively unfamiliar in the professions of speech and language. He selected the term verbal behavior. However, in recent years verbal behavior has acquired a new meaning, independent from Skinner’s usage. In the field of speech pathology, verbal behavior has become synonymous with vocal behavior. Also, in psychology, the term nonverbal communication, which became popular in the 1970s, was contrasted with the term verbal behavior, implying that verbal behavior was vocal communication and nonverbal behavior was nonvocal communication. The term verbal has also been contrasted with quantitative, as in the GRE and SAT tests for college admissions. This distinction suggests that mathematical behavior is not verbal. However, according to Skinner’s definition, much of mathematical behavior is verbal behavior. Noting that verbal behavior includes vocal-verbal behavior and nonvocal-verbal behavior is sometimes confusing for those learning to use Skinner’s analysis.

Unit of Analysis

The unit of analysis of verbal behavior is the functional relation between a type of responding and the same independent variables that control nonverbal behavior, namely (a) motivating variables, (b) discriminative stimuli, and (c) consequences. Skinner (1957) referred to this unit as a verbal operant, with operant implying a type or class of behavior as distinct from a particular response instance; and he referred to a set of such units of a particular person as a verbal repertoire. The verbal repertoire can be contrasted with the units in linguistics, which consist of words, phrases, sentences, and the mean length of utterances.

Elementary Verbal Operants

Skinner (1957) identified six elementary verbal operants: mand, tact, echoic, intraverbal, textual, and transcription. He also included audience relation and copying a text as separate relations, but in this discussion the audience (or the listener) will be treated independently and copying a text will be considered a type of echoic behavior. Table 1 presents plain English descriptions of


Table 1 Plain English Definitions of Skinner’s Six Elementary Verbal Operants

Mand: Asking for reinforcers that you want. Saying shoe because you want a shoe.

Tact: Naming or identifying objects, actions, events, etc. Saying shoe because you see a shoe.

Echoic: Repeating what is heard. Saying shoe after someone else says shoe.

Intraverbal: Answering questions or having conversations in which your words are controlled by other words. Saying shoe when someone else says, What do you wear on your feet?

Textual: Reading written words. Saying shoe because you see the written word shoe.

Transcription: Writing and spelling words spoken to you. Writing shoe because you hear shoe spoken.


these terms. Technical definitions and examples of each elementary verbal operant are provided in the following sections.

Mand

The mand is a type of verbal operant in which a speaker asks for (or states, demands, implies, etc.) what he needs or wants. For example, the behavior of asking for directions when lost is a mand. Skinner (1957) selected the term mand for this type of verbal relation because the term is conveniently brief and is similar to the plain English words command, demand, and countermand. The mand is a verbal operant for which the form of the response is under the functional control of motivating operations (MOs) and specific reinforcement (see Table 2). For example, food deprivation will (a) make food effective as reinforcement and (b) evoke behavior such as the mand “cookie” if this behavior has produced cookies in the past.

The specific reinforcement that strengthens a mand is directly related to the relevant MO. For example, if there is an MO for physical contact with one’s mother, the specific reinforcement that is established is physical contact. The response form may occur in several topographical variations such as crying, pushing a sibling, reaching up, and saying “hug.” All of these behaviors could be mands for physical contact if functional relations exist among the MO, the response, and the specific reinforcement history. However, the response form alone is insufficient for the classification of a mand, or any other verbal operant. For example, crying could also be a respondent behavior if it were elicited by a conditioned or unconditioned stimulus.

Mands are very important for the early development of language and for the day-to-day verbal interactions of children and adults. Mands are the first verbal operant acquired by a human child (Bijou & Baer, 1965; Novak, 1996). These early mands usually occur in the form of differential crying when a child is hungry, tired, in pain, cold, or afraid; or wants toys, attention, help, movement of objects and people, directions, or the removal of aversive stimuli. Typically developing children soon learn to replace crying with words, signs, or other standard forms of communication. Manding not only lets children control the delivery of reinforcers, but it begins to establish the speaker and listener roles that are essential for further verbal development.

Skinner (1957) pointed out that the mand is the only type of verbal behavior that directly benefits the speaker, meaning that the mand gets the speaker reinforcers such as edibles, toys, attention, or the removal of aversive stimuli. As a result, mands often become strong forms of verbal behavior because of specific reinforcement, and this reinforcement often satisfies an immediate deprivation condition or removes some aversive stimulus. For example, young children often engage in a very high rate of manding because of its effects on listeners. In addition, much of the problem behavior of children who have weak, delayed, or defective verbal repertoires may be mands (e.g., Carr & Durand, 1985). Eventually, a child learns to mand for verbal information with who, what, and where questions, and the acquisition of new verbal behavior accelerates rapidly (Brown, Cazden, & Bellugi, 1969). Ultimately, mands become quite complex and play a critical role in social interaction, conversation, academic behavior, employment, and virtually every aspect of human behavior.

Tact

The tact is a type of verbal operant in which a speaker names things and actions that the speaker has direct contact with through any of the sense modes. For example, a child saying “car” because he sees a car is a tact. Skinner (1957) selected the term tact because it suggests making contact with the physical environment. The tact is a verbal operant under the functional control of a nonverbal discriminative stimulus, and it produces generalized conditioned reinforcement (see Table 2). A nonverbal stimulus becomes a discriminative stimulus (SD) with the process of discrimination training. For example, a shoe may not function as an SD for the verbal response “shoe” until after saying “shoe” in the presence of a shoe produces differential reinforcement.

A wide variety of nonverbal stimuli evoke tact relations. For example, a cake produces nonverbal visual, tactile, olfactory, and gustatory stimuli, any or all of which can become SDs for the tact “cake.” Nonverbal stimuli can be, for example, static (nouns), transitory (verbs), relations between objects (prepositions), properties of objects (adjectives), or properties of actions (adverbs); that is, nonverbal stimuli can be as simple as a shoe, or as complex as a cancerous cell. A stimulus configuration may have multiple nonverbal properties, and a response may be under the control of those multiple properties, as in the tact “The red truck is on the little table.” Nonverbal stimuli may be observable or unobservable (e.g., pain), subtle or salient (e.g., neon lights), relational to other nonverbal stimuli (e.g., size), and so on. Given the variation and ubiquity of nonverbal stimuli, it is no surprise that the tact is a primary topic in the study of language.


Echoic

The echoic is a type of verbal operant that occurs when a speaker repeats the verbal behavior of another speaker. For example, a child saying “cookie” after hearing the word spoken by her mother is echoic. Repeating the words, phrases, and vocal behavior of others, which is common in day-to-day discourse, is also echoic. The echoic operant is controlled by a verbal discriminative stimulus that has point-to-point correspondence and formal similarity with the response (Michael, 1982) (see Table 2).

Point-to-point correspondence between the stimulus and the response or response product occurs when the beginning, middle, and end of the verbal stimulus matches the beginning, middle, and end of the response. Formal similarity occurs when the controlling antecedent stimulus and the response or response product (a) share the same sense mode (e.g., both stimulus and response are visual, auditory, or tactile) and (b) physically resemble each other (Michael, 1982). In the echoic relation the stimulus is auditory and the response produces an auditory product (echoing what one hears), and the stimulus and the response physically resemble each other.
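The two properties just defined can be sketched as simple predicates over a stimulus and a response, each tagged with its form and sense mode. The `VerbalEvent` class, its attribute names, and the whole-string comparison standing in for "physical resemblance" are illustrative assumptions for this sketch, not part of Michael's (1982) analysis:

```python
from dataclasses import dataclass

@dataclass
class VerbalEvent:
    form: str  # the word itself, e.g., "book"
    mode: str  # sense mode: "auditory", "visual", or "tactile"

def point_to_point(stimulus: VerbalEvent, response: VerbalEvent) -> bool:
    # Beginning, middle, and end of the stimulus match those of the
    # response; comparing whole forms is a simplification of that idea.
    return stimulus.form == response.form

def formal_similarity(stimulus: VerbalEvent, response: VerbalEvent) -> bool:
    # Same sense mode AND physical resemblance.
    return stimulus.mode == response.mode and stimulus.form == response.form

# Echoic: hear "book", say "book" -- both properties hold.
heard = VerbalEvent("book", "auditory")
spoken = VerbalEvent("book", "auditory")
assert point_to_point(heard, spoken) and formal_similarity(heard, spoken)

# Textual: see the written word "book", say "book" -- point-to-point
# correspondence holds, but formal similarity does not.
seen = VerbalEvent("book", "visual")
assert point_to_point(seen, spoken) and not formal_similarity(seen, spoken)
```

The same two predicates also separate transcription (a spoken stimulus with a written response: correspondence without similarity) from the intraverbal, where neither property holds.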

Echoic behavior produces generalized conditioned reinforcement such as praise and attention. The ability to echo the phonemes and words of others is essential for learning to identify objects and actions. A parent might say, “That’s a bear, can you say bear?” If the child can respond “bear,” then the parent says “Right!” Eventually, the child learns to name a bear without the echoic prompt. This often occurs in a few trials. For example, if a child can say “bear” (or a reasonable approximation) after a parent says “bear,” then it becomes possible to teach the child to say “bear” in the presence of a picture of a bear or a bear at the zoo. The echoic repertoire is very important for teaching language to children with language delays, and it serves a critical role in the process of teaching more complex verbal skills (e.g., Lovaas, 1977; Sundberg & Partington, 1998).

Motor imitation can have the same verbal properties as echoic behavior, as demonstrated by its role in the acquisition of sign language by children who are deaf. For example, a child may learn to imitate the sign for cookie first, and then mand for cookie without an imitative prompt. Imitation is also critical for teaching sign language to hearing children who are nonvocal. For the many children who do not have an adequate echoic repertoire for vocal language instruction, time is spent teaching echoic behavior rather than more useful types of verbal behavior. A strong imitative repertoire permits a teacher to use sign language immediately to instruct more advanced forms of language (e.g., mands, tacts, and intraverbals). This allows a child to learn quickly to communicate with others without using inappropriate behavior (e.g., a tantrum) to get what he wants.

Skinner also presented copying a text as a type of verbal behavior in which a written verbal stimulus has point-to-point correspondence and formal similarity with a written verbal response. Because this relation has the same defining features as echoic behavior and as imitation as it relates to sign language, the three will be treated as one category, echoic.

Intraverbal

The intraverbal is a type of verbal operant in which a speaker differentially responds to the verbal behavior of others. For example, saying “the Buckeyes” as a result of hearing someone else say “Who won the game Saturday?” is intraverbal behavior. Typically developing children emit a high frequency of intraverbal responses in the form of singing songs, telling stories, describing activities, and explaining problems. Intraverbal responses are also important components of many normal intellectual repertoires, such as saying “Sacramento” as a result of hearing “What is the capital of California?”; saying “sixty-four” as a result of hearing “eight times eight”; or saying “antecedent, behavior, and consequence” when asked, “What is the three-term contingency?” The intraverbal repertoires of typical adult speakers include hundreds of thousands of such relations.

The intraverbal operant occurs when a verbal discriminative stimulus evokes a verbal response that does not have point-to-point correspondence with the verbal stimulus (Skinner, 1957). That is, the verbal stimulus and the verbal response do not match each other, as they do in the echoic and textual relations. Like all verbal operants except the mand, the intraverbal produces generalized conditioned reinforcement. For example, in the educational context, the reinforcement for correct answers usually involves some form of generalized conditioned reinforcement such as “Right!” or points, or the opportunity to move to the next problem or item (see Table 2).

An intraverbal repertoire facilitates the acquisition of other verbal and nonverbal behavior. Intraverbal behavior prepares a speaker to respond rapidly and accurately with respect to further stimulation, and plays an important role in continuing a conversation. For example, a child hears an adult speaker say “farm” in some context. If the stimulus farm evokes several relevant intraverbal responses, such as “barn,” “cow,” “rooster,” or “horse,” then the child is better able to react to other parts of the adult’s verbal behavior that may be related to a recent trip to a farm. One might say that the child is now thinking about farms and has relevant verbal responses at strength for further responses to the adult’s verbal behavior. An intraverbal stimulus probes the listener’s repertoire and gets it ready for further stimulation. Collectively, mands, tacts, and intraverbals contribute to a conversation in the following ways: (a) A mand repertoire allows a speaker to ask questions, (b) a tact repertoire permits verbal behavior about an object or event that is actually present, and (c) an intraverbal repertoire allows a speaker to answer questions and to talk about (and think about) objects and events that are not physically present.

Textual

Textual behavior (Skinner, 1957) is reading, without any implication that the reader understands what is being read. Understanding what is read usually involves other verbal and nonverbal operants such as intraverbal behavior and receptive language (e.g., following instructions, compliance). For example, saying “shoe” upon seeing the written word shoe is textual behavior. Understanding that shoes go on a person’s feet is not textual; understanding is typically identified as reading comprehension. Skinner chose the term textual because the term reading refers to many processes at the same time.

The textual operant has point-to-point correspondence, but not formal similarity, between the stimulus and the response product. For example, (a) the verbal stimuli are visual or tactual (i.e., in one modality) and the response is auditory (i.e., another modality), and (b) the auditory response matches the visual or tactual stimuli. Table 2 presents a diagram of the textual relation.

Textuals and echoics are similar in three respects: (a) They both produce generalized conditioned reinforcement, (b) both are controlled by antecedent verbal stimuli, and (c) there is point-to-point correspondence between the antecedent stimulus and the response. The important difference between textuals and echoics is that the response product of textual behavior (e.g., the spoken word) is not similar to its controlling stimulus (e.g., the written word evokes a spoken response or auditory response product). The textual operant does not have formal similarity, meaning that the SDs are not in the same sense mode and do not physically resemble the textual response. Words are visual and composed of individual letters, whereas the reading response produces an auditory response product (which often is covert) composed of phonemes. The echoic response product, however, does have formal similarity with its controlling verbal stimulus.

Transcription

Transcription consists of writing and spelling words that are spoken (Skinner, 1957). Skinner also referred to this behavior as taking dictation, with the key repertoires involving not only the manual production of letters, but also accurate spelling of the spoken word. In technical terms, transcription is a type of verbal behavior in which a spoken verbal stimulus controls a written, typed, or finger-spelled response. Like the textual operant, there is point-to-point correspondence between the stimulus and the response product, but no formal similarity (see Table 2). For example, when asked to spell the spoken word “hat,” a response h-a-t is a transcription. The stimulus and the response product have point-to-point correspondence, but they are not in the same sense mode and do not physically resemble each other. Spelling English words

Table 2 Antecedent and Consequent Controlling Variables for Six Elementary Verbal Operants

Antecedent Variables | Response | Consequence

Motivating operations (4 hours without water) | Mand (“water, please”) | Specific reinforcement (glass of water)

Nonverbal stimulus (see toy truck) | Tact (“truck”) | Generalized conditioned reinforcement (GCSR) (praise and approval)

Verbal stimulus with point-to-point correspondence and formal similarity (hear “book”) | Echoic (say “book”) | GCSR

Verbal stimulus without point-to-point correspondence or formal similarity (hear “cats and . . .”) | Intraverbal (say “dogs”) | GCSR

Verbal stimulus with point-to-point correspondence, without formal similarity (see apple written) | Textual (say “apple”) | GCSR

Verbal stimulus with point-to-point correspondence, without formal similarity (hear “apple”) | Transcription (write apple) | GCSR
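Read row by row, Table 2 amounts to a lookup from each elementary operant to its controlling antecedent and its consequence. The dictionary below is an illustrative encoding of that structure, not something given in the text; GCSR abbreviates generalized conditioned reinforcement, as in the table:

```python
# Each operant maps to (antecedent variable, consequence), per Table 2.
VERBAL_OPERANTS = {
    "mand": ("motivating operation", "specific reinforcement"),
    "tact": ("nonverbal stimulus", "GCSR"),
    "echoic": ("verbal SD: point-to-point correspondence + formal similarity", "GCSR"),
    "intraverbal": ("verbal SD: no point-to-point correspondence", "GCSR"),
    "textual": ("verbal SD: point-to-point correspondence, no formal similarity", "GCSR"),
    "transcription": ("verbal SD: point-to-point correspondence, no formal similarity", "GCSR"),
}

# The table's key asymmetry: only the mand produces specific rather
# than generalized conditioned reinforcement.
specific = [name for name, (_, outcome) in VERBAL_OPERANTS.items()
            if outcome != "GCSR"]
assert specific == ["mand"]
```

Note that textual and transcription share an antecedent description here; as the surrounding sections explain, they differ in the modalities involved (written stimulus with spoken response versus spoken stimulus with written response).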


is a difficult repertoire to acquire. Because many words in the English language are not spelled the way they sound, shaping an appropriate discriminative repertoire is often difficult.

Role of the Listener

Skinner’s analysis of verbal behavior focuses on the speaker, whereas most linguistic and psycholinguistic accounts of language emphasize the listener. Skinner suggested that the listener’s role is less significant than typically assumed because much of what is often described as listener behavior (e.g., thinking, understanding) is more correctly classified as speaker behavior. It is just that often the speaker and listener reside within the same skin (as we will see in the following discussion).

What role, then, does the listener play in Skinner’s account of language? In his analysis of listener behavior, Skinner pointed out that a verbal episode requires a speaker and a listener. The listener not only plays a critical role as a mediator of reinforcement for the speaker’s behavior, but also becomes a discriminative stimulus for the speaker’s behavior. In functioning as a discriminative stimulus, the listener is an audience for verbal behavior. “An audience, then, is a discriminative stimulus in the presence of which verbal behavior is characteristically reinforced and in the presence of which, therefore, it is characteristically strong” (Skinner, 1957, p. 172). When Skinner (1978) wrote “very little of the behavior of the listener is worth distinguishing as verbal” (p. 122), he was referring to when the listener serves as a discriminative stimulus in the role of an audience.

A listener functions in additional roles, other than as a mediator of reinforcement and a discriminative stimulus. For example, verbal behavior functions as discriminative stimuli (i.e., stimulus control) when a speaker talks to a listener. The question is, What are the effects of verbal behavior on listener behavior? A verbal discriminative stimulus may evoke echoic, textual, transcription, or intraverbal operants of a listener. The listener becomes a speaker when this occurs. This is Skinner’s point: The speaker and listener can and often do reside within the same skin, meaning that a listener behaves simultaneously as a speaker. The most significant and complex responses to verbal stimuli occur when they evoke covert intraverbal behavior from a listener who becomes a speaker and functions as her own audience. For example, a speaker’s verbal discriminative stimuli related to Pavlov’s work on respondent conditioning, such as, “What was Pavlov’s technique?” may evoke a listener’s covert intraverbal behavior such as thinking, “He paired the sound of a metronome with meat powder.”³

³In three of his books, Skinner devoted a full chapter to the topic of thinking: Science and Human Behavior (1953, Chapter 16), Verbal Behavior (1957, Chapter 19), and About Behaviorism (1974, Chapter 7), with several sections dedicated to the topic of understanding (e.g., Verbal Behavior, pp. 277–280; About Behaviorism, pp. 141–142). A behavior analysis of thinking and understanding involves, in large part, situations in which both listener and speaker reside within the same skin.

Verbal stimulus control may also evoke a listener's nonverbal behavior. For example, when someone says "Shut the door," the behavior of shutting a door is nonverbal, but shutting the door is evoked by verbal stimuli. Skinner (1957) identified this type of listener behavior as understanding. "The listener can be said to understand a speaker if he simply behaves in an appropriate fashion" (p. 277).

Verbal stimuli can become quite complex because separating the verbal and nonverbal behaviors of the listener is difficult (Parrott, 1984; Schoenberger, 1990, 1991). For example, in following a directive to buy a certain type and style of pipe fitting at the hardware store, success will involve both nonverbal behavior such as discriminating among pipe fittings and verbal behavior such as self-echoic prompts (e.g., "I need a three-quarter-inch fitting, three-quarter-inch"), tacts of the fittings (e.g., "This looks like three-quarter-inch"), and mands for information (e.g., "Can you tell me if this will fit a three-quarter-inch pipe?").

Identifying Verbal Operants

The same word (i.e., topography or form of the behavior) can appear in the definitions of all of the elementary verbal operants because controlling variables define verbal operants, not the form of the verbal stimuli. Verbal behavior is not classified or defined by its topography or form (i.e., by the words themselves). The classification of verbal operants can be accomplished by asking a series of questions regarding the relevant controlling variables that evoke a specific response form (see Figure 1). A sample verbal behavior classification exercise is presented in Table 3.

1. Does an MO control the response form? If yes, then the operant is at least part mand.

2. Does an SD control the response form? If yes, then:

3. Is the SD nonverbal? If yes, then the operant is at least part tact.

4. Is the SD verbal? If yes, then:


5. Is there point-to-point correspondence between the verbal SD and the response? If not, then the operant is at least part intraverbal. If there is point-to-point correspondence, then:

6. Is there formal similarity between the verbal SD and the response? If yes, then the operant must be echoic, imitative, or copying a text. If not, then the operant must be textual or transcription.
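The six questions above form a simple decision procedure. As a rough sketch only (the function name and boolean inputs are hypothetical illustrations, not from the text, and the chart really identifies what an operant is "at least part" of), the procedure can be expressed as follows:

```python
def classify_verbal_operant(mo_controls: bool,
                            sd_controls: bool,
                            sd_is_verbal: bool = False,
                            point_to_point: bool = False,
                            formal_similarity: bool = False) -> str:
    """Suggest the elementary verbal operant indicated by the controlling
    variables. Real responses are often multiply controlled, so each answer
    means the response is 'at least part' that operant."""
    if mo_controls:
        return "mand"                 # Question 1: an MO controls the response
    if not sd_controls:
        return "nonverbal"            # no discriminative stimulus identified
    if not sd_is_verbal:
        return "tact"                 # Question 3: the controlling SD is nonverbal
    if not point_to_point:
        return "intraverbal"          # Question 5: no point-to-point correspondence
    if formal_similarity:
        # Question 6: same form as the verbal SD (e.g., hearing "book," saying "book")
        return "echoic/imitation/copying a text"
    return "textual/transcription"    # correspondence without formal similarity

# Exercise 4 from Table 3: hearing "How are you?" evokes "I'm fine" --
# a verbal SD with no point-to-point correspondence to the response.
print(classify_verbal_operant(mo_controls=False, sd_controls=True,
                              sd_is_verbal=True))   # intraverbal
```

The boolean arguments mirror the questions one-to-one, so each row of Table 3 can be worked through by answering the same questions in order.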

Analyzing Complex Verbal Behavior

Analysis of more complex verbal behavior includes automatic reinforcement, tact extensions (generalization), and private events. These topics are presented in the following sections.

Automatic Reinforcement

A common misconception about reinforcement in the context of verbal behavior is that it occurs only when the listener mediates the reinforcer. When behavior occurs without the apparent delivery of reinforcement, it is often assumed that higher mental processes are at work (e.g., Brown, 1973; Neisser, 1976). Intermittent reinforcement can explain some behavior that occurs in the absence of an observed consequence, but not all such behavior. Some behavior is strengthened or weakened, not by external consequences, but by its response products, which have reinforcing or punishing effects. Skinner used the terms automatic reinforcement and automatic punishment in a number of his writings simply to indicate that an effective consequence can occur without someone providing it (cf. Vaughan & Michael, 1982).

Verbal Behavior

Figure 1 Verbal behavior classification chart.

Controlling variables → Verbal relation
UMO/CMO controls the response? Yes → Mand. No ↓
Nonverbal discriminative stimulus? Yes → Tact. No ↓
Verbal discriminative stimulus with point-to-point correspondence to the response? No → Intraverbal. Yes ↓
Formal similarity? Yes → Echoic, Imitation, or Copy a text. No → Textual or Transcription.


Table 3 Verbal Behavior Classification Exercises

As a result of . . . One has a tendency to . . . This is a(n) . . .

1. seeing a dog say “dog” __________

2. hearing an airplane say “airplane” __________

3. wanting a drink say “water” __________

4. hearing “How are you?” say “I’m fine” __________

5. smelling cookies baking say “cookies” __________

6. tasting soup say “pass the salt” __________

7. hearing “book” write book __________

8. hearing “book” sign “book” __________

9. hearing “book” say “book” __________

10. hearing “book” say “read” __________

11. hearing “book” sign “read” __________

12. hearing “book” finger-spell “book” __________

13. seeing a book write book __________

14. wanting a book write book __________

15. signing “book” write book __________

16. hearing “what color is this?” say “red” __________

17. seeing a dog on the table say “get off” __________

18. seeing stop written hit the brakes __________

19. hearing “Skinner” write behavior __________

20. smelling smoke say “fire” __________

21. being hungry go to a store __________

22. seeing apple written sign “apple” __________

23. seeing 5 say “five” __________

24. wanting things say “thanks” __________

25. hearing “write your name” write your name __________

26. hearing “run” finger-spell “run” __________

27. seeing “home” signed sign “Battle Creek” __________

28. hearing a phone ring say “phone” __________

29. smelling a skunk say “skunk” __________

30. hearing “table” say “mesa” __________

31. being happy smile __________

32. hoping a pilot sees it writing SOS __________

33. wanting blue say “blue” __________

34. hearing “Red, white, and” say “blue” __________

35. tasting candy say “mmmm” __________

Provide examples of verbal behavior.

36. Give an example of a mand involving an adjective.

37. Give an example of a tact of a smell.

38. Give an example of a response that is part mand and part tact.

39. Give an example of a response that is part tact and part intraverbal.

40. Give an example of a tact involving multiple responses.

41. Give an example of an intraverbal using writing.

42. Give an example of receptive language using sign language.

Answers to verbal behavior classification exercises in Table 3: 1. T; 2. T; 3. M; 4. IV; 5. T/M; 6. M; 7. TR; 8. IV; 9. E; 10. IV; 11. IV; 12. TR; 13. T; 14. M; 15. IV; 16. IV/T; 17. M; 18. NV; 19. IV; 20. T/M; 21. NV; 22. IV; 23. IV; 24. M; 25. IV; 26. TR; 27. IV; 28. T/M; 29. T; 30. IV; 31. NV; 32. M; 33. M; 34. IV; 35. T; 36. "I want the red one."; 37. "Someone is smoking."; 38. "My throat is dry."; 39. "Macy's," when asked "Where did you buy that?"; 40. "That's a big burger!"; 41. A response to an e-mail; 42. Stopping when someone signs "stop."


Verbal behavior can produce automatic reinforcement, which has a significant role in the acquisition and maintenance of verbal behavior. For example, automatic reinforcement may explain why a typically developing infant engages in extensive babbling without the apparent delivery of reinforcement. Skinner (1957) pointed out that the exploratory vocal behavior of young children could produce automatic reinforcement when those exploratory sounds match the speech sounds of parents, caregivers, and others.

Skinner (1957) described a two-stage conditioning history in establishing vocal responses as automatic reinforcers. First, a neutral verbal stimulus is paired with an existing form of conditioned or unconditioned reinforcement. For example, a mother's voice is paired with conditions such as presenting food and warmth and removing aversive stimuli (e.g., medication on a diaper rash). As a result, the mother's voice, a previously neutral stimulus, becomes a conditioned reinforcer. The mother's voice will now strengthen whatever behavior precedes it. Second, a child's vocal response as either random muscle movement of the vocal cords or reflexive behavior produces an auditory response that on occasion may sound somewhat like the mother's words, intonations, and vocal pitches. Thus, a vocal response can function as reinforcement by automatically increasing the frequency of a child's vocal behavior.4

Automatic reinforcement also plays an important role in the development of more complex aspects of verbal behavior, such as the acquisition of syntax and grammatical conventions. For example, Donahoe and Palmer (1994) and Palmer (1996, 1998) suggested that a child's use of grammar produces automatic reinforcement when it sounds like the grammar used by others in the environment, but is automatically punished when it sounds odd or unusual. Palmer (1996) referred to this as achieving parity.

The stimulus conditions that evoke automatically reinforced behavior may be encountered everywhere because each time a response is automatically reinforced it may alter the evocative effect of any stimulus condition that might be present. For example, a person may persist in singing or humming a theme song from a movie while driving home from the movie because of the two-stage conditioning process described previously. However, the song may be periodically evoked several hours, or even days, after the movie because each time the song is repeated, a new stimulus such as a traffic light, a street corner, or a neon sign might acquire some degree of stimulus control. The next time the person comes in contact with, say, a red light, there could be some tendency to sing or hum the song. This effect might explain what is often termed delayed echolalia observed with children who have autism (Sundberg & Partington, 1998). At present, however, there have been no empirical investigations of the stimulus control involved in automatic consequences, although it certainly seems like an interesting and important area of study.

4Miller and Dollard (1941) were perhaps the first to suggest that a process like automatic reinforcement might be partially responsible for an infant's high rate of babbling. Since then, many others have discussed and researched the role of automatic reinforcement in language acquisition (e.g., Bijou & Baer, 1965; Braine, 1963; Miguel, Carr, & Michael, 2002; Mowrer, 1950; Novak, 1996; Osgood, 1953; Smith, Michael, & Sundberg, 1996; Spradlin, 1966; Staats & Staats, 1963; Sundberg, Michael, Partington, & Sundberg, 1996; Vaughan & Michael, 1982; Yoon & Bennett, 2000).

Tact Extensions

Contingencies that establish stimulus and response classes and generalization allow a variety of novel and different discriminative stimuli to evoke verbal behavior. Skinner (1957) said it this way:

[A] verbal repertoire is not like a passenger list on a ship or plane, in which one name corresponds to one person with no one omitted or named twice. Stimulus control is by no means so precise. If a response is reinforced upon a given occasion or class of occasions, any feature of that occasion or common to that class appears to gain some measure of control. A novel stimulus possessing one such feature may evoke a response. There are several ways in which a novel stimulus may resemble a stimulus previously present when a response was reinforced, and hence there are several types of what we may call "extended tacts" (p. 91).

Skinner (1957) distinguished four types of extended tacts: generic, metaphorical, metonymical, and solistic. The distinction is based on the degree to which a novel stimulus shares the relevant or irrelevant features of the original stimulus.

Generic Extension

In generic extension, the novel stimulus shares all of the relevant or defining features of the original stimulus. For example, a speaker who learns to tact "car" in the presence of a white Pontiac Grand Am emits the tact "car" in the presence of a novel blue Mazda RX-7. A generic tact extension is evoked by simple stimulus generalization.

Metaphorical Extension

In metaphorical extension, the novel stimulus shares some but not all of the relevant features associated with the original stimulus. For example, Romeo was experiencing a beautiful, sunny, warm day, and the exceptional weather elicited respondent behaviors (e.g., good feelings). When Romeo saw Juliet, whose presence elicited respondent behaviors similar to those elicited by the sunny day, he said, "Juliet is like the sun." The sun and Juliet evoked a similar effect on Romeo, controlling the metaphorical tact extension "Juliet is like the sun."

Metonymical Extension

Metonymical extensions are verbal responses to novel stimuli that share none of the relevant features of the original stimulus configuration, but some irrelevant yet related feature has acquired stimulus control. Put simply, in metonymical tact extensions one word substitutes for another, as when a part is used for a whole. As examples: saying "car" when shown a picture of a garage, or saying "the White House requested" in place of "President Lincoln requested."

Solistic Extension

Solistic extensions occur when a stimulus property that is only indirectly related to the tact relation evokes substandard verbal behavior such as malaprops. For instance, using a solistic tact extension, a person may say "You read good" instead of "You read well." Saying "car" when referring to the driver of the car is also a solistic tact extension.

Private Events

In 1945 Skinner first described radical behaviorism, his philosophy. At the core of radical behaviorism is the analysis of private stimuli (see also Skinner, 1953, 1974). Verbal behavior under the control of private stimuli has been a main topic of theoretical and philosophical analyses of behavior ever since. In 1957 Skinner stated, "A small but important part of the universe is enclosed within the skin of each individual. . . . It does not follow that . . . it is in any way unlike the world outside the skin or inside another's skin" (p. 130).

A significant amount of day-to-day verbal behavior is controlled in part by private events. What is commonly referred to as thinking involves overt stimulus control and private events (e.g., covert stimulus control). The analysis of private stimulation and how it acquires stimulus control is complex because of two problems: (a) the participant can directly observe the private stimuli, but the applied behavior analyst cannot (a limiting factor in the prediction and control of behavior), and (b) private stimulus control of verbal episodes in the natural environment will likely remain private, no matter how sensitive instruments become in detecting private stimuli and behaviors. Skinner (1957) identified four ways that caregivers teach young persons to tact their private stimulation: public accompaniment, collateral responses, common properties, and response reduction.

Public Accompaniment

Public accompaniment occurs when an observable stimulus accompanies a private stimulus. For example, a father may observe a child bump his head on a table while chasing a ball. The public stimuli are available to the father, but not the private and more salient painful stimuli experienced by the child. The father can assume that the child is experiencing pain because of his own history of bumping objects, and may say "Ouch," or "You hurt yourself." In this way, the father is using the bump (an observable stimulus) as an opportunity to develop verbal behavior under the stimulus control of a private stimulus. Training may begin with an echoic related to the private event; later, the stimulus control transfers to the private stimuli. Specifically, the child may echo the father's "ouch" while the painful stimuli are present, and quickly (depending on the child's history of echoic-to-tact transfer) the painful stimuli alone evoke the tact "ouch."

Collateral Responses

Caregivers also teach young persons to tact their private stimuli by using collateral responses (i.e., observable behavior) that reliably occur with private stimuli. For example, the father may not observe the child bump his head, but may see the child holding his head and crying. These collateral behaviors inform the father that a painful stimulus is present. The same training procedures used with public accompaniment can be used with collateral responses. Because the painful private stimuli are salient, only one trial may be needed for the private stimuli to acquire stimulus control of the tact relation.

Parents should use public accompaniment and collateral responses during the beginning stages of tact training. However, even after developing a repertoire of tacting private events, a parent or listener will have difficulty confirming the actual presence of the private event, as in "My stomach hurts," or "I have a headache now."

Tacts of private behaviors are also probably acquired through public accompaniment and collateral responses; for example, private stimuli evoke private emotions (behaviors) that we tact with words such as happiness, sadness, fear, and being upset. Learning to tact such private events is difficult if the private stimulation is not present during training. For example, procedures that use pictures of people smiling and frowning (i.e., public stimuli) to teach children to tact emotions will be less effective than procedures that use variables to evoke pleasure or displeasure (i.e., private stimuli) during training.


Common Properties

The two procedures described earlier use public stimuli to establish tacting of private events. Common properties also involve public stimuli, but in a different way. A speaker may learn to tact temporal, geometrical, or descriptive properties of objects and then generalize those tact relations to private stimuli. As Skinner (1957) noted, "[M]ost of the vocabulary of emotion is metaphorical in nature. When we describe internal states as 'agitated,' 'depressed,' or 'ebullient,' certain geometrical, temporal, and intensive properties have produced a metaphorical extension of responses" (p. 132). Much of our verbal behavior regarding emotional events is acquired through this type of stimulus generalization.

Response Reduction

Most speakers learn to tact features of their own bodies such as movements and positions. The kinesthetic stimuli arising from the movements and positions can acquire control over the verbal responses. When movements shrink in size (become covert), the kinesthetic stimuli may remain sufficiently similar to those resulting from the overt movements that the learner's tact occurs as an instance of stimulus generalization. For example, a child can report imagining swimming, or can report self-talk about a planned conversation with someone, or can report thinking of asking for a new toy (Michael & Sundberg, 2003). Responses produced by private covert verbal behavior can evoke other verbal behavior and will be discussed later in further detail.

Multiple Control

All verbal behavior contains multiple functional relations among antecedents, behavior, and consequences. "[A]ny sample of verbal behavior will be a function of many variables operating at the same time" (Skinner, 1957, p. 228). The functional units of mands, tacts, echoics, intraverbals, and textual relations form the foundation of a verbal behavior analysis. A working knowledge of these functional units is essential for understanding the analysis of multiple control and complex verbal behavior.

Convergent Multiple Control

Michael (2003) used the term convergent multiple control to identify when the occurrence of a single verbal response is a function of more than one variable. The task of an applied behavior analyst is to identify the relevant sources that control an instance of verbal behavior. For example, saying "Why did the United States enter World War II?" may be evoked by (a) MOs (making it part mand), (b) verbal discriminative stimuli (making it part echoic, intraverbal, or textual), (c) nonverbal stimuli (making it part tact), or (d) the presence of a specific audience. For example, it's possible that an audience with contempt for war (i.e., the MO) evoked a mand, "Why did the United States enter World War II?" The question may be more a function of this variable than an intraverbal related to the ongoing conversation or the nonverbal or textual stimuli that may be present in the room. On the other hand, the speaker may have no strong MO for the answer, but asks the question because of its relation to MOs for social reinforcement related to political involvement.

Divergent Multiple Control

Multiple control also occurs when a single antecedent variable affects the strength of many responses. For example, a single word (e.g., football) will evoke a variety of intraverbal responses from different people, and from the same person at different times. Michael (2003) used the term divergent multiple control to identify this type of control. Divergent multiple control can also occur with mand and tact relations. A single MO may strengthen a variety of responses, such as food deprivation strengthening the responses "I'm hungry," or "Let's go to a restaurant." A single nonverbal stimulus can also strengthen several response forms, as when a picture of a car strengthens the responses "car," "automobile," or "Ford."

Thematic and Formal Verbal Operants

Skinner (1957) identified thematic and formal verbal operants that function as sources of multiple control. The thematic verbal operants are mands, tacts, and intraverbals; they involve different response topographies controlled by a common variable. For an intraverbal example, the SD "blue" can evoke verbal responses such as "lake," "ocean," and "sky." The formal verbal operants are echoic (imitation, copying a text) and textual (and transcription); they are controlled by a common variable with point-to-point correspondence. For example, the SD "ring" can evoke verbal responses such as "sing," "wing," and "spring."

Multiple Audiences

The role of the audience raises the issue of multiple audiences. Different audiences may evoke different response forms. For example, two applied behavior analysts talking (i.e., a technical speaker and a technical audience) will likely use different response forms than will a behavior analyst speaking with a parent (i.e., a technical speaker and a nontechnical audience). A positive audience has special effects, especially a large positive audience (e.g., as in a rally for a certain cause), as does a negative audience. When the two audiences are combined, the effects of the negative audience are most obvious: "[When a] seditious soapbox orator sees a policeman approaching from a distance, his behavior decreases in strength as the negative audience becomes more important" (Skinner, 1957, p. 231).

Elaborating Multiple Control

Convergent multiple control occurs in most instances of verbal behavior. An audience is always a source of stimulus control related to verbal behavior, even when a speaker serves as his own audience. In addition, it is also the case that more than one of the controlling variables related to the different verbal operants may be relevant to a specific instance of verbal behavior. Convergent control often occurs with MOs and nonverbal stimuli, resulting in a response that is part mand and part tact. For example, saying "You look great" may be partly controlled by the nonverbal stimuli in front of a speaker (a tact), but also by MOs related to wanting to leave soon or wanting to avoid potential aversive events (a mand). Skinner (1957) identified this particular blend of controlling variables as evoking impure tacts (i.e., impure because an MO affects the tact relation).

Verbal and nonverbal stimuli can also share control over a particular response. For example, a tendency to say "green car" may be evoked by the verbal stimulus "What color is the car?" and the nonverbal stimulus of the color green.

Multiple sources of control can be any combination of thematic or formal sources, even multiple sources from within a single verbal operant, such as multiple tacts or multiple intraverbals. Skinner pointed out that because these separate sources may be additive, the "multiple causation produces many interesting verbal effects, including those of verbal play, wit, style, the devices of poetry, formal distortions, slips, and many techniques of verbal thinking" (pp. 228–229). Additional sources of control often reveal themselves, for example, when a speaker in the presence of an obese friend who is wearing a new hat emits the "Freudian slip," "I like that fat on you." Multiple sources of control often provide the basis for verbal humor and listener enjoyment.

Autoclitic Relation

This chapter has emphasized that a speaker can, and often does, function as her own listener. The analysis of how and why a speaker becomes a listener of her own verbal behavior and then manipulates her verbal behavior with additional verbal behavior addresses the topic of the autoclitic relation. Skinner (1957) introduced the term autoclitic to identify when a speaker's own verbal behavior functions as an SD or an MO for additional speaker verbal behavior. In other words, the autoclitic is verbal behavior about a speaker's own verbal behavior. The consequences for this behavior involve differential reinforcement from the ultimate listener, meaning that the listener discriminates whether to serve or not serve as a mediator of reinforcement for those verbal stimuli. A speaker becomes a listener, and observer, of his own verbal behavior and its controlling variables, and then in turn becomes a speaker again. This effect can be very rapid and typically occurs in the emission of a single sentence composed of the two levels of responses.

Primary and Secondary Verbal Operants

Michael (1991, 1992) suggested that applied behavior analysts classify verbal behavior about a speaker's own verbal behavior as primary (Level 1) and secondary (Level 2) verbal operants. In Level 1, MOs and/or SDs are present and affect the primary verbal operant. The speaker has something to say. In Level 2, the speaker observes the primary controlling variables of her own verbal behavior and her disposition to emit the primary verbal behavior. The speaker discriminates these controlling variables and describes them to the listener. A secondary verbal operant thus supports the listener's behavior as a mediator of reinforcement. For example, an MO or SD evokes the response, "She is in Columbus, Ohio." It is important for listeners as reinforcement mediators to discriminate the primary variables controlling the speaker's behavior. The verbal operant "She is in Columbus, Ohio," does not inform the listener as to why the speaker said it. "I read in the Columbus Dispatch that she is in Columbus, Ohio," informs the listener of the primary controlling variable. The first level is "She is in Columbus, Ohio" (the primary verbal operant), and the second level is "I read in the Columbus Dispatch" (the autoclitic).

Autoclitic Tact Relations

Some autoclitics inform the listener of the type of primary verbal operant the autoclitic accompanies (Peterson, 1978). The autoclitic tact informs the listener of some nonverbal aspect of the primary verbal operant and is therefore controlled by nonverbal stimuli. For example, a child's statement, "I see Mommy" may contain an autoclitic tact. The primary verbal operant (i.e., the tact) consists of (a) the nonverbal SD of the child's mother; (b) the response, "Mommy"; and (c) the associated reinforcement history. The secondary verbal operant (i.e., the autoclitic tact) is the speaker's tact informing the listener that a nonverbal SD evoked the primary verbal operant. In this case, the nonverbal SD was the visual stimulus of the child's mother, and the response "I see" informs the listener of the source of control that evoked the primary tact. If the child heard his mother, but did not see his mother, the autoclitic tact "I hear" would be appropriate.

The listener may challenge the existence and nature of the autoclitic tact, for example, "How do you know it's Mommy?" The challenge is one way that effective autoclitic behavior is shaped and brought under appropriate stimulus control.

Autoclitic tacts also inform the listener of the strength of the primary operant. In the verbal stimuli "I think it is Mommy" and "I know it is Mommy," "I think" informs the listener that the source of control for the primary tact "Mommy" is weak; "I know" informs the listener that it is strong.

Autoclitic Mand Relations

Speakers frequently use autoclitic mands to help the listener present effective reinforcers (Peterson, 1978). A specific MO controls the autoclitic mand, and its role is to mand the listener to react in some specific way to the primary verbal operant. "I know it is Mommy" may contain an autoclitic mand. For example, if "I know" is not a tact of response strength, it may be an MO in the same sense as "Hurry up."

Autoclitic mands occur everywhere, but listeners have difficulty recognizing the MO controlling the autoclitic mand because the sources of control are private. Hidden agendas as autoclitic mands often reveal themselves only to the careful observer. For example, a primary intraverbal such as an answer to a question about the sale of a product may contain autoclitic mands, as in "I'm sure you will be pleased with the sale," in which "I'm sure you will be pleased" is controlled by the same MO that might control the response "Don't ask me for any details about the sale."

Developing Autoclitic Relations

Speakers develop autoclitic relations in several ways. For example, a father is wrapping a gift for his child's mother, and the child nearby says, "Mommy." The father may ask the child to identify the primary variables controlling the response by asking, "Did you see her?" The father can then respond differentially: "I see" indicates that "Mommy" is clearly a tact, and he hides the gift; if the response is instead a mand for Mommy, he keeps wrapping the gift. The source of control for the response "Mommy" could also be the gift, as in, "That is for Mommy." "That is for" (i.e., the autoclitic) informs the father that the gift is a nonverbal stimulus controlling the primary tact, "Mommy," and the father continues wrapping the gift. As Skinner (1957) pointed out, "[A]n autoclitic affects the listener by indicating either a property of the speaker's behavior or the circumstances responsible for that property" (p. 329).

Early language learners seldom emit autoclitic responses. Skinner was clear on this point: "In the absence of any other verbal behavior whatsoever autoclitics cannot occur. . . . It is only when [the elementary] verbal operants have been established in strength that the speaker finds himself subject to the additional contingencies which establish autoclitic behavior" (p. 330). Thus, early language intervention programs should not include autoclitic training.

Applications of Verbal Behavior

Skinner's analysis of verbal behavior provides a conceptual framework of language that can be quite beneficial for applied behavior analysts. Viewing language as learned behavior involving a social interaction between speakers and listeners, with the verbal operants as the basic units, changes how clinicians and researchers approach and ameliorate problems related to language. Skinner's theory of language has been successfully applied to an increasing number of areas of human behavior. For example, the analysis has been used for typical language and child development (e.g., Bijou & Baer, 1965), elementary and high school education (e.g., Johnson & Layng, 1994), college education (e.g., Chase, Johnson, & Sulzer-Azaroff, 1985), literacy (e.g., Moxley, 1990), composition (e.g., J. Vargas, 1978), memory (e.g., Palmer, 1991), second language acquisition (e.g., Shimamune & Jitsumori, 1999), clinical interventions (e.g., Layng & Andronis, 1984), behavior problems (e.g., McGill, 1999), traumatic brain injury (e.g., Sundberg, San Juan, Dawdy, & Arguelles, 1990), artificial intelligence (e.g., Stephens & Hutchison, 1992), ape language acquisition (e.g., Savage-Rumbaugh, 1984), and behavioral pharmacology (e.g., Critchfield, 1993). The most prolific application of Skinner's analysis of verbal behavior has been to language assessment and intervention programs for children with autism or other developmental disabilities. This area of application is presented in more detail in the following sections.

Language Assessment

Most standardized language assessments designed for children with language delays seek to obtain an age-equivalent score by testing a child's receptive and expressive language abilities (e.g., Peabody Picture Vocabulary Test III [Dunn & Dunn, 1997], Comprehensive Receptive and Expressive Vocabulary Test [Hammill & Newcomer, 1997]). Although this information is helpful in many ways, the tests do not distinguish among the mand, tact, and intraverbal repertoires, and important language deficits cannot be identified. For example, these tests assess language skills under the control of discriminative stimuli (e.g., pictures, words, questions); however, a substantial percentage of verbal behavior is under the functional control of MOs. Manding is a dominant type of verbal behavior, yet rarely is this repertoire assessed in standardized testing. It is quite common to find children with autism or other developmental disabilities who are unable to mand, but who have extensive tact and receptive repertoires. If a language assessment fails to identify delayed or defective language skills that are related to MO control, an appropriate intervention program may be difficult to establish. A similar problem is the failure to adequately assess the intraverbal repertoire with most standardized assessments.

If a child with language delays is referred for a language assessment, the behavior analyst should examine the current effectiveness of each verbal operant in addition to obtaining a standardized test from a speech and language pathologist. The behavior analyst would start by obtaining information about the child's mand repertoire. When known motivating operations are at strength, what behavior does the child engage in to obtain the reinforcer? When the reinforcer is provided, does the mand behavior cease? What is the frequency and complexity of the various mand units? Information regarding the quality and strength of the echoic repertoire can reveal potential problems in producing response topographies that are essential for other verbal interactions. A thorough examination of the tact repertoire will show the nature and extent of nonverbal stimulus control over verbal responses, and a systematic examination of the receptive and intraverbal repertoires will show the control by verbal stimuli. Thus, a more complete understanding of a language deficit, and hence a more effective language intervention program, can be obtained by determining the strengths and weaknesses of each of the verbal operants, as well as a number of other related skills (e.g., Partington & Sundberg, 1998; Sundberg, 1983; Sundberg & Partington, 1998).

Language Intervention

Skinner's analysis suggests that a complete verbal repertoire is composed of each of the different elementary operants, and separate speaker and listener repertoires. The individual verbal operants are then seen as the bases for building more advanced language behavior. Therefore, a language intervention program may need to firmly establish each of these repertoires before moving on to more complex verbal relations such as autoclitics or multiply controlled responses. Procedures for teaching mand, echoic, tact, and intraverbal repertoires will be presented briefly in the following sections, which will also include some discussion of the relevant research.

Mand Training

As previously stated, mands are very important to early language learners. They allow a child to control the delivery of reinforcers when those reinforcers are most valuable. As a result, a parent or language trainer's behavior (especially vocal behavior) can be paired with the reinforcer at the right time (i.e., when the relevant MO for an item is strong). Mands also begin to establish a child's role as a speaker, rather than just a listener, thus giving the child some control of the social environment. If mands fail to develop in a typical manner, negative behaviors such as tantrums, aggression, social withdrawal, or self-injury that serve the mand function (and thereby control the social environment) commonly emerge. Therefore, a language intervention program for a nonverbal child must include procedures to teach appropriate manding. The other types of verbal behavior should not be neglected, but the mand allows a child to get what he wants, when he wants it.

The most complicated aspect of mand training is the fact that the response needs to be under the functional control of the relevant MO. Therefore, mand training can only occur when the relevant MO is strong, and ultimately the response should be free from additional sources of control (e.g., nonverbal stimuli). Another complication of mand training is that different response forms need to be established and brought under the control of each MO. Vocal words are of course the most common response form, but sign language, pictures, or written words can also be used.

The basic procedure for establishing mands consists of using prompting, fading, and differential reinforcement to transfer control from stimulus variables to motivative variables (Sundberg & Partington, 1998). For example, if a child demonstrates an MO for watching bubbles by reaching for the bottle of bubbles, then smiling and laughing as he watches the bubbles in the air, the timing is probably right to conduct mand training. If the child can echo the word "bubbles" or an approximation such as "ba," teaching a mand can be easy (see Table 4). The trainer should first present the bottle of bubbles (a nonverbal stimulus) along with an echoic prompt (a verbal stimulus) and differentially reinforce successive approximations to "bubble" with blowing bubbles (specific reinforcement). The next step is to fade the echoic prompt to establish the response "bubble" under the multiple control of the MO and the nonverbal stimulus (the bottle of bubbles). The final step is to fade the nonverbal stimulus to bring the response form under the sole control of the MO.

Table 4   Teaching a Mand by Transferring Stimulus Control to MO Control

Antecedent | Behavior | Consequences
Motivating operation + nonverbal stimulus + echoic prompt | "Bubbles" | Blow bubbles
Motivating operation + nonverbal stimulus | "Bubbles" | Blow bubbles
Motivating operation | "Bubbles" | Blow bubbles

The easiest mands to teach in an early language intervention program are usually mands for items for which the MO is frequently strong for the child and satiation is slow to occur (e.g., food, toys, videos). It is always important to assess the current strength of a supposed MO by using choice procedures, observation of a child's behavior in a free operant situation (i.e., no demands), latency to contacting the reinforcer, immediate consumption, and so forth. The goal of early mand training is to establish several different mands by bringing different response forms (i.e., words) under the functional control of different MOs. It is important to note that MOs vary in strength across time, and the effects may be momentary. In addition, the response requirement placed on a child may weaken the strength of an MO, making mand training more difficult. Many additional strategies exist for teaching early mands to more difficult learners, such as augmentative communication, physical prompts, verbal prompts, and more careful fading and differential reinforcement procedures (see Sundberg & Partington, 1998).

Manding continues to be an important part of a verbal repertoire as other verbal operants are acquired. Soon after mands for edible and tangible reinforcers are acquired, a typical child learns to mand for actions (verbs), attention, removal of aversive stimuli, movement to certain locations (prepositions), certain properties of items (adjectives) and actions (adverbs), verbal information (WH-questions), and so on. These mands are often more difficult to teach to a child with language delays because the relevant MO often must be captured or contrived for training purposes (Sundberg, 1993, 2004). Fortunately, Michael's (1993) classification of the different types of MOs provides a useful guide for capturing or contriving MOs. For example, capturing a transitive conditioned motivating operation (CMO-T) in the natural environment involves capitalizing on a situation in which one stimulus increases the value of a second stimulus. A child who likes fire trucks may see a fire truck parked outside the window. This stimulus condition increases the value of a second stimulus condition, an opened door, and will evoke behavior that has resulted in doors opening in the past. A skilled trainer would be watchful for these events and would be quick to conduct a mand trial for the word "open" or "out." The work of Hart and Risley (1975) and their incidental teaching model exemplify this teaching strategy.

Transitive CMOs can also be contrived to conduct mand training (e.g., Hall & Sundberg, 1987; Sigafoos, Doss, & Reichle, 1989; Sundberg, Loeb, Hale, & Eigenheer, 2002). For example, Hall and Sundberg (1987) used a contrived CMO-T procedure with a deaf teenager with autism by presenting highly desired instant coffee without hot water. The coffee altered the value of hot water and thereby evoked behavior that had been followed by hot water in the past. During baseline this behavior consisted of tantrums. Appropriate mands (i.e., signing "hot water") were easy to teach when this CMO-T was in effect by using the transfer of control procedure described earlier. In fact, a number of mands were taught by using this procedure, and often the procedure led to the emission of untrained mands and a substantial reduction in negative behavior.

Mand training should be a significant part of any intervention program designed for children with autism or other severe language delays. Without an appropriate mand repertoire, a child cannot obtain reinforcement when MOs are strong, or have much control of the social environment. As a result, people who interact with the child may become conditioned aversive stimuli, and/or problem behaviors may be acquired that serve the mand function. These behaviors and social relationships can become hard to change until replacement mands are established. Teaching mands early in a language intervention program may help to prevent the acquisition of negative behaviors as mands. In addition, parents and teachers are paired with successful manding and can become conditioned reinforcers. If people become more reinforcing to a child, social withdrawal, escape and avoidance, and noncompliance may be reduced.

Table 5   Teaching Echoics by Using a Mand Frame and Transferring Control from Multiple Control to Echoic Control

Antecedent | Behavior | Consequences
Motivating operation + nonverbal prompt + echoic stimulus | "Bubbles" | Blow bubbles
Nonverbal prompt + echoic stimulus | "Bubbles" | Praise (GCR)
Echoic stimulus | "Bubbles" | Praise (GCR)

Echoic Training

For an early language learner, the ability to repeat words when asked to do so plays a major role in the development of other verbal operants (as in the bubbles example earlier). If a child can emit a word under echoic stimulus control, then transfer of stimulus control procedures can be used to bring that same response form under the control of not only MOs, but also stimuli such as objects (tacts) and questions (intraverbals). Because many children with autism and other language delays are unable to emit echoic behavior, special training procedures are required to develop the echoic repertoire.

The first goal of echoic training is to teach the child to repeat the words and phrases emitted by parents and teachers when asked to do so. Once echoic control is initially established, the goal becomes to establish a generalized repertoire in which the child can repeat novel words and combinations. But the ultimate goal with the echoic repertoire is to transfer the response form to other verbal operants. This transfer process can begin immediately and is not dependent on the acquisition of a generalized repertoire. Several procedures will be described to achieve the first goal of establishing initial echoic stimulus control.

The most common form of echoic training is direct echoic training, in which a vocal stimulus is presented and successive approximations to the target response are differentially reinforced. This procedure involves a combination of prompting, fading, shaping, extinction, and reinforcement techniques. Speech therapists commonly use prompts such as pointing to the mouth, exaggerated movements, physical lip prompting, and mirrors to watch lip movement. Successive approximations to a target vocalization are reinforced, and others are ignored. The prompts are then faded, and pure echoic responses are reinforced. For many children these procedures are effective in establishing and strengthening echoic control and improving articulation. However, for some children the procedures are ineffective, and additional measures are necessary.

Placing an echoic trial within a mand frame can often be a more effective procedure for establishing echoic stimulus control. The MO is a powerful independent variable in language training and can be temporarily used to establish other verbal operants (e.g., Carroll & Hesse, 1987; Drash, High, & Tudor, 1999; Sundberg, 2004; Sundberg & Partington, 1998). For echoic training, an MO and a nonverbal stimulus can be added to the target echoic antecedent as a way to evoke the behavior (see Table 5). For example, if a child demonstrates a strong MO for bubbles, an echoic trial can be conducted while that MO is strong, and in the presence of the nonverbal stimulus of the bottle of bubbles. These additional sources of control can help to evoke the vocal response along with the echoic prompt "Say bubbles." The specific reinforcement of blowing bubbles is then contingent on any successive approximation to "bubbles." These additional antecedent variables must be faded out, and the reinforcement changed from specific reinforcement to generalized conditioned reinforcement. For some learners, the transfer from MO to echoic control may occur more quickly if a picture of the object is used rather than the actual object (this reduces the MO's evocative effect).

Children with a low frequency of vocal behaviors may have difficulty establishing echoic control. For these children, procedures to simply increase any vocal behavior may facilitate the ultimate establishment of echoic control. One method is to directly reinforce all vocal behaviors. Taking this procedure one step further, if a child randomly emits a particular sound, the behavior analyst can reinforce this behavior and conduct an echoic trial with that sound immediately after the delivery of reinforcement. Some children will repeat what they initially emitted, and this interaction sets up some of the basic variables that may facilitate echoic control.


Table 6   Teaching Tacts by Using a Mand Frame and Transferring Control from Multiple Control to Nonverbal Control

Antecedent | Behavior | Consequences
Motivating operation + nonverbal prompt + echoic stimulus | "Bubbles" | Blow bubbles
Nonverbal prompt + echoic stimulus | "Bubbles" | Praise (GCR)
Nonverbal stimulus | "Bubbles" | Praise (GCR)

Automatic reinforcement procedures can also be used to increase the frequency of vocal behavior. By pairing a neutral stimulus with an established form of reinforcement, the neutral stimulus can become a conditioned reinforcer. For example, if just prior to blowing bubbles the trainer emits the word bubbles, bubbles can become a reinforcer. Research has shown that this pairing procedure can increase the rate of a child's vocal play and result in the emission of targeted sounds and words that had never occurred echoically (Miguel, Carr, & Michael, 2002; Sundberg, Michael, Partington, & Sundberg, 1996; Smith, Michael, & Sundberg, 1996; Yoon & Bennett, 2000). For example, Yoon and Bennett (2000) demonstrated that this pairing procedure was more successful than direct echoic training in producing targeted sounds. Individual children who have difficulty acquiring an echoic repertoire may benefit from this procedure or a combination of all the procedures described in this section.

Tact Training

The tact repertoire is extensive and is often the primary focus of many language intervention programs. A child must learn to tact objects, actions, properties of objects and actions, prepositional relations, abstractions, private events, and so on. The goal of the teaching procedures is to bring a verbal response under nonverbal stimulus control. If a child has a strong echoic repertoire, then tact training can be quite simple. A language trainer can present a nonverbal stimulus along with an echoic prompt, differentially reinforce a correct response, and then fade the echoic prompt. However, for some children tact training is more difficult, and special procedures may be required.

A mand frame can also be used to establish tacting (Carroll & Hesse, 1987). The procedure is similar to that described for teaching an echoic response. Training begins with an MO for a desired object, the nonverbal object, and an echoic prompt (see Table 6). Using the bubbles example, the first and second steps are the same, with the goal being to free the response from motivational control by providing generalized conditioned reinforcement rather than specific reinforcement. As with echoic training, at this point in the procedure the transfer may occur more quickly if a picture of the object is used rather than the actual object. Also, with some children it may be more effective to fade the echoic prompt before the MO is faded. The third step in the procedure involves fading out the echoic prompt and bringing the response under the sole control of the nonverbal stimulus, thus a tact. Additional nonechoic verbal prompts may also be helpful, such as "What is that?," but these too are verbal prompts that are additional sources of control that need to be accounted for in the analysis of tact acquisition (Sundberg & Partington, 1998).

Methods for teaching more complex tacts can also make use of the transfer of stimulus control procedure. For example, teaching tacts of actions requires that the nonverbal stimulus of movement be present and a response such as "jump" be brought under the control of the action of jumping. Teaching tacts involving prepositions, adjectives, pronouns, adverbs, and so on, also involves the establishment of nonverbal stimulus control. However, these advanced tacts are often more complex than they appear, and frequently the type of stimulus control established in formal training may not be the same type of stimulus control that evokes similar tacts for typically developing children (Sundberg & Michael, 2001). For example, some training programs with early learners attempt to bring verbal behavior under the control of private stimuli, such as those involved in emotional states (sad, happy, afraid), pains, itches, a full bladder, hunger pangs, nausea, and so forth. Such verbal behavior is an important part of any person's repertoire, but because the controlling variables that are affecting the learner cannot be directly contacted by the teacher or parent, accurate tact relations are difficult to develop. An instructor cannot present the relevant private stimulus that is inside a person's body, and therefore cannot differentially reinforce correct tact responses in the same way that correct tacts to objects and actions can be reinforced. Teaching a child to correctly say "itch" with respect to a stimulus coming from a portion of the child's arm must be done indirectly, as the teacher reacts to common public accompaniments of such stimuli (observing a skin rash) and to collateral responses by the learner (observing the child's scratching). However, this method is fraught with difficulties (the rash may not itch, the scratching may be imitated), and such repertoires even in typical adults are often quite imprecise.

Intraverbal Training

Many children with autism, developmental disabilities, or other language delays suffer from defective or nonexistent intraverbal repertoires, even though some can emit hundreds of mands, tacts, and receptive responses. For example, a child may (a) say "bed" when hearing "bed" spoken by another person (echoic), (b) say "bed" when he sees a bed (tact), and even (c) ask for bed when he is tired (mand), but (d) may not say "bed" when someone asks, "Where do you sleep?" or says, "You sleep in a . . . ." In cognitive terms, this type of language disorder may be described as a child's failure to process the auditory stimulus, or explained in terms of other hypothesized internal processes. However, verbal stimulus control is not the same as nonverbal stimulus control, and a response acquired as a tact for an early language learner may not automatically occur as an intraverbal without special training (e.g., Braam & Poling, 1982; Luciano, 1986; Partington & Bailey, 1993; Watkins, Pack-Teixeira, & Howard, 1989).

In general, verbal stimulus control over verbal responding is more difficult to establish than nonverbal control. This does not mean that all intraverbals are harder than all tacts. Some intraverbal behavior is simple and easy to acquire. However, formal training on intraverbal behavior for a language-delayed child should not occur until the child has well-established mand, tact, echoic, imitation, receptive, and matching-to-sample repertoires (Sundberg & Partington, 1998). A common mistake in early intraverbal training is to attempt to teach intraverbal relations too early, or intraverbals that are too complex and out of developmental sequence, such as personal information (e.g., "What is your name and phone number?"). Some of the easiest intraverbal relations are fill-in-the-blanks with songs (e.g., "The wheels on the . . .") and other fun activities (e.g., Peek-a-___). The goal of early intraverbal training is to begin to break verbal responding free from mand, echoic, and tact sources of control. That is, no new response topographies are taught; rather, known words are brought under a new type of stimulus control.

MOs can be helpful independent variables in facilitating the transfer of stimulus control, as in the Peek-a-Boo example, and the mand frame procedures described earlier for the echoic and tact. However, ultimately, a child needs to learn to emit intraverbal responses that are free from MO control. For example, if a child likes bubbles and the target intraverbal is for the verbal stimulus "You blow . . ." to evoke the verbal response "bubbles," then intraverbal training should involve the transfer of control from MOs and nonverbal stimuli to verbal stimuli (echoic prompts can also be used). The language trainer can present the verbal stimulus (e.g., "You blow . . .") when the MO is strong, along with the nonverbal stimulus (e.g., a bottle of bubbles). Then, the trainer can begin providing generalized conditioned reinforcement rather than specific reinforcement, use a picture of the item rather than the actual item, and finally, fade the nonverbal prompt (see Table 7).

Table 7   Teaching Intraverbals by Using a Mand Frame and Transferring Control from Multiple Control to a Verbal Stimulus

Antecedent | Behavior | Consequences
Motivating operation + nonverbal prompt + verbal stimulus | "Bubbles" | Blow bubbles
Nonverbal prompt + verbal stimulus | "Bubbles" | Praise (GCR)
Verbal stimulus | "Bubbles" | Praise (GCR)

The intraverbal repertoire becomes increasingly valuable to a child as the verbal stimuli and related responses become more varied and complex. Common associations (e.g., "Mommy and . . ."), fill-in-the-blanks (e.g., "You bounce a . . ."), animal sounds (e.g., "A kitty goes . . ."), and eventually what questions (e.g., "What do you eat?") will help to strengthen intraverbal behavior by expanding the content and variation of the verbal stimuli and the verbal responses. In addition, these procedures can help to develop verbal stimulus and response classes and more fluent intraverbal responding that is free from tact and echoic sources of stimulus control. More advanced intraverbal training can be accomplished in a variety of ways (Sundberg & Partington, 1998). For example, the verbal stimulus can have multiple components involving conditional discriminations in which one verbal stimulus alters the evocative effect of another, as in "What do you eat for breakfast?" versus "What do you eat for dinner?" Expansion prompts can also be used, such as "What else do you eat for lunch?," as well as additional WH-questions, such as "Where do you eat lunch?," and "When do you eat lunch?" As with the other verbal repertoires, typical developmental sequences can be a helpful guide to the progression of increasingly complex intraverbal behavior, as can the task analysis of the verbal operants presented in the Assessment of Basic Learning and Language Skills: The ABLLS (Partington & Sundberg, 1998).

Additional Aspects of Language Training

In addition to these four basic repertoires, there are several other components of a verbal behavior program and curriculum, such as receptive language training, matching-to-sample, mixing and varying trials, multiple response training, sentence construction, conversational skills, peer interaction, reading, and writing (Sundberg & Partington, 1998). Although a description of these programs is beyond the scope of this chapter, many of the procedures for teaching these skills involve the same basic elements of the transfer of stimulus control methods described here.

Summary

Verbal Behavior and Properties of Language

1. Verbal behavior is defined as behavior that is reinforced through the mediation of another person's behavior.

2. The formal properties of verbal behavior involve the topography (i.e., form, structure) of the verbal response.

3. The functional properties of verbal behavior involve the causes (i.e., antecedents and consequences) of the response.

4. Skinner's analysis of verbal behavior was met with strong opposition from the field of linguistics, and with indifference within the field of behavior analysis. However, Skinner predicted in 1978 that Verbal Behavior would prove to be his most important work.

Defining Verbal Behavior

5. Verbal behavior involves a social interaction between speakers and listeners, whereby speakers gain access to reinforcement and control their environment through the behavior of listeners.

6. The verbal operant is the unit of analysis of verbal behavior and is the functional relation between a type of responding and (a) motivating variables, (b) discriminative stimuli, and (c) consequences.

7. A verbal repertoire is a set of verbal operants emitted by a particular person.

Elementary Verbal Operants

8. The mand is a verbal operant in which the form of the response is under the functional control of motivating operations (MOs) and specific reinforcement.

9. The tact is a verbal operant under the functional control of a nonverbal discriminative stimulus, and it produces generalized conditioned reinforcement.

10. The echoic is a verbal operant that consists of a verbal discriminative stimulus that has point-to-point correspondence and formal similarity with a verbal response.

11. Point-to-point correspondence between the stimulus and the response or response product occurs when the beginning, middle, and end of the verbal stimulus matches the beginning, middle, and end of the verbal response.

12. Formal similarity occurs when the controlling antecedent stimulus and the response or response product (a) share the same sense mode (e.g., both stimulus and response are visual, auditory, or tactile) and (b) physically resemble each other.

13. The intraverbal is a verbal operant that consists of a verbal discriminative stimulus that evokes a verbal response that does not have point-to-point correspondence.

14. The textual relation is a verbal operant that consists of a verbal discriminative stimulus that has point-to-point correspondence between the stimulus and the response product, but does not have formal similarity.

15. The transcription relation is a verbal operant that consists of a verbal discriminative stimulus that controls a written, typed, or finger-spelled response. Like the textual relation, there is point-to-point correspondence between the stimulus and the response product, but no formal similarity.


Role of the Listener

16. The listener not only mediates reinforcement, but also functions as a discriminative stimulus for verbal behavior. Often, much of the behavior of a listener is covert verbal behavior.

17. An audience is a discriminative stimulus in the presence of which verbal behavior is characteristically reinforced.

18. Classifying verbal responses as mands, tacts, intraverbals, etc., can be accomplished by an analysis of the relevant controlling variables.

Analyzing Complex Verbal Behavior

19. Automatic reinforcement is a type of conditioned reinforcement in which a response product has reinforcing properties as a result of a specific conditioning history.

20. Automatic punishment is a type of conditioned punishment in which a response product has punishing properties as a result of a specific conditioning history.

21. In generic tact extension, the novel stimulus shares all of the relevant or defining features associated with the original stimulus.

22. In metaphorical tact extension, the novel stimulus shares some, but not all, of the relevant features of the original stimulus.

23. In metonymical tact extension, the novel stimulus shares none of the relevant features of the original stimulus configuration, but some irrelevant but related feature has acquired stimulus control.

24. In solistic tact extension, a stimulus property that is only indirectly related to the tact relation evokes substandard verbal behavior.

25. Private events are stimuli that arise from within someone's body.

26. Public accompaniment occurs when a publicly observable stimulus accompanies a private stimulus.

27. Collateral responses are publicly observable behaviors that reliably occur with private stimuli.

28. Common properties involve a type of generalization in which private stimuli share some of the features of public stimuli.

29. Response reduction is also a type of generalization in which kinesthetic stimuli arising from movement and positions acquire control over the verbal responses. When movements shrink in size (become covert), the kinesthetic stimuli may remain sufficiently similar to those resulting from the overt movements.

Multiple Control

30. Convergent multiple control occurs when a single verbal response is a function of more than one controlling variable.

31. Divergent multiple control occurs when a single antecedent variable affects the strength of many responses.

32. The thematic verbal operants are the mand, tact, and intraverbal, and involve different response topographies controlled by a common variable.

33. The formal verbal operants are the echoic (and imitation as it relates to sign language and copying a text), textual, and transcription, and involve control by a common variable with point-to-point correspondence.

34. Multiple audiences consist of two or more different audiences that may evoke different response forms.

35. Impure tacts occur when an MO shares control with a nonverbal stimulus.

Autoclitic Relation

36. The autoclitic relation involves two related but separate three-term contingencies in which some aspect of a speaker's own verbal behavior functions as an SD or an MO for additional speaker verbal behavior.

37. Primary verbal behavior involves the elementary verbal operants emitted by a speaker.

38. Secondary verbal behavior involves verbal responses controlled by some aspect of the speaker's own ongoing verbal behavior.

39. The autoclitic tact informs the listener of some nonverbal aspect of the primary verbal operant and is therefore controlled by nonverbal stimuli.

40. The autoclitic mand is controlled by an MO and enjoins the listener to react in some specific way to the primary verbal operant.

Applications of Verbal Behavior

41. The verbal operants can be used to assess a wide variety of language deficits.

42. Mand training involves bringing verbal responses under the functional control of MOs.

43. Echoic training involves bringing verbal responses under the functional control of verbal discriminative stimuli that have point-to-point correspondence and formal similarity with the response.

44. Tact training involves bringing verbal responses under the functional control of nonverbal discriminative stimuli.

45. Intraverbal training involves bringing verbal responses under the functional control of verbal discriminative stimuli that lack point-to-point correspondence with the response.


Contingency Contracting, Token Economy, and Group Contingencies

Key Terms

backup reinforcer
behavioral contract
contingency contract
dependent group contingency
group contingency
hero procedure
independent group contingency
interdependent group contingency
level system
self-contract
token
token economy

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List,© Third Edition

Content Area 9: Behavior Change Procedures

9-18 Use contingency contracting (e.g., behavioral contracts).

9-19 Use token economy procedures, including levels systems.

9-20 Use independent, interdependent, and dependent group contingencies.

© 2006 The Behavior Analyst Certification Board, Inc.,® (BACB®) all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.

From Chapter 26 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


This chapter addresses contingency contracting, token economy, and group contingencies as special applications of behavioral procedures. Each application will be defined; its relationship to behavioral principles will be explained; requisite components of the procedure will be addressed; and guidelines for designing, implementing, and evaluating each will be presented. These topics are grouped because they have several features in common. First, a robust literature supports their effectiveness. Second, they can be combined with other approaches in package programs to provide an additive effect. Also, each can be used in individual and group arrangements. The flexibility that these three special applications provide makes them an attractive option for practitioners.

Contingency Contracting

Definition of Contingency Contract

A contingency contract, also called a behavioral contract, is a document that specifies a contingent relationship between the completion of a specified behavior and access to, or delivery of, a specified reward such as free time, a letter grade, or access to a preferred activity.

Typically, contracts specify how two or more people will behave toward each other. Such quid pro quo agreements make one person's behavior (e.g., preparing dinner) dependent on the other person's behavior (e.g., washing and putting away the dishes by a prescribed time the night before). Although verbal agreements may be considered contracts in the legal sense, they are not contingency contracts because the degree of specificity in designing, implementing, and evaluating a contingency contract far exceeds what is likely to occur in a verbal arrangement between parties. In addition, the physical act of signing the contract and its prominent visibility during execution are integral parts of contingency contracts.

Contingency and behavior contracts have been used to modify academic performance (Newstrom, McLaughlin, & Sweeney, 1999; Wilkinson, 2003), weight control (Solanto, Jacobson, Heller, Golden, & Hertz, 1994), adherence to medical regimens (Miller & Stark, 1994), and athletic skills (Simek, O'Brien, & Figlerski, 1994). Indeed, a compelling advantage of contingency contracts is their ability to be implemented alone or in packaged programs that incorporate two or more interventions concurrently (De Martini-Scully, Bray, & Kehle, 2000).

Components of Contingency Contracts

There are three major parts in most contracts: a description of the task, a description of the reward, and the task record. Essentially, the contract specifies the person(s) to perform the task, the scope and sequence of the task, and the circumstances or criterion for task completion. Figure 1 shows a contingency contract implemented by the parents of a 10-year-old boy to help him learn to get up and get ready for school each day.

Task

The task side of the contract consists of four parts. Who is the person who will perform the task and receive the reward—in this case, Mark. What is the task or behavior the person must perform—in this example, getting ready for school. When identifies the time that the task must be completed—every school day. How well is the most important part of the task side, and perhaps of the entire contract. It calls for the specifics of the task. Sometimes it is helpful to list a series of steps or subtasks so that the person can use the contract as a checklist of what must be done. Any exceptions should be written in this part.

Reward

The reward side of a contract must be as complete and accurate as the task side (Ruth, 1996). Some people are very good at specifying the task side of a contract; they know what they want the other person to do. When it comes to the reward side, however, specificity is lost and problems arise. Reward statements such as "Can watch some television" or "Will play catch when I get a chance" are not explicit, specific, or fair to the person completing the task.

On the reward side, Who is the person who will judge task completion and control delivery of the reward. With Mark's getting-ready-for-school contract, those persons are his parents. What is the reward. When specifies the time that the reward can be received by the person earning it. With any contract it is crucial that the reward come after successful task completion. However, many rewards cannot be delivered immediately following task completion. In addition, some rewards have built-in, limited availability and can be delivered only at certain times (e.g., seeing the home-town baseball team play). Mark's contract specifies that his reward, if earned, will be received on Friday nights. How much is the amount of reward that can be earned by completing the task. Any bonus contingencies should be included; for example, "By adhering to her contract Monday through Friday, Eileen receives an extra reward on Saturday and Sunday."

Figure 1 Example of a contingency contract.
From Sign Here: A Contracting Book for Children and Their Parents (2nd ed., p. 31) by J. C. Dardig and W. L. Heward, 1981, Bridgewater, NJ: Fournies and Associates. Copyright 1981 by Fournies and Associates. Reprinted by permission.

Task Record

Including on the contract a place to record task completion serves two purposes. First, recording task completion and reward delivery on the contract sets the occasion for all parties to review the contract regularly. Second, if a certain number of task completions are required to earn the reward (e.g., if a child must dress herself each morning before school for 5 days in a row), a check mark, smiley face, or star can be placed on the task record each time the task is completed successfully. Marking the contract in this manner helps the person remain focused until the assignment is completed and the reward is earned. Mark's parents used the top row of boxes in the task record to record the days of the school week. In the middle row of boxes they placed gummed stars each day Mark met the conditions of his contract. In the bottom row, Mark's parents wrote comments about the progress of his contract.
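The three parts of a contract (the four task-side questions, the four reward-side questions, and the task record) can be thought of as a simple record-keeping structure. The sketch below is illustrative only and is not part of the original procedure; the class, field, and method names are invented, and the criterion and reward details for Mark's contract are paraphrased assumptions, not the wording of Figure 1:

```python
from dataclasses import dataclass, field

@dataclass
class ContingencyContract:
    """Illustrative model of a contract's three parts:
    task description, reward description, and task record."""
    # Task side: Who, What, When, How well
    performer: str
    task: str
    task_schedule: str
    task_criteria: str
    # Reward side: Who, What, When, How much
    reward_controller: str
    reward: str
    reward_schedule: str
    reward_amount: str
    # Task record: one True/False entry per scheduled opportunity
    task_record: list = field(default_factory=list)

    def record_day(self, completed: bool) -> None:
        """Equivalent to placing (or withholding) a star on the task record."""
        self.task_record.append(completed)

    def completions(self) -> int:
        """Number of successful task completions recorded so far."""
        return sum(self.task_record)

# Mark's getting-ready-for-school contract (details assumed for illustration)
contract = ContingencyContract(
    performer="Mark",
    task="Get up and get ready for school",
    task_schedule="every school day",
    task_criteria="dressed, fed, and ready to leave on time",
    reward_controller="Mark's parents",
    reward="preferred Friday-night activity",
    reward_schedule="Friday night",
    reward_amount="one activity per week",
)
for day_completed in [True, True, False, True, True]:
    contract.record_day(day_completed)
print(contract.completions())  # 4 of 5 school days completed
```

Keeping the record on the contract itself, as the text notes, is what turns evaluation into a natural by-product of contracting.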

Implementing Contingency Contracts

How Do Contracts Work?

At first glance the principle of behavior behind contingency contracts seems deceptively simple: A behavior is followed by a contingent reward—surely a case of positive reinforcement. Yet, in most contracts the reward, although contingent, is much too delayed to reinforce the specified behavior directly; and many successful contracts specify a reward that would not, in fact, function as a reinforcer for the task even if it was presented immediately after the task completion. Further, behavioral contracting is not a single procedure with a single associated behavior and a single reinforcer. Contracting is more accurately conceptualized as an intervention package that combines several behavior principles and procedures.

So how do contracts work? Several principles, procedures, and factors are likely to apply. Certainly reinforcement is involved, but not in as simple or direct a fashion as it might seem at first. Rule-governed behavior is probably involved (Malott, 1989; Malott & Garcia, 1991; Skinner, 1969). A contract describes a rule: A


specified behavior will be followed by a specified (and reasonably immediate) consequence. The contract serves as a response prompt to perform the target behavior and enables the effective use of a consequence (e.g., going to the movies Saturday night), too delayed in itself to reinforce certain behaviors (e.g., practicing the trumpet on Tuesday). Delayed consequences can help exert control over behaviors performed hours and even days before if they are associated with and linked by verbal behavior to the rule (e.g., "I've just finished my trumpet practice—that's another check mark toward the movies on Saturday"), or to interim token reinforcers (e.g., the check mark on the contract after practicing). The physical visibility of the contract may also function as a response prompt for escaping "guilt" (Malott & Garcia, 1991). At this stage of knowledge development within the field, it cannot be said that contracting is simply positive reinforcement loosely based on the Premack principle. It is more likely a complex package intervention of related positive and negative reinforcement contingencies and rule-governed behavior that operate alone and together.

Applications of Contingency Contracting

Contracting in the Classroom

The use of contracting in classrooms is well established. For example, teachers have employed contracting to address specific discipline, performance, and academic challenges (Kehle, Bray, Theodore, Jenson, & Clark, 2000; Ruth, 1996). Newstrom and colleagues (1999), for instance, used a contingency contract with a middle school student with behavior disorders to improve the written mechanics associated with spelling and written language. After collecting baseline data on the percentage of capitalization and punctuation marks used correctly across spelling and sentence writing, a contingency contract was negotiated and signed with the student that specified that improved performance would yield free time on the classroom computer. The student was reminded of the terms of the contract before each language arts class that included spelling worksheets and journal writing (i.e., sentences).

Figure 2 shows the results of the contingency contracting intervention. When baseline was in effect for spelling and written sentences respectively, mean scores for both variables were in the 20% correct range. When contracting was initiated, the student's performance for spelling and written sentences increased immediately to an average of approximately 84% correct. Because the percentage of correct performance increased immediately when contingency contracting was implemented for spelling (Sessions 4 through 12), but did not increase for written sentences until Session 11, a functional relation was demonstrated between contracting and improved performance. The authors also reported positive anecdotal evidence related to spelling and written language from other teachers with whom this student interacted.

Wilkinson (2003) used a contingency contract to reduce the disruptive behavior of a first-grade student. Disruptive behaviors included being off-task, refusals to comply with work assignments and instructions, fighting with peers, and temper tantrums. A behavioral consultation effort was launched with the classroom teacher that included problem identification, analysis, intervention, and evaluation. Contingency contracting consisted of the student earning preferred rewards and social praise from the teacher for three behaviors: increased time on-task, appropriate interactions with other children, and compliance with teacher requests. Observations of her behavior over 13 sessions of baseline and contingency contracting showed a decrease in

Figure 2 Percentage of capitalization and punctuation marks used correctly on spelling worksheets and journal writing during baseline and contingency contracting.
From "The Effects of Contingency Contracting to Improve the Mechanics of Written Language with a Middle School Student with Behavior Disorders" by J. Newstrom, T. F. McLaughlin, & W. J. Sweeney, 1999, Child & Family Behavior Therapy, 21(1), p. 44. Copyright 1999 by The Haworth Press, Inc. Reprinted by permission.


the percentage of intervals with disruptive behavior when the contingency contract was in effect. Wilkinson reported that the student's disruptive behavior decreased substantially and remained low during a 4-week follow-up period.

Ruth (1996) conducted a 5-year longitudinal study with emotionally disturbed students that blended contingency contracting with goal setting. After students negotiated their contracts with their teachers, a goal-setting component was added that included statements about their daily and weekly goals and the criterion levels for success. Results for the 37 out of 43 students who finished the program after 5 years showed that 75% of daily goals, 72% of weekly goals, and 86% of total goals were reached. Ruth summarized the beneficial effects of combining strategies: "When [goal-setting] methods are incorporated into a contract, the motivational aspects of behavior contracting and goal setting may combine to produce maximum effort and success" (p. 156).

Contracting in the Home

Miller and Kelley (1994) combined contingency contracting and goal setting to improve the homework performance of four preadolescent students with histories of poor homework completion and who were at risk for other academic problems (e.g., procrastination, being off-task, errors with submitted work). During baseline, parents recorded their children's homework time on-task, type and accuracy of problem completion, and number of problems completed correctly. Subsequently, parents and children entered into a goal-setting and contingency contracting phase that was preceded by parent training on how to set and negotiate goals and write contracts. Each night, parents and children established their respective goals and negotiated a compromise goal based on that interaction. Each week, renegotiations occurred for tasks, rewards, and sanctions should the contract not be met. A recording sheet was used to measure progress.

Figure 3 shows the results of the study. When goal setting was combined with contingency contracting, accuracy performance increased for all students. Miller and Kelley's findings reaffirm the notion that contingency contracting can be combined successfully with other strategies to produce functional outcomes.

Clinical Applications of Contracting

Flood and Wilder (2002) combined contingency contracting with functional communication training to reduce the off-task behavior of an elementary-aged student diagnosed with attention-deficit/hyperactivity disorder (ADHD) who had been referred to a clinic-based program because his off-task behavior had reached alarming levels. Antecedent assessment, functional communication training, and contingency contracting were conducted in a therapy room located in the clinical facility. Specifically, antecedent assessments were conducted to determine the level of off-task behavior when the difficulty of academic tasks varied from easy to difficult, and therapist attention varied from low to high. A preference assessment was also conducted. Using discrete trial training, the student was taught to raise his hand for assistance with tasks (e.g., "Can you help me with this problem?"). The therapist sat nearby and responded to appropriate requests for assistance and ignored other vocalizations. Once requesting assistance was mastered, the contingency contract was established whereby the student could earn preferred items, identified through the assessment, contingent on accurate task completion. The results showed that during baseline, off-task performance was high for math division and word problems. When the intervention was introduced, an immediate reduction in off-task behavior was noted for division and word problems. Also, the student's accuracy in completing the division and word problems improved as well. Whereas during baseline conditions he solved correctly 5% and 33% of the division and word problems, respectively, during intervention he solved 24% and 92% of the problems correctly.

Using Contracting to Teach Self-Management to Children

Ideally, contingency contracting involves the active participation of the child throughout the development, implementation, and evaluation of the contract. For many children, contracting is a first experience in identifying specific ways they would like to act and then arranging certain aspects of their environment to set the occasion for and reward those acts. If more of the decision making for all parts of the contracting process is turned over to children gradually and systematically, they can become skilled at self-contracting. A self-contract is a contingency contract that a person makes with herself, incorporating a self-selected task and reward as well as personal monitoring of task completion and self-delivery of the reward. Self-contracting skills can be achieved by a multistep process of having an adult prescribe virtually all of the elements of the task and reward and gradually shifting the design of the elements to the child.


Figure 3 Percentage of homework problems completed accurately by Richard, Jenny, Adam, and Ann during baseline (BL) and a treatment condition consisting of goal setting and contingency contracting (GS + CC). Sessions correspond to sequential school days (i.e., Monday through Thursday) on which subjects were assigned homework. Data were not collected on days on which homework was not assigned.
From "The Use of Goal Setting and Contingency Contracting for Improving Children's Homework Performance" by D. L. Miller & M. L. Kelley, 1994, Journal of Applied Behavior Analysis, 27, p. 80. Copyright 1994 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Developing Contingency Contracts

Although teachers, therapists, or parents can unilaterally determine a contract for a child or client, contracting is usually more effective when all of the parties involved play an active role in developing the contract. Several methods and guidelines have been proposed for developing contingency contracts (Dardig & Heward, 1981; Downing, 1990; Homme, Csanyi, Gonzales, & Rechs, 1970). Contract development involves the specification of tasks and rewards in a fashion agreeable and beneficial to each party. Dardig and Heward (1981) described a five-step procedure for task and reward identification that can be used by teachers and families.

Step 1: Hold a Meeting. To get the entire group (family or class) involved in the contracting process, a


meeting should be held. At this meeting members can discuss how contracts work, how they can help the group cooperate and get along better, and how contracts can help individuals meet personal goals. Parents or teachers should emphasize that they will participate in all of the steps leading up to and including implementation of contracts. It is important that children view contracting as a behavior-exchange process shared by all members of the group, not as something adults impose on them. The field-tested list-making procedures described in the following steps provide a simple and logical framework for the selection of tasks and rewards for family and classroom contracts. Most groups can complete the procedure within 1 to 2 hours.

Step 2: Fill Out List A. Each member completes three lists prior to the actual writing of the contract. List A (see Figure 4) is designed to help each member identify not only those tasks he can perform within the context of a contract, but also those tasks he already does to help the group. In this way positive attention can be focused on appropriate behaviors that individual members are currently completing satisfactorily.

Each member should be given a copy of List A. Everyone should be careful to describe all tasks as specifically as possible. Then the completed lists can be put aside, and the group can proceed to the next step. If a member is unable to write, that person's list can be completed orally.

Step 3: Fill Out List B. List B (see Figure 5) is designed to help group members identify possible contract tasks for other group members and helpful behaviors currently being completed by those persons. List B can also identify areas where disagreement exists between group members as to whether certain tasks are actually being completed properly and regularly.

Each member should be given a copy of List B and asked to write his or her name in all three blanks at the top. These lists can then be passed around the table so that everyone has a chance to write at least one behavior on each side of everyone else's list. Everyone writes on every List B except his or her own, and each person should be required to write at least one positive behavior on everyone else's List B. After completion, these lists should be set aside before moving to the next step.

Step 4: Fill Out List C. List C (see Figure 6) is simply a sheet with numbered lines on which each group member identifies potential rewards he would like to earn by completing contracted tasks. Participants should list not only everyday favorite things and activities, but also special items and activities they may have wanted for a long time. It is all right if two or more

Figure 4 A form for self-identification of possible tasks for contingency contracts.

List A    Name: Jean

Things I Do to Help My Family:
1. Feed Queenie and Chippy
2. Clean up my bedroom
3. Practice my piano
4. Wash dishes
5. Help Dad with the laundry

Other Ways I Could Help My Family and Myself:
1. Be on time for supper
2. Turn off the lights when I leave a room
3. Dust the living room
4. Clean up the back yard
5. Hang up my coat when I get home from school

From Sign Here: A Contracting Book for Children and Their Parents (2nd ed., p. 111) by J. C. Dardig and W. L. Heward, 1981, Bridgewater, NJ: Fournies and Associates. Copyright 1981 by Fournies and Associates. Reprinted by permission.


Figure 5 A form for identifying potential contracting tasks for others.

List B    Name: Bobby

Things Bobby Does to Help the Family:
1. Vacuums when asked
2. Makes his bed
3. Reads stories to little sister
4. Empties trash
5. Rakes leaves

Other Ways Bobby Could Help the Family:
1. Put his dirty clothes in hamper
2. Do homework at night without being asked
3. Make his own sandwiches for his school lunch
4. Clean and sponge off table after supper

From Sign Here: A Contracting Book for Children and Their Parents (2nd ed., p. 113) by J. C. Dardig and W. L. Heward, 1981, Bridgewater, NJ: Fournies and Associates. Copyright 1981 by Fournies and Associates. Reprinted by permission.

Figure 6 A form for self-identification of possible rewards for contingency contracts.

List C    Name: Sue Ann

My Favorite Things, Activities, and Special Treats:
1. Listening to records
2. Movies
3. Playing pinball
4. Miniature golf
5. Swimming
6. Ice skating
7. Ice cream sundaes
8. Aquarium and fish
9. Picnics
10. Coin collection
11. Riding a horse
12. Fishing with Dad

From Sign Here: A Contracting Book for Children and Their Parents (2nd ed., p. 115) by J. C. Dardig and W. L. Heward, 1981, Bridgewater, NJ: Fournies and Associates. Copyright 1981 by Fournies and Associates. Reprinted by permission.


Figure 7 A contingency contract for a nonreader.

people indicate the same reward. After List C is completed, each person should collect his two other lists and read them carefully, talking over any misunderstood items.

Step 5: Write Contracts. The final step begins with choosing a task for each person's first contract. Discussion should move around the group with members trying to help each other decide which task is the most important to start doing first. Everyone should write who is going to perform the task, exactly what the task is, how well and when it has to be done, and any possible exceptions. Everyone should also look at List C and choose a reward that is neither excessive nor insignificant but is fair for the selected task. Each member should write who will control the reward, what the reward is, when it is to be given, and how much is to be given. Everyone in the group should write one contract during the first meeting.

Guidelines and Considerations in Implementing Contracts

In determining whether contingency contracting is an appropriate intervention for a given problem, the practitioner should consider the nature of the desired behavior change, the verbal and conceptual skills of the participant, the individual's relationship with the person(s) with whom the contract will be made, and the available resources. The target behavior to be changed by a contingency contract must be in the person's repertoire already and must typically be under proper stimulus control in the environment in which the response is desired. If the behavior is not in the individual's repertoire, other behavior-building techniques should be attempted (e.g., shaping, chaining). Contracting is most effective with behaviors that produce permanent products (e.g., completed homework assignment, cleaned bedroom) or that occur in the presence of the person who is to deliver the reward (e.g., the teacher or parent).

Reading ability by the participant is not a prerequisite for successful contracting; however, the individual must be able to come under the control of the visual or oral statements (rules) of the contract. Contracting with nonreaders involves three types of clients: (a) preschoolers with good verbal skills, (b) school-aged children with limited reading skills, and (c) adults with adequate language and conceptual skills but who lack reading and writing skills. Contracts using icons, symbols, pictures, photographs, audiotapes, or other nonword characterizations can be developed to suit the individual skills of children and adults in all three nonreader groups (see Figure 7).

Persons who refuse to enter into a contingency contract are another consideration. Whereas many children are eager, or at least willing, to try a contract, some want nothing to do with the whole idea. Using contingency contracting in a collaborative approach (Lassman, Jolivette, & Wehby, 1999) may reduce the likelihood of noncompliance, and following a step-by-step method may help to ensure that consensus is reached at each decision point in the contract (Downing, 1990). However, the reality is that some nonsigners may not agree to participate in a contingency contract, even with the best of positive approaches built into the system. In such cases, another behavior change strategy would likely be a better alternative for dealing with the target behavior. Numerous lists of rules and guidelines for effective


Table 1 Guidelines and Rules for Contingency Contracts

Write a fair contract.
    There must be a fair relationship between the difficulty of the task and the amount of the reward. The goal is to achieve a win–win situation for both parties, not for one party to gain an advantage over the other.

Write a clear contract.
    In many instances a contract's greatest advantage is that it specifies each person's expectations. When a teacher's or parent's expectations are explicit, performance is more likely to improve. Contingency contracts must say what they mean and mean what they say.

Write an honest contract.
    An honest contract exists if the reward is delivered at the time and in the amount specified when the task is completed as agreed. In an honest contract the reward is not delivered if the task has not been completed as specified.

Build in several layers of rewards.
    Contracts can include bonus rewards for beating the best daily, weekly, or monthly performance. Adding these bonuses increases the motivational effect.

Add a response cost contingency.
    Occasionally, it may be necessary to incorporate a "fine" (the removal of rewards) if the agreed-upon task is not completed.

Post the contract in a visible place.
    Public posting allows all parties to see progress toward achieving the goals of the contract.

Renegotiate and change a contract when either party is consistently unhappy with it.
    Contracting is designed to be a positive experience for all parties, not a tedious endurance contest to determine survivors. If the contract is not working, reconsider the task, the reward components, or both.

Terminate a contingency contract.
    A contingency contract is a means to an end, not the end product. Once independent and proficient performance is achieved, the contract can be terminated. Further, a contract can and should be terminated when one party or both parties consistently fail to live up to the terms of the contract.

contingency contracting have been published (e.g., Dardig & Heward, 1976; Downing, 1990; Homme et al., 1970). Table 1 provides a list of the frequently cited guidelines and rules.

Evaluating Contracts

The evaluation of a contingency contract should focus on the objective measurement of the target behavior. The simplest way to evaluate a contract is to record the occurrence of task completion. Including a task record on the contract helps make evaluation a natural by-product of the contracting process. By comparing the task record with a precontract baseline of task completion, an objective determination can be made as to whether improvement has occurred. A good outcome is one that results in the specified task being completed more often than it was before the contract.
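The comparison just described, task record versus precontract baseline, amounts to comparing two completion rates. A minimal sketch, with invented helper names and made-up data (the original text specifies no particular computation beyond "completed more often than before"):

```python
def completion_rate(record):
    """Proportion of scheduled opportunities on which the task was completed.
    Entries are 1/0 (or True/False), one per scheduled day."""
    return sum(record) / len(record)

def improved(baseline, contract_record):
    """True if the task was completed more often under the contract
    than during the precontract baseline."""
    return completion_rate(contract_record) > completion_rate(baseline)

# Hypothetical data: task completed 2 of 10 baseline days,
# 8 of 10 days once the contract was in effect.
baseline = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]
contract_record = [1, 1, 0, 1, 1, 1, 1, 1, 0, 1]
print(completion_rate(baseline))            # 0.2
print(improved(baseline, contract_record))  # True
```

As the following paragraphs note, a higher completion rate alone does not guarantee that the contract addressed the right behavior; the computation only answers the narrower question of whether the specified task occurred more often.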

Sometimes the outcome data indicate that the task is being completed more often and more consistently than it was before the contract, but the parties involved are still not happy. In such cases, either the original problem or goal that prompted the development of the contract is not being attained, or one or more of the participants do not like the way in which the contract is being carried out. The first possibility results from selecting the wrong behavior for the task part of the contract.

For example, let us suppose that John, a ninth-grader, wants to improve on the Ds and Fs he has been getting in algebra and writes a contract with his parents specifying as the task “studying his math” for 1 hour each school night. After several weeks John has failed to study for the required 1 hour on only two nights, but his in-school algebra performance remains unchanged. Has John’s contract worked? The correct answer is both yes and no. John’s contract was successful in that he was consistently completing the specified task—1 hour of study each day. However, in terms of his original objective—better algebra grades—the contract was a failure. John’s contract helped him change the behavior he specified, but he specified the wrong task. Studying for 1 hour, for John at least, was not directly related to his goal. By changing his contract to require that he correctly solve 10 algebra equations each night (the behavior required to get good grades on algebra tests), his goal of obtaining better grades may become a reality.

It is also important to consider the participant’s reactions to the contract. A contract that produces desired change in the specified target behavior but causes other maladaptive or emotional responses may be an unacceptable solution. Involving the client in the negotiation and development of the contract and jointly conducting regular progress checks helps to avoid this situation.


Contingency Contracting, Token Economy, and Group Contingencies

Token Economy

The token economy is a highly developed and researched behavior change system. It has been applied successfully in virtually every instructional and therapeutic setting possible. The usefulness of a token economy in changing behaviors that have been resistant to instruction or therapy is well established in the literature (Glynn, 1990; Musser, Bray, Kehle, & Jenson, 2001). In this section we describe and define the token economy and outline effective procedures for using it in applied settings.

Definition of a Token Economy

A token economy is a behavior change system consisting of three major components: (a) a specified list of target behaviors; (b) tokens or points that participants receive for emitting the target behaviors; and (c) a menu of backup reinforcer items—preferred items, activities, or privileges—that participants obtain by exchanging tokens they have earned. Tokens function as generalized conditioned reinforcers for the target behaviors. First, behaviors to be reinforced are identified and defined. Second, a medium of exchange is selected; that medium of exchange is a symbol, object, or item, called a token. Third, backup reinforcers are provided that can be purchased with the token. Store-based or manufacturer coupons are analogous to a token economy. When a customer purchases an item from a participating store, the cashier provides the purchaser with the coupon—the medium of exchange—that serves as the token. The coupon is traded later for another item at a reduced price, or it is redeemed immediately for the backup reinforcer. Money is another example of a token that can be exchanged at a later time for backup objects and activities (e.g., food, clothing, transportation, entertainment).
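The three components can be sketched as a minimal data structure. This is an illustrative model only; all names and values are assumptions, not part of the chapter:

```python
from dataclasses import dataclass, field

@dataclass
class TokenEconomy:
    """Minimal sketch of the three components: target behaviors,
    tokens earned for emitting them, and a backup reinforcer menu."""
    target_behaviors: dict          # behavior -> tokens per occurrence
    backup_menu: dict               # backup reinforcer -> token price
    balances: dict = field(default_factory=dict)  # participant -> tokens

    def award(self, participant: str, behavior: str) -> None:
        """Deliver tokens contingent on an occurrence of a target behavior."""
        earned = self.target_behaviors[behavior]
        self.balances[participant] = self.balances.get(participant, 0) + earned

    def exchange(self, participant: str, item: str) -> bool:
        """Trade earned tokens for a backup reinforcer; no buying on credit."""
        price = self.backup_menu[item]
        if self.balances.get(participant, 0) < price:
            return False            # not enough tokens; no exchange occurs
        self.balances[participant] -= price
        return True
```

For example, a participant who earns three tokens for on-task intervals could then exchange them for a three-token item on the menu.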

Carton and Schweitzer (1996), for example, implemented a token economy to increase the compliance behavior of a 10-year-old boy who was hospitalized for severe renal disease and who required regular hemodialysis. The patient developed a noncompliant repertoire that affected his interactions with nurses and caretakers. During baseline, the number of 30-minute intervals of noncompliance was measured by dividing a 4-hour time block into eight 30-minute segments. When the token economy was introduced, the boy was told that he could earn one token for each 30-minute period that passed without a noncompliant episode. Tokens were exchanged weekly for baseball cards, comics, and toys.
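The interval contingency used by Carton and Schweitzer can be sketched as follows. The function name and the representation of episodes as minute offsets are illustrative assumptions:

```python
def tokens_earned(noncompliance_times_min, session_min=240, interval_min=30):
    """Score a session: one token for each interval (eight 30-minute
    segments of a 4-hour block) that passes with no noncompliant episode."""
    n_intervals = session_min // interval_min
    earned = 0
    for i in range(n_intervals):
        start, end = i * interval_min, (i + 1) * interval_min
        # an interval earns a token only if no episode falls inside it
        if not any(start <= t < end for t in noncompliance_times_min):
            earned += 1
    return earned
```

A session with no episodes earns all eight tokens; each interval containing an episode forfeits one.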

Carton and Schweitzer reported a functional relation between the onset of the token economy and the reduction in noncompliant behavior. When tokens were in effect, noncompliant behavior was virtually eliminated. Follow-up data collected 3 months and 6 months after termination of token reinforcement yielded continued evidence of low noncompliance to nurse and caretaker requests.

Higgins, Williams, and McLaughlin (2001) used a token economy to decrease the disruptive behaviors of an elementary-age student with learning disabilities. The student exhibited high levels of out-of-seat behavior, talking out, and poor sitting posture. After collecting baseline data on the number of out-of-seat, talking-out, and poor-posture responses, a token economy was implemented. The student earned a check mark that was exchangeable for free time after each minute if an alternative behavior was being emitted instead of the behaviors targeted for reduction. Maintenance checks were taken on two subsequent occasions to determine duration effects. Figure 8 displays the results of the study across the three dependent variables. A functional relation was established between the onset of the token economy and the reduction of the behaviors. Further, maintenance checks indicated that the behavior remained at low levels beyond the termination of the token economy.

Level Systems

A level system is a type of token economy in which participants move up (and sometimes down) a hierarchy of levels contingent on meeting specific performance criteria with respect to the target behaviors. As participants move “up” from one level to the next, they have access to more privileges and are expected to demonstrate more independence. The schedule of token reinforcement is gradually thinned so that participants at the highest levels are functioning on schedules of reinforcement that are similar to those in natural settings.

According to Smith and Farrell (1993), level systems are an outgrowth of two major educational advances that began in the late 1960s and 1970s: (a) Hewett’s Engineered Classroom (1968), and (b) Phillips, Phillips, Fixsen, and Wolf’s Achievement Place (1971). In both

A token is an example of a generalized conditioned reinforcer. It can be exchanged for a wide variety of backup reinforcers. Generalized conditioned reinforcers are independent of specific states of motivation because they are associated with a wide variety of backup reinforcers. However, generalized conditioned reinforcement is a relative concept: Effectiveness depends to a large extent on the extensiveness of the backup reinforcers. Tokens exchangeable for a wide variety of backup reinforcers have considerable utility in schools, clinics, and hospitals where it is difficult for personnel to control the deprivation states of their clients.


Figure 8 The number of talk-outs, out-of-seat behavior, and poor posture during baseline, token economy, and maintenance conditions. From “The Effects of a Token Economy Employing Instructional Consequences for a Third-Grade Student with Learning Disabilities: A Data-Based Case Study” by J. W. Higgins, R. L. Williams, and T. F. McLaughlin, 2001, Education and Treatment of Children, 24(1), p. 103. Copyright 2001 by The H. W. Wilson Company. Reprinted by permission.

cases, systematic academic and social programming combined token reinforcement, tutoring systems, student self-regulation, and managerial arrangements. Smith and Farrell stated that level systems are designed to

foster a student’s improvement through self-management, to develop personal responsibility for social, emotional, and academic performance . . . and to provide student transition to a less restrictive mainstream setting. . . . Students advance through the various levels as they show evidence of their achievement. (p. 252)

In a level system, students must acquire and achieve increasingly more refined repertoires while tokens, social praise, or other reinforcers are simultaneously decreased. Level systems have built-in mechanisms for participants to advance through a series of privileges, and they are based on at least three assumptions: (a) combined techniques—so-called “package programs”—are more effective than individual contingencies introduced alone, (b) student behaviors and expectations must be stated explicitly, and (c) differential reinforcement is necessary to reinforce closer and closer approximations to the next level (Smith & Farrell, 1993).

Lyon and Lagarde (1997) proposed a three-level group of reinforcers that placed less desirable reinforcers at Level 1. At this level, students had to earn 148 points, or 80% of the 185 maximum points that could be earned during the week, to purchase certain items. At Level 3, the highly desirable reinforcers could be purchased only if the students had accumulated at least 167 points, or 90% of the total points possible. As the levels progress, the expectations for performance increase.
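The point criteria reported by Lyon and Lagarde can be reproduced with integer arithmetic. The Level 2 percentage below is an illustrative assumption, since the text reports criteria only for Levels 1 and 3:

```python
MAX_WEEKLY_POINTS = 185  # maximum points available per week in the study

# percent of available points needed to purchase at each level;
# the Level 2 value (85) is assumed for illustration only
LEVEL_CRITERIA_PCT = {1: 80, 2: 85, 3: 90}

def points_required(level: int, max_points: int = MAX_WEEKLY_POINTS) -> int:
    """Ceiling of pct% of max_points, computed without floating point."""
    pct = LEVEL_CRITERIA_PCT[level]
    return (pct * max_points + 99) // 100
```

This reproduces the reported thresholds: 148 points at Level 1 and 167 at Level 3.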

Cavalier, Ferretti, and Hodges (1997) incorporated a self-management approach with an existing level system to improve the academic and social behavior of two adolescent students with learning disabilities for whom


increased participation in a general education classroom was an individualized education program (IEP) goal. Basically, other students in the classroom were making adequate progress through the six-level point system that the teacher had devised, but the inappropriate verbalizations of the two target students during class stalled their progress at Level 1. After collecting baseline data on the number of inappropriate verbalizations, the researchers trained the students to self-record occurrences of these behaviors during two 50-minute intervals in the day. Inappropriate verbalizations were defined explicitly, and during mock trials the students practiced self-recording and received feedback based on teacher observations during the same interval. An emphasis was placed on the accuracy of the students’ recording, and desirable reinforcers were awarded for accurate recording. During the intervention (level system plus self-recording), students were told that they would be observed during the two 50-minute intervals for accuracy. If they met the criterion for each level (e.g., a decrease of five inappropriate verbalizations from the previous session), a reinforcer was delivered. As the students progressed through the levels, the reinforcer changed to more highly desirable items. Results showed a high number of verbalizations during baseline; however, when the packaged intervention was initiated for Student 1, inappropriate verbalizations decreased. This outcome was replicated for Student 2, confirming a functional relation between the intervention and the decrease in inappropriate verbalizations.

Designing a Token Economy

The basic steps in designing and preparing to implement a token economy are as follows:

1. Select tokens that will serve as a medium of exchange (e.g., points, stickers, plastic chips).

2. Identify target behaviors and rules.

3. Select a menu of backup reinforcers.

4. Establish a ratio of exchange.

5. Write procedures to specify when and how tokens will be dispensed and exchanged and what will happen if the requirements to earn a token are not met. Will the system include a response cost procedure?

6. Field-test the system before full-scale implementation.

Selecting Tokens

A token is a tangible symbol that can be given immediately after a behavior and exchanged later for known reinforcers. Frequently used tokens include washers, checkers, coupons, poker chips, points or tally marks, teacher initials, holes punched in a card, and strips of plastic. Criteria to consider in selecting the token itself are important. First, the token should be safe; it should not be harmful to the learners. If a very young child or a person with severe learning or behavioral problems is to receive the token, it should not be an item that can be swallowed or used to cause injury. Second, the analyst should control token presentation; learners should not be able to bootleg the delivery of the tokens. If tally marks are used, they should be on a special card or made with a special marking pen that is available only to the analyst. Likewise, if holes are punched in a card, the paper punch should be available only to the analyst to avoid counterfeiting.

Tokens should be durable because they may have to be used for an extended period of time, and they should be easy to carry, handle, bank, store, or accumulate. Tokens should also be readily accessible to the practitioner at the moment they are to be dispensed. It is important that they be provided immediately after the target behavior. Tokens should be inexpensive; there is no need to spend a large sum of money to purchase tokens. Rubber stamps, stars, check marks, and buttons are all inexpensive items that can be used as tokens. Finally, the token itself should not be a desirable object. One teacher used baseball cards as tokens, but the students spent so much time interacting with the tokens (e.g., reading about players) that the tokens distracted students from the purpose of the system.

For some students, the object of an obsession can serve as a token reinforcer (Charlop-Christy & Haymes, 1998). In their study, three children with autism who attended an after-school program served as participants. All three children were consistently off task during activities, preoccupied with some objects, and engaged in self-stimulatory behaviors. During baseline, the students earned stars for appropriate behavior. Inappropriate behaviors or incorrect responses were addressed by saying “try again” or “no.” When the students earned five stars, the tokens could be exchanged for backup reinforcers (e.g., food, pencils, tablets). During the token condition, an “obsession” object (one the children had previously been preoccupied with) was used as the token. Once they had earned five obsession objects, children could exchange them for food items or other known reinforcers. Charlop-Christy and Haymes reported that the overall pattern of responding demonstrated that when the token was an obsession object, student performance improved.

Identifying Target Behaviors and Rules

The criteria presented earlier also apply to the selection and definition of rules and target behaviors for a token economy. Generally, the guidelines for selecting behaviors for a token economy include (a) selecting only measurable and observable behaviors; (b) specifying criteria for successful task completion; (c) starting with a small number of behaviors, including some that are easy for the individual to accomplish; and (d) being sure the individual possesses the prerequisite skills for any targeted behaviors (Myles, Moran, Ormsbee, & Downing, 1992).

After rules and behaviors that apply to everyone are defined, then criteria and behaviors specific to individual learners should be established. Many token economy failures can be traced to requiring the same behaviors and setting the same criteria for all learners. Token economies usually need to be individualized. For example, in a classroom setting the teacher may want to select different behaviors for each student. Or perhaps the token economy should not be applied to all students in the classroom. Perhaps only the lowest functioning students in the classroom should be included. However, students who do not need a token system should still continue to receive other forms of reinforcement.

Selecting a Menu of Backup Reinforcers

Most token economies can use naturally occurring activities and events as backup reinforcers. For example, in a classroom or school setting, tokens can be used to buy time with popular games or materials, or they can be exchanged for favorite classroom jobs such as office messenger, paper passer, teacher assistant, or media operator. Tokens can also be used for schoolwide privileges such as a library or study hall pass; a special period (e.g., physical education) with another class; or special responsibilities such as school patrol, cafeteria monitor, or tutor. Higgins and colleagues (2001) used naturally occurring activities and events as backup reinforcers for a token economy in their study (i.e., the students had access to computer games and leisure books). However, play materials, hobby-type games, snacks, television time, allowance, permission to go home or downtown, sports events, and coupons for gifts or special clothing could also be used as backup reinforcers because these objects or items tend to occur in many settings.

If naturally occurring activities and events fail, then backup items not ordinarily present in a particular program can be considered (e.g., pictures of movie or sports stars, CDs or DVDs, magazines, or edibles). Such items should generally be considered only when more naturally occurring activities have proven ineffective. Using the least intrusive and most naturally occurring reinforcers is recommended.

Selection of backup reinforcers should follow consideration of ethical and legal issues, as well as state and local education agency policies. Token reinforcement contingencies that would deny the learner basic needs (e.g., food) or access to personal or privileged information or events (e.g., access to mail, telephone privileges, attending religious services, medical care, etc.) should not be used. Furthermore, general comforts that are associated with basic rights afforded to all citizens should not be used in a token program (e.g., clean clothing, adequate heating, ventilation, hot water).

Establishing a Ratio of Exchange

Initially, the ratio between the number of tokens earned and the price of backup items should be small to provide immediate success for learners. Thereafter, the ratio of exchange should be adjusted to maintain the responsiveness of the participants. Following are several general guidelines for establishing the ratio between earned tokens and the price of backup items:

1. Keep initial ratios low.

2. As token-earning behaviors and income increase, increase the cost of backup items, devalue tokens, and increase the number of backup items.

3. With increased earnings, increase the number of luxury backup items.

4. Increase the prices of necessary backup items more than those of luxury items.
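Guidelines 2 and 4 amount to a price-adjustment rule. The scaling factors in this sketch are illustrative assumptions, not values from the chapter:

```python
def adjust_prices(menu, earnings_growth, necessary_items, necessity_factor=1.5):
    """Scale backup-item prices as token income grows (guideline 2),
    raising prices of 'necessary' items faster than luxury prices
    (guideline 4). Factors are illustrative, not prescribed values."""
    adjusted = {}
    for item, price in menu.items():
        factor = earnings_growth
        if item in necessary_items:
            factor *= necessity_factor   # necessary items rise more
        adjusted[item] = max(1, round(price * factor))
    return adjusted
```

For example, if weekly token income doubles, a two-token “necessary” item would rise to six tokens while a one-token luxury item rises to two.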

Myles and colleagues (1992) provided guidelines for establishing and maintaining a token economy, including the distribution and redemption of tokens. The next section addresses frequently asked questions related to tokens.

What procedure will be used to dispense tokens? If objects such as tally marks or holes punched in a card are selected as the tokens, how the learner will receive them is obvious. If objects such as coupons or poker chips are used, there should be some container for storing the accumulated tokens before they are exchanged for the backup items. Some practitioners have learners construct individual folders or containers for storing their tokens. Another suggestion is to deposit the tokens through slots cut in the plastic tops of coffee cans. With younger learners, tokens can be chained to form a necklace or bracelet.

How will the tokens be exchanged? A menu of the backup items should be provided with a given price for each item. Learners can then select from the menu. Many teachers have a table store with all the items displayed (e.g., games, balloons, toys, certificates for privileges). To avoid noise and confusion at shopping time, individual orders can be filled by writing in or checking off the items to be purchased. Those items are then placed in a bag with the order form stapled to the top and are returned to the purchaser. Initially, the store should be open frequently, perhaps twice per day. Lower functioning learners may need more frequent exchange periods. Later, exchange periods might be available only on Wednesdays and Fridays, or only on Fridays. As quickly as possible, token exchange should occur on an intermittent basis.

Writing Procedures to Specify What Happens if Token Requirements Are Not Met

Occasionally, token requirements will not be met for one reason or another. One approach is to nag the individual: “You didn’t do your homework. You know your homework must be completed to earn tokens. Why didn’t you do it?” A better approach is a matter-of-fact restatement of the contingency: “I’m sorry. You haven’t enough tokens to exchange at this time. Try again.” It is important to know whether the individual has the skills required to earn tokens. A learner should always be able to meet the response requirements.

What Should Be Done When a Learner Tests the System? How should a practitioner respond when a learner says she doesn’t want any tokens or backup items? One approach is to argue, debate, or cajole the learner. A better approach is to say something neutral (e.g., “That is your decision”) and then walk away, precluding any argument or debate. In this way a confrontation is avoided, and the occasion remains set for token delivery for the learner. Most learners can and should have input in selecting the backup items, generating the rules for the economy, establishing the price for the backup items, and performing general duties in managing the system. A learner can be a salesperson for the store or a bookkeeper to record who has how many tokens and what items are purchased. When learners are involved and their responsibilities for the economy are emphasized, they are less likely to test the system.

Will the Token Economy Include a Response Cost Procedure? Most token economies do include a token loss contingency for inappropriate behaviors and rule infractions (Musser et al., 2001). Any behaviors subject to response cost should be defined and stated clearly in the rules. Learners need to be aware of what actions will result in token loss and how much the behavior will cost. The more serious the inappropriate behavior is, the greater the token loss should be. Clearly, fighting, acting out, or cheating should result in greater token loss than minor infractions (e.g., out-of-seat behavior or talk-outs). Token loss should never be applied to a behavior if the learner does not have tokens. Students should not be allowed to go into debt, which would likely decrease the reinforcement value of the tokens. A learner should always earn more tokens than she loses.
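The no-debt rule can be made explicit in a fine procedure. Clamping a large fine at the learner’s balance, rather than carrying a negative balance, is one reading of the rule and is an assumption here:

```python
def apply_response_cost(balance: int, fine: int) -> int:
    """Deduct tokens for a rule infraction without allowing debt."""
    if balance <= 0:
        return balance             # no tokens held: the loss is not applied
    return max(0, balance - fine)  # never drive the balance negative
```

A learner holding five tokens who is fined two keeps three; a learner holding one token who is fined three drops to zero, not to minus two.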

Field-Testing the System

The final step before actually implementing a token system is to field-test it. For 3 to 5 days, token delivery is tallied exactly as if tokens were being earned, but no tokens are actually awarded during the field test. Data from the field test are used for assessment. Are learners actually deficient in the targeted skills? Are some learners demonstrating mastery of behaviors targeted for intervention? Are some learners not receiving tokens? Based on answers to questions such as these, final adjustments in the system can be made. For some learners, more difficult behaviors may need to be defined; others may need less demanding target behaviors. Perhaps more or fewer tokens need to be delivered relative to the price of the backup reinforcers.

Implementing a Token Economy

Initial Token Training

The manner in which initial training is conducted to implement a token economy depends on the functioning level of the learners. For high-functioning learners and those with mild disabilities, initial training might require minimal time and effort and consist primarily of verbal instructions or modeling. Usually the initial token training for these individuals can be accomplished in one 30- to 60-minute session. Three steps are normally sufficient. First, an example of the system should be given. The practitioner might describe the system as follows:

This is a token and you can earn it by [specify behavior]. I will watch your behavior; and when you accomplish [specify behavior], you will earn a token. Also, as you continue [specify behavior], you will earn more tokens. At [specify time period] you will be able to exchange the tokens you have earned for whatever you want and can afford on this table. Each item is marked with the number of tokens needed for purchase. You can spend only the tokens you have earned. If you want an item that requires more tokens than you have earned, you will have to save your tokens over several [specify time periods].

The second step is to model the procedure for token delivery. For instance, each learner might be directed to emit the specified behavior. Immediately following the occurrence of the behavior, the learner should be praised (e.g., “Enrique, I’m pleased to see how well you are working by yourself!”) and the token delivered.

The third step is to model the procedure for token exchange. Learners should be taken to the store and shown the items for purchase. All learners should already have one token, which was acquired during the modeling of token delivery. At this time, several items should be able to be purchased for one token (the price may go up later)—a game, 5 minutes of free time, a pencil-sharpening certificate, or teacher helper privilege. Students should actually use their tokens in this exchange. Lower functioning learners may require several sessions of initial token training before the system is functional for them. Further response prompts may be needed.

Ongoing Token Training

During token reinforcement training, the practitioner and the students should follow the guidelines for effective use of reinforcement. For example, tokens should be dispensed contingently and immediately after the occurrence of the desired behavior. Procedures for delivery and exchange should be clear and should be followed consistently. If a booster session is needed to improve student understanding of how tokens are earned and exchanged, practitioners should conduct it early in the program. Finally, the focus should be on building and increasing desirable behaviors through token delivery rather than decreasing undesirable behaviors through response cost.

As part of the overall training, the behavior analyst may choose to take part in the token economy as well. For example, the analyst could pinpoint a personal behavior to be increased and then model how to act if her performance of that behavior does not meet the criterion to earn a token, how to save tokens, and how to keep track of progress. After 2 to 3 weeks, a revision in the token economy system may be needed. It is usually desirable to have learners discuss behaviors they want to change, the backup items they would like to have available, or the schedule of exchange. If some participants rarely earn tokens, a simpler response or prerequisite skill may be necessary. On the other hand, if some participants always earn all possible tokens, requirements may need to be changed to a more complex skill.

Management Issues During Implementation

Students must be taught how to manage the tokens they earn. For instance, once received, tokens should be placed in a safe but accessible container so that they are out of the way, but readily available when needed. If the tokens are in clear view and readily accessible, some students might play with them at the expense of performing academic tasks assigned by the teacher. Also, placing the tokens in a secure location reduces the risk of other students counterfeiting or stealing them. Preemptive measures should be taken to ensure that tokens are not easily counterfeited or reachable by anyone other than the recipient. If counterfeiting or stealing occurs, switching to different tokens will help reduce the likelihood of these tokens being exchanged under false pretenses.

Another management issue relates to students’ token inventories. Some students may hoard their tokens and not exchange them for backup reinforcers. Other students may try to exchange their tokens for a backup reinforcer, but they lack the requisite number of tokens to do so. Both extremes should be discouraged. That is, students should be required to exchange at least some of their earned tokens periodically, and students without the requisite number of tokens should not be permitted to participate in an exchange; they should not be permitted to buy backup reinforcers on credit.

A final management issue relates to chronic rule breakers or students who test the system at every turn. Practitioners can minimize this situation by (a) ensuring that the token does serve as a generalized conditioned reinforcer, (b) conducting a reinforcer assessment to determine that the backup reinforcers are preferred by the students and function as reinforcers, and (c) applying response cost procedures for chronic rule breakers.

Withdrawing the Token Economy

Strategies for promoting generalization and maintenance of target behaviors to settings in which a token or level system is not used should be considered in the design and implementation of a token or level system. Before applying the initial token program, analysts should plan how they will remove the program. One goal of the token program should be to have the descriptive verbal praise that is delivered simultaneously with the token acquire the reinforcing capability of the token. From the beginning, a systematic goal of the token economy should be to withdraw the program. Such an approach, aside from having functional utility for practitioners (i.e., they will not have to issue tokens forever), also has advantages for the learner. For example, if a special education teacher is using a token economy with a student scheduled for full-time placement in a regular fourth-grade classroom, the teacher wants to be certain that the student’s responses can be maintained in the absence of the token economy. It is unlikely that the student would encounter a similar token system in the general education classroom.

Various methods have been used to withdraw token reinforcers gradually after criterion levels of behavior have been reached. The following six guidelines allow the practitioner to develop, and later withdraw, token reinforcers effectively. First, the token presentation should always be paired with social approval and verbal praise. This should increase the reinforcing effect of the social approval and serve to maintain behaviors after token withdrawal.

Second, the number of responses required to earn a token should be gradually increased. For instance, if a student initially receives a token after reading only one page, he should later be required to read more pages for token delivery.

Third, the duration the token economy is in effect should be gradually decreased. For example, during September the system might be in effect all day; in October the time might be 8:30 A.M. to 12:00 P.M. and 2:00 P.M. to 3:00 P.M.; and in November, 8:30 A.M. to 10:00 A.M. and 2:00 P.M. to 3:00 P.M. In December the times might be the same as those in November, but on only 4 days a week, and so on.

Fourth, the number of activities and privileges that serve as backup items and are likely to be found in the untrained setting should be increased gradually. For example, the analyst should start taking away tangible items in the store that might not be present in the regular classroom. Are edibles available at the store? They are usually not available for reinforcement in regular classrooms. Gradually, items should be introduced that would be common in a regular class (e.g., special award sheets, gold stars, positive notes home).

Fifth, the price of more desirable items should be increased systematically while keeping a very low price on less desirable items for exchange. For example, in a token system with adolescent girls with moderate to severe mental retardation, the price of candy bars and trips to the canteen and grooming aids (e.g., comb, deodorant) was initially about the same. Slowly, the cost of items such as candy bars was increased to such a high level that the girls no longer saved tokens to purchase them. More girls used their tokens to purchase grooming aids, which cost substantially less than candy.

Sixth, the physical evidence of the token should be faded over time. The following sequence illustrates how the physical evidence of the token can be faded.

• Learners earn physical tokens such as poker chips or washers.

• The physical tokens are replaced with slips of paper.

• The slips of paper are replaced with tally marks on an index card that is kept by the learners.

• In a school setting the index card can now be taped to the learner's desk.

• The index card is removed from the learner and is kept by the analyst, but participants can check their balance at any time.

• The analyst keeps tallies, but no checking is allowed during the day. Totals are announced at the end of the day, and then every other day.

• The token system is no longer operative. The behavior analyst does not announce point totals even though they are still kept.

Evaluating the Token Economy

Token economies can be evaluated using any number of reliable, valid, field-tested, best practice designs. Given that most token economy programs are conducted with small groups, we recommend that single-subject evaluation designs be used such that the participant serves as his or her own control. Further, we suggest that social validation data on the target participants and significant others who come into contact with the person be collected before, during, and after token intervention.

Reasons for the Effectiveness of the Token Economy

A token economy is often effective in applied settings for three reasons. First, tokens bridge the time gap between the occurrence of a behavior and delivery of a backup reinforcer. For example, a token may be earned during the afternoon, but the backup reinforcer is not awarded until the next morning. Second, tokens bridge the setting gap between the behavior and the delivery of the backup reinforcer. For instance, tokens earned at school could be exchanged for reinforcers at home, or tokens earned in a general education classroom in the morning could be exchanged in a special education classroom in the afternoon. Finally, as generalized conditioned reinforcers, tokens make the management of motivation less critical for the behavior analyst.
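The mechanics this section describes (tokens dispensed for target behaviors and later exchanged for backup reinforcers at set prices) might be modeled in a minimal bookkeeping sketch. This is an illustration only, not a procedure from the text; all behavior names, menu items, and prices are hypothetical:

```python
class TokenEconomy:
    """Minimal token-economy bookkeeping: target behaviors earn tokens at a
    set ratio of exchange, and tokens buy backup reinforcers from a menu."""

    def __init__(self, earn_rules, menu):
        self.earn_rules = earn_rules  # target behavior -> tokens dispensed
        self.menu = menu              # backup reinforcer -> token price
        self.balance = 0

    def record(self, behavior):
        """Dispense tokens only for listed target behaviors."""
        self.balance += self.earn_rules.get(behavior, 0)

    def exchange(self, item):
        """Exchange tokens for a backup reinforcer if the balance covers it."""
        price = self.menu[item]
        if self.balance < price:
            return False
        self.balance -= price
        return True

economy = TokenEconomy(
    earn_rules={"completes assignment": 2, "raises hand to speak": 1},
    menu={"5 minutes free time": 3, "sticker": 1},
)
economy.record("completes assignment")
economy.record("raises hand to speak")
print(economy.exchange("5 minutes free time"))  # True; balance is now 0
```

Because the token is only an intermediary, the time and setting gaps the text describes fall out naturally: `record` and `exchange` can happen at different times and in different places.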

Further Considerations

Intrusive. Token systems can be intrusive. It takes time, energy, and resources to establish, implement, and evaluate token programs. Also, because most natural environments do not reinforce a person's behavior with tokens, careful thought must be given to how to thin the token schedule while simultaneously maintaining performance. In any case, token economy programs can have lots of “moving parts,” and practitioners must be prepared to deal with them.

Self-Perpetuating. A token economy can be an effective procedure for managing behavior, and analysts can be so encouraged by the results that they do not want to remove the system. Learners then continue working for reinforcement that is not normally available in the natural environment.

Cumbersome. Token economies can be cumbersome to implement, especially if there are multiple participants with multiple schedules of reinforcement. The system may require additional time and effort from the learner and the behavior analyst.

Federal Mandates. When tokens are introduced within the context of a level system, practitioners should exercise caution that the explicit and uniform requirements for students to earn a specific number of tokens before progressing to the next level do not violate the spirit or intent of federal mandates that call for individualized programs. Scheuermann and Webber (1996) suggested that tokens and other programs embedded within level systems be individualized and that self-management techniques be combined with the level system to increase the likelihood of a successful inclusion program.

Group Contingencies

Thus far in the text we have focused primarily on how contingencies of reinforcement can be applied to change the future frequency of certain behaviors of individual persons. Applied research has also demonstrated how contingencies can be applied to groups, and behavior analysts have increasingly turned their attention toward group contingencies in areas such as leisure activities for adults (Davis & Chittum, 1994), schoolwide applications (Skinner, Skinner, Skinner, & Cashwell, 1999), classrooms (Brantley & Webster, 1993; Kelshaw-Levering, Sterling-Turner, Henry, & Skinner, 2000; Skinner, Cashwell, & Skinner, 2000), and playgrounds (Lewis, Powers, Kelk, & Newcomer, 2002). Each of these applications has shown that group contingencies, properly managed, can be an effective and practical approach to changing the behavior of many people simultaneously (Stage & Quiroz, 1997).

Definition of a Group Contingency

A group contingency is one in which a common consequence (usually, but not necessarily, a reward intended to function as reinforcement) is contingent on the behavior of one member of the group, the behavior of part of the group, or the behavior of everyone in the group. Group contingencies can be classified as dependent, independent, or interdependent (Litow & Pumroy, 1975).
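The three classifications can be contrasted in a brief sketch. This is an illustration of the definitions above, not from the text; the names, scores, and criterion are invented:

```python
def independent(scores, criterion):
    """Each member who meets the criterion earns the consequence."""
    return {name for name, s in scores.items() if s >= criterion}

def dependent(scores, criterion, targets):
    """The whole group earns the consequence only if the target member(s)
    meet the criterion."""
    met = all(scores[t] >= criterion for t in targets)
    return set(scores) if met else set()

def interdependent(scores, criterion):
    """The whole group earns the consequence only if every member meets
    the criterion (a dependent contingency targeting everyone)."""
    return dependent(scores, criterion, list(scores))

scores = {"Ann": 9, "Ben": 7, "Cal": 4}
print(independent(scores, 5))          # Ann and Ben earn it
print(dependent(scores, 5, ["Ann"]))   # Ann meets it, so everyone earns it
print(interdependent(scores, 5))       # Cal falls short, so no one earns it
```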

Rationale for and Advantages of Group Contingency

There are a number of reasons for using a group contingency in applied settings. First, it can save time during administration (Skinner, Skinner, Skinner, & Cashwell, 1999). Instead of repeatedly administering a consequence to each member of a group, the practitioner can apply one consequence to all members of the group. From a logistical perspective, a practitioner's workload may be reduced. Group contingencies have been demonstrated to be effective in producing behavior change (Brantley & Webster, 1993). A group contingency can be effective and economical, requiring fewer practitioners or less time to implement.

Another advantage is that a practitioner can use a group contingency in a situation in which an individual contingency is impractical. For example, a teacher attempting to reduce disruptive behaviors of several students might have difficulty administering an individual program for each pupil in the classroom. A substitute teacher, in particular, might find the use of a group contingency a practical alternative because her knowledge of the students' previous histories of reinforcement would be limited, and the group contingency could be applied across a variety of behaviors, settings, or students.

A group contingency can also be used in cases in which the practitioner must resolve a problem quickly, as when serious disruptive behavior occurs. The practitioner might be interested not only in decreasing the disruptive behavior rapidly, but also in building improved levels of appropriate behavior (Skinner et al., 2000).

Furthermore, a practitioner can use a group contingency to capitalize on peer influence or peer monitoring because this type of contingency sets the occasion for peers to act as change agents (Gable, Arllen, & Hendrickson, 1994; Skinner et al., 1999). Admittedly, peer pressure can have a detrimental effect on some people; they may become scapegoats, and negative effects may surface (Romeo, 1998). However, potentially harmful or negative outcomes can be minimized by structuring the contingency elements randomly (Kelshaw-Levering et al., 2000; Poplin & Skinner, 2003).

Figure 9 An independent group contingency.

Figure 10 A dependent group contingency. (Three-term diagram: criterion stated for one individual or small group; criterion met; SR+ for whole group. E.g., “When all students at Table 2 finish their math assignments, the class will have 5 minutes of free time.”)

Practitioners can establish a group contingency to facilitate positive social interactions and positive behavioral supports within the group (Kohler, Strain, Maretsky, & DeCesare, 1990). For example, a teacher might establish a group contingency for a student or a group of students with disabilities. The students with disabilities might be integrated into the general education classroom, and a contingency could be arranged in such a way that the class would be awarded free time contingent on the performance of one or more of the students with disabilities.

Independent Group Contingency Applications

An independent group contingency is an arrangement in which a contingency is presented to all members of a group, but reinforcement is delivered only to those group members who meet the criterion outlined in the contingency (see Figure 9). Independent group contingencies are frequently combined with contingency contracting and token reinforcement programs because these programs usually establish reinforcement schedules independent of the performance of other members of the group.

Brantley and Webster (1993) used an independent group contingency in a general education classroom to decrease the disruptive behavior of 25 fourth-grade students. After collecting data on off-task behavior, call-outs, and out-of-seat behavior, the teachers posted rules related to paying attention, seeking the teachers' permission before talking, and remaining in their seats. An independent group contingency was established whereby each student could earn a check mark next to his or her name on a list that was posted publicly in the room during any of the intervals that marked the observation periods during the day. When a student emitted an appropriate or prosocial behavior, a check mark was registered. The criterion for earning a reward was increased from four to six check marks over 4 out of 5 days per week.

Results showed that after 8 weeks, the total number of combined disruptions (e.g., off-task behavior, call-outs, and out-of-seat behavior) decreased by over 70%, and some off-task behaviors (e.g., not keeping hands to self) were eliminated completely. The teacher's satisfaction with the approach was positive, and parents reported that they were able to understand the procedures that were in place for their children at school. Brantley and Webster (1993) concluded:

The independent contingency added structure for students by using clear time intervals and clarified teacher expectations by limiting and operationally defining rules to be followed, monitoring behavior consistently, and setting attainable criteria for students. (p. 65)

Dependent Group Contingency Applications

Under a dependent group contingency the reward for the whole group is dependent on the performance of an individual student or small group. Figure 10 illustrates the dependent group contingency as a three-term contingency. The contingency operates like this: If an individual (or small group within the total group) performs a behavior to a specific criterion, the group shares the reinforcer. The group's access to the reward depends on the individual's (or small group's) performance. If the individual performs below the criterion, the reward is not delivered. When an individual, or small group, earns a reward for a class, the contingency is sometimes referred to as the hero procedure.

Figure 11 An interdependent group contingency. (Three-term diagram: criterion stated for all group members; criterion met; SR+ for whole group if all members of the group achieve criterion. E.g., “Each student must complete at least four science projects by the 6th week in the term in order for the class to go on the field trip.”)

According to Kerr and Nelson (2002), the hero procedure can facilitate positive interactions among students because the class as a whole benefits from the improved behavior of the student targeted for the group contingency.

Gresham (1983) conducted a dependent group contingency study in which the contingency was applied at home, but the reward was delivered at school. In this study an 8-year-old boy who was highly destructive at home (e.g., set fires, destroyed furniture) earned good notes for nondestructive behavior at home. Billy received a good note—a daily report card—each day that no destructive acts took place. Each note was exchangeable for juice, recess, and five tokens at school the next day. After Billy received five good notes, the whole class received a party, and Billy served as the host. Gresham reported that the dependent group contingency reduced the amount of destructive behavior and represented the first application of a dependent group contingency in a combined home–school setting.

Allen, Gottselig, and Boylan (1982) used an interesting variation of the dependent group contingency. In their study eight disruptive third-graders from a class of 29 students served as target students. On the first day of intervention the teacher posted and explained classroom rules for hand raising, leaving a seat, disturbing others, and getting help. Contingent on reduced amounts of disruptive behavior during 5-minute intervals in math and language arts, the class earned 1 extra minute of recess time. If a disruptive behavior occurred during the 5-minute interval, the teacher cited the disrupter for the infraction (e.g., “James, you disturbed Sue”) and reset the timer for another 5-minute interval. The teacher also posted the accumulated time on an easel in full view of the class. The results indicated that reduced disruptive behavior occurred under the dependent group contingency.

Interdependent Group Contingencies

An interdependent group contingency is one in which all members of a group must meet the criterion of the contingency (individually and as a group) before any member earns the reward (Elliot, Busse, & Shapiro, 1999; Kelshaw-Levering et al., 2000; Lewis et al., 2002; Skinner et al., 1999; Skinner et al., 2000). Theoretically, interdependent group contingencies have a value-added advantage over dependent and independent group contingencies insofar as they yoke students to achieve a common goal, thereby capitalizing on peer pressure and group cohesiveness.

The effectiveness of dependent and interdependent group contingencies may be enhanced by randomly arranging some or all of the components of the contingency (Poplin & Skinner, 2003). That is, randomly selected students, behaviors, or reinforcers are targeted for the contingency (Kelshaw-Levering et al., 2000; Skinner et al., 1999). Kelshaw-Levering and colleagues (2000) demonstrated that randomizing either the reward alone or multiple components of the contingency (e.g., students, behaviors, or reinforcers) was effective in reducing disruptive behavior.
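Randomizing the contingency's components can be pictured in a short sketch (illustrative only; the student names, behaviors, and rewards in the lists are invented):

```python
import random

def todays_contingency(students, behaviors, rewards, rng=random):
    """Randomly select the student, target behavior, and reward that define
    the contingency, so participants cannot predict any component."""
    return {
        "student": rng.choice(students),
        "behavior": rng.choice(behaviors),
        "reward": rng.choice(rewards),
    }

picked = todays_contingency(
    ["Ann", "Ben", "Cal"],
    ["hand raising", "staying in seat", "completing work"],
    ["extra recess", "free time", "positive note home"],
)
print(picked)
```

Because no one knows in advance whose behavior will be checked or what the payoff will be, every member has reason to meet every expectation, which is the rationale the cited studies describe.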

Procedurally, an interdependent group contingency can be delivered (a) when the group as a whole meets the criterion, (b) when the group achieves a mean group score, or (c) based on the results of the Good Behavior Game or the Good Student Game. In any case, interdependent group contingencies represent an “all or none” arrangement. That is, all students earn the reward or none of them do (Poplin & Skinner, 2003). Figure 11 illustrates the interdependent group contingency as a three-term contingency.

Total Group Meets Criterion

Lewis and colleagues (2002) used the total group meets criterion variation to reduce the problem playground behaviors of students enrolled in a suburban elementary school. After a faculty team conducted an assessment of problematic playground behaviors, social skill instruction in the classroom and on the playground was coupled with a group contingency. During social skills instruction, students learned how to get along with friends, cooperate with each other, and be kind. During the group contingency, students earned elastic loops that they could affix to their wrists. After recess, students placed the loops in a can on the teacher's desk. When the can was full, the group earned a reinforcer.

Figure 12 Frequency of problem behaviors across recess periods. Recess 1 was composed of second- and fourth-grade students, Recess 2 was composed of first- and third-grade students, and Recess 3 was composed of fifth- and sixth-grade students. Kindergarten students were on the playground across Recess 1 and 2.
From “Reducing Problem Behaviors on the Playground: An Investigation of the Application of School-Wide Positive Behavior Supports” by T. J. Lewis, L. J. Powers, M. J. Kelk, and L. L. Newcomer, 2002, Psychology in the Schools, 39(2), p. 186. Copyright 2002 by Wiley Periodicals, Inc. Reprinted by permission of John Wiley & Sons, Inc.

Figure 12 shows the results of the social skills plus group contingency intervention across three recess periods during the day.

Group Averaging

Baer and Richards (1980) used a group averaging interdependent group contingency to improve the math and English performance of 5 elementary-aged students. In their study all of the students in a class of 10, including the 5 target students, were told that they would earn 1 extra minute of recess for each point of class improvement beyond the previous weekly average. Also, all students were given a contract stating this same contingency. The extra recess was awarded every day of the following week. For example, if the students' weekly averages exceeded their previous weekly averages by 3 points, they would receive 3 minutes of extra recess every day of the following week. The results of this 22-week study showed that all students improved when the group contingency was in effect. Anecdotal data indicated that all students participated and received extra recess time during the course of the study.
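The arithmetic of this arrangement can be sketched in a few lines (the averages below are invented for illustration and are not the study's data):

```python
def extra_recess_minutes(this_week_avg, last_week_avg):
    """One extra minute of daily recess per point of class improvement over
    the previous weekly average; no improvement means no extra recess."""
    return max(0, int(this_week_avg - last_week_avg))

# A class average 3 points above last week's earns 3 extra minutes
# of recess every day of the following week.
print(extra_recess_minutes(78, 75))  # 3
print(extra_recess_minutes(73, 75))  # 0
```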

Good Behavior Game

Barrish, Saunders, and Wolf (1969) used the Good Behavior Game to describe an interdependent group contingency in which a group is divided into two or more teams. Prior to the game being played, the teams are told that whichever team has the fewest marks against it at the end of the game will earn a privilege. Each team is also told that it can win a privilege if it has fewer than a specified number of marks (a DRL schedule). Data reported by the authors show that this strategy can be an effective method of reducing disruptive behavior in the classroom. When game conditions were in effect during math or reading, talking-out and out-of-seat behaviors occurred at low levels. When game conditions were not in effect, disruptive behaviors occurred at much higher levels (see Figure 13).
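The game's winning rule (fewest marks, or any team under the DRL criterion) can be sketched as follows; the team names and mark counts are invented:

```python
def winning_teams(marks, drl_criterion):
    """A team earns the privilege if it has the fewest marks, or if its
    marks fall below the DRL criterion; under the DRL rule, every team
    can win at once."""
    fewest = min(marks.values())
    return {team for team, count in marks.items()
            if count == fewest or count < drl_criterion}

print(winning_teams({"Team A": 3, "Team B": 7}, drl_criterion=5))  # only Team A
print(winning_teams({"Team A": 3, "Team B": 4}, drl_criterion=5))  # both teams
```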

Figure 13 Percentage of 1-minute intervals containing talking-out and out-of-seat behaviors in a classroom of 24 fourth-grade children during math and reading periods.
From “Good Behavior Game: Effects of Individual Contingencies for Group Consequences on Disruptive Behavior in a Classroom” by H. H. Barrish, M. Saunders, and M. M. Wolf, 1969, Journal of Applied Behavior Analysis, 2, p. 122. Copyright 1969 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

In the Good Behavior Game teacher attention is directed toward observing and recording occurrences of misbehavior, with the incentive that if one or more teams have fewer than the criterion number of infractions, a reinforcer is delivered. The advantage of the Good Behavior Game is that competition occurs within teams and against the criterion, not across groups.

Good Student Game

The Good Student Game combines an interdependent group contingency (like the Good Behavior Game) with self-monitoring tactics (Babyak, Luze, & Kamps, 2000). Basically, the Good Student Game is intended for implementation during independent seatwork periods when problematic behaviors surface. In the Good Student Game, the teacher (a) chooses target behaviors to modify, (b) determines goals and rewards, and (c) determines whether group or individual monitoring (or both) will occur.

Students are trained in the Good Student Game using a model-lead-test instructional sequence: students are arranged in clusters of four to five, the target behaviors are defined, examples and nonexamples are provided, practice occurs under teacher supervision, and one or more students record their own or the group's performance. Table 2 shows how the Good Student Game compares to the Good Behavior Game. Note that two distinctions relate to target behaviors and to the delivery of rewards and feedback.


Table 2 Components of the Good Behavior Game and the Good Student Game

Organization. Good Behavior Game: students play in teams. Good Student Game: students play in teams or as individuals.

Management. Good Behavior Game: teacher monitors and records behavior. Good Student Game: students self-monitor and record their behavior.

Target behaviors. Good Behavior Game: behaviors are stated as rule breaking or rule following. Good Student Game: behaviors are stated as rule following.

Recording. Good Behavior Game: teacher records incidents of rule-breaking behaviors as they occur. Good Student Game: students record rule-following behaviors on a variable-interval schedule.

System of reinforcement. Good Behavior Game: positive. Good Student Game: positive.

Criterion for reinforcement. Good Behavior Game: teams must not exceed a set number of rule-breaking behaviors. Good Student Game: groups or individuals achieve or exceed a set percentage of rule-following behaviors.

Delivery of reinforcement. Good Behavior Game: dependent on group performance. Good Student Game: dependent on individual or group performance.

Feedback. Good Behavior Game: teacher provides feedback when rule-breaking behaviors occur. Good Student Game: teacher provides feedback at intervals; praise and encouragement are presented during the game to reinforce positive behaviors.

From “The Good Student Game: Behavior Management for Diverse Classrooms” by A. E. Babyak, G. J. Luze, and D. M. Kamps, 2000, Intervention in School and Clinic, 35(2), p. 217. Copyright 2000 by PRO-ED, Inc. Reprinted by permission.

Implementing a Group Contingency

Implementing a group contingency requires as much preplanning as any other behavior change procedure. Presented here are six guidelines to follow before and during the application of a group contingency.

Choose an Effective Reward

One of the most important aspects of a group contingency is the strength of the consequence; it must be strong enough to serve as an effective reward. Practitioners are advised to use generalized conditioned reinforcers or reinforcer menus at every opportunity. Both of these strategies individualize the contingency, thereby increasing its power, flexibility, and applicability.

Determine the Behavior to Change and Any Collateral Behaviors That Might Be Affected

Let us suppose a dependent group contingency is established in which a class receives 10 minutes of extra free time, contingent on the improved academic performance of a student with developmental disabilities. Obviously, the teacher will need to collect data on the student's academic performance. However, data might also be collected on the number of positive interactions between the student and her classmates without disabilities within and outside the room. An additional benefit of using a group contingency might be the positive attention and encouragement that the student with developmental disabilities receives from her classmates.

Set Appropriate Performance Criteria

If a group contingency is used, the persons to whom the contingency is applied must have the prerequisite skills to perform the specified behaviors. Otherwise, they will not be able to meet the criterion and could be subject to ridicule or abuse (Stolz, 1978).

According to Hamblin, Hathaway, and Wodarski (1971), the criteria for a group contingency can be established by using the average-, high-, or low-performance levels of the group as the standard. In an average-performance group contingency, the mean performance of the group is averaged, and reinforcement is contingent on the achievement of that mean score or a higher score. If the average score for a math exercise were 20 correct problems, a score of 20 or more would earn the reward. In a high-performance group contingency, the high score determines the level of performance needed to receive the reward. If the high score on a spelling test was 95%, only students achieving a score of 95% would receive the reward. In the low-performance group contingency, the low performance score determines the reinforcer. If the low score on a social studies term paper were C, then students with a C or better would receive the reinforcer.
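The three ways of deriving the criterion can be sketched as follows. The scores are invented; this illustrates the Hamblin and colleagues standards, not their data:

```python
def group_criterion(scores, mode):
    """Derive the reinforcement criterion from the group's own performance:
    the mean score, the highest score, or the lowest score."""
    values = list(scores.values())
    if mode == "average":
        return sum(values) / len(values)
    if mode == "high":
        return max(values)
    if mode == "low":
        return min(values)
    raise ValueError(f"unknown mode: {mode}")

def reward_earners(scores, mode):
    """Members scoring at or above the criterion receive the reward."""
    c = group_criterion(scores, mode)
    return {name for name, s in scores.items() if s >= c}

scores = {"Ann": 25, "Ben": 20, "Cal": 15}
print(reward_earners(scores, "average"))  # Ann and Ben (mean is 20)
print(reward_earners(scores, "high"))     # Ann only
print(reward_earners(scores, "low"))      # everyone
```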

Hamblin and colleagues (1971) indicated that differential effects can be noted with these performance contingencies. Their data show that slow students performed worse under a high-performance contingency, whereas gifted students performed best under this contingency. The data of Hamblin and colleagues suggest that a group contingency can be effective in improving behavior, but it should be applied with the understanding that effectiveness may vary for different members of the group.

Combine with Other Procedures When Appropriate

According to LaRowe, Tucker, and McGuire (1980), a group contingency can be used alone or in combination with other procedures to systematically change performance. The LaRowe and colleagues study was designed to reduce the excessive noise levels in an elementary school lunchroom; their data suggest that differential reinforcement of low rates of behavior (DRL) can be incorporated easily into a group contingency. In situations in which higher levels of group performance are desired, differential reinforcement of high rates of behavior (DRH) can be used. With either DRL or DRH the use of a changing criterion design may facilitate the analysis of treatment effects.

Select the Most Appropriate Group Contingency

The selection of a specific group contingency should be based on the overall programmatic goals of the practitioner, the parents (if applicable), and the participants whenever possible. For instance, if the group contingency is designed to improve the behavior of one person or a small group of individuals, then perhaps a dependent group contingency should be employed. If the practitioner wants to differentially reinforce appropriate behavior, then an independent group contingency should be considered. But if the practitioner wants each individual within a group to perform at a certain level, then an interdependent group contingency should be chosen. Regardless of which group contingency is selected, the ethical issues must be addressed.

Monitor Individual and Group Performance

With a group contingency practitioners must observe both group and individual performance. Sometimes the group's performance improves but some members within the group do not improve, or at least do not improve as fast. Some members of the group might even attempt to sabotage the group contingency, preventing the group from achieving the reinforcement. In these cases individual contingencies should be arranged for the saboteurs, in combination with the group contingency.

Summary

Contingency Contracting

1. A contingency contract, also called a behavioral contract, is a document that specifies a contingent relation between the completion of a specified behavior and access to, or delivery of, a specified reward such as free time, a letter grade, or access to a favorite activity.

2. Every contract has three major parts: a description of the task, a description of the reward, and the task record. Under task, who, what, when, and how well should be specified. Under reward, who, what, when, and how much should be specified. A task record provides a place for recording the progress of the contract and providing interim rewards.

3. Implementing a contract involves a complex package intervention of related positive and negative reinforcement contingencies and rule-governed behaviors that operate alone and together.

4. Contracting has been used widely in classroom, home, and clinical settings.

5. Contracts have also been used to teach self-management to children.

6. A self-contract is a contingency contract that an individual makes with herself, incorporating a self-selected task and reward as well as self-monitoring of task completion and self-delivery of the reward.

Token Economy

7. A token economy is a behavior change system consisting of three major components: (a) a specified list of target behaviors to be reinforced; (b) tokens or points that participants receive for emitting the target behaviors; and (c) a menu of items or activities, privileges, and backup reinforcers from which participants choose and obtain by exchanging tokens they have earned.

8. Tokens function as generalized conditioned reinforcers because they have been paired with a wide variety of backup reinforcers.

9. A level system is a type of token economy in which participants move up or down a hierarchy of levels contingent on meeting specific performance criteria with respect to the target behaviors.

10. There are six basic steps in designing a token economy: (1) selecting the tokens that will serve as a medium of exchange; (2) identifying the target behaviors and rules; (3) selecting a menu of backup reinforcers; (4) establishing a ratio of exchange; (5) writing procedures to specify when and how tokens will be dispensed and exchanged and what will happen if the requirements to earn a token are not met; and (6) field testing the system before full-scale implementation.

11. When establishing a token economy, decisions must be made regarding how to begin, conduct, maintain, evaluate, and remove the system.

Group Contingencies

12. A group contingency is one in which a common consequence is contingent on the behavior of one member of the group, the behavior of part of the group, or the behavior of everyone in the group.

13. Group contingencies can be classified as independent, dependent, and interdependent.

14. Six guidelines can assist the practitioner in implementing a group contingency: (a) choose a powerful reward, (b) determine the behavior to change and any collateral behaviors that might be affected, (c) set appropriate performance criteria, (d) combine with other procedures when appropriate, (e) select the most appropriate group contingency, and (f) monitor individual and group performance.

582

habit reversalmassed practiceself-control

self-evaluationself-instructionself-management

self-monitoringsystematic desensitization

Behavior Analyst Certification Board® BCBA® & BCABA®Behavior Analyst Task List,© Third Edition

Content Area 9: Behavior Change Procedures

9-27 Use self-management strategies.

© 2006 The Behavior Analyst Certification Board, Inc.,® (BACB®) all rights reserved. A current version ofthis document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document andquestions about this document must be submitted directly to the BACB.

Self-Management

Key Terms

From Chapter 27 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007by Pearson Education, Inc. All rights reserved.


Raylene used to be so forgetful, always failing to do the things she needed and wanted to do. Her days were just so busy! But Raylene is beginning to get a handle on her hectic life. For example, this morning a Post-It Note on her clothes closet door reminded her to wear her gray suit for her luncheon meeting. A note on the refrigerator prompted her to take her completed sales report to work. And when Raylene got into her car—sales report in hand and looking sharp in her gray suit—and sat on the library book she’d placed on the driver’s seat the night before, the chances were very good that she would return the book to the library that day and avoid another overdue fine.

Almost a year has passed since Daryl collected the last data point for his master’s thesis. It’s a solid study on a topic Daryl enjoys and believes to be important, but Daryl is struggling mightily to get his thesis written. Although Daryl knows that if he could sit down and write for an hour or two each day his thesis would get written, the size and difficulty of the task are daunting. He wishes his ability to sit and write were half as good as his ability to avoid those behaviors.

“Some people [self-manage their behavior] frequently and well, others do it rarely and poorly” (Epstein, 1997a, p. 547). Raylene has recently discovered self-management—behaving to alter the occurrence of other behaviors she wants to control—and feels on top of the world. Daryl, in sore need of some self-management, is feeling worse each passing week. This chapter defines self-management, identifies uses for self-management and the benefits of teaching and learning self-management skills, describes a variety of self-management tactics, and offers guidelines for conducting a successful self-management program. We begin with a discussion of the role of the self as a controller of behavior.

The “Self” as Controller of Behavior

A fundamental precept of radical behaviorism is that the causes of behavior are found in the environment (Skinner, 1974). Throughout the evolution of the human species, causal variables selected by the contingencies of survival have been transmitted through genetic endowment. Other causes of behavior can be found in the contingencies of reinforcement that describe behavior–environment interactions during the lifetime of an individual person. What role, then, if any, is left for the self to play?

Locus of Control: Internal or External Causes of Behavior

Proximate causes of some behaviors are apparent to anyone observing the events as they unfold. A mother picks up and cuddles her crying infant, and the baby stops crying. A highway worker leaps off the road when he sees a fast-approaching car not slowed by the warning signs. An angler casts his lure toward the spot where his previous cast produced a strike. A behavior analyst likely would suggest the involvement of escape, avoidance, and positive reinforcement contingencies, respectively, for these events. Although a nonbehaviorist might offer a mentalistic explanation for why the person in each scenario responded as he or she did (e.g., the infant’s crying triggered the mother’s nurturing instinct), most people, irrespective of their education and orientation, professional and layperson alike, would identify the same antecedent events—a baby cried, a speeding car approached, a fish struck the lure—as functional variables in the three scenarios. An analysis of the three episodes would reveal almost certainly that the immediately preceding events did have functional roles—the crying baby and speeding car as motivating operations that evoked escape or avoidance responses; the biting fish as a powerful reinforcer.

But much human behavior does not immediately follow such obviously related antecedent events. Nonetheless, we humans have a long history of assigning causal status to events that immediately precede behavior. As Skinner (1974) noted, “We tend to say, often rashly, that if one thing follows another, it was probably caused by it—following the ancient precept of post hoc, ergo propter hoc (after this, therefore because of this)” (p. 7). When causal variables are not readily apparent in the immediate, surrounding environment, the tendency to point to internal causes of behavior is particularly strong. As Skinner explained:

The person with whom we are most familiar is ourself; many of the things we observe just before we behave occur within our body, and it is easy to take them as the causes of our behavior. . . . Feelings occur at just the right time to serve as causes of behavior, and they have been cited as such for centuries. We assume that other people feel as we feel when they behave as we behave. (pp. 7, 8)

Why does one college student maintain a regular study schedule from the first week of the semester, while her roommates are out partying night after night? When a group of people joins a weight-loss or a smoking cessation program in which each group member is exposed to the same intervention, why do some people meet their self-determined goals but many others do not? Why does a high school basketball player with limited physical abilities consistently outperform his more athletically gifted teammates? The hardworking student is said to have more willpower than her less studious roommates; the group members who lost weight or stopped smoking are thought to possess more desire than their peers who failed to meet their goals or dropped out; the athlete’s superior play is considered a result of his exceptional drive. Although some psychological theories grant causal status to hypothetical constructs such as willpower, desire, and drive, these explanatory fictions lead to circular reasoning and bring us no closer to understanding the behaviors they claim to explain.1

Skinner’s Two-Response Conceptualization of Self-Control

Skinner was the first to apply the philosophy and theory of radical behaviorism to actions typically considered to be controlled by the self. In his classic textbook, Science and Human Behavior, Skinner (1953) devoted a chapter to self-control.

When a man controls himself, chooses a course of action, thinks out the solution to a problem, or strives toward an increase in self-knowledge, he is behaving. He controls himself precisely as he would control the behavior of anyone else—through the manipulation of variables of which behavior is a function. His behavior in so doing is a proper object of analysis, and eventually it must be accounted for with variables lying outside the individual himself. (pp. 228–229)

Skinner (1953) continued by conceptualizing self-control as a two-response phenomenon:

One response, the controlling response, affects variables in such a way as to change the probability of the other, the controlled response. The controlling response may manipulate any of the variables of which the controlled response is a function; hence there are a good many different forms of self-control. (p. 231)

Skinner (1953) provided examples of a wide variety of self-control techniques, including using physical restraint (e.g., clapping a hand over one’s mouth to prevent yawning at an embarrassing moment), changing the antecedent stimulus (e.g., putting a box of candy out of sight to reduce overeating), and “doing something else” (e.g., talking about another topic to avoid talking about a particular topic), to name a few. Since Skinner’s initial list of techniques, a variety of taxonomies and catalogs of self-control tactics have been described (e.g., Agran, 1997; Kazdin, 2001; Watson & Tharp, 2007). All self-control—or self-management—tactics can be operationalized in terms of two behaviors: (a) the target behavior a person wants to change (Skinner’s controlled response) and (b) the self-management behavior (Skinner’s controlling response) emitted to control the target behavior. For example, consider the following table:

Target Behavior: Save money instead of spending it on frivolous items.
Self-Management Behavior: Enroll in a payroll deduction plan.

Target Behavior: Take garbage and recycling cans from garage to the curb on Thursday nights for pickup early Friday morning.
Self-Management Behavior: After backing the car out of the garage for work Thursday morning, pull the trash and recycling cans onto the space in the garage where you will park the car in the evening.

Target Behavior: Ride exercise bike for 30 minutes each evening.
Self-Management Behavior: Make a chart of the minutes you ride, and show it to a coworker each morning.

Target Behavior: Write 20-page paper.
Self-Management Behavior: (1) Outline paper and divide it into five 4-page parts; (2) specify a due date for each part; (3) give roommate five $10 checks made out to a despised organization; (4) on each due date show a completed part of the paper to roommate and get one $10 check back.

1Explanatory fictions and circular reasoning are discussed in Chapters entitled “Definition and Characteristics of Applied Behavior Analysis” and “Positive Reinforcement.”

2There is also nothing new about self-management. As Epstein (1997) pointed out, many of the self-control techniques outlined by Skinner were described by ancient Greek and Roman philosophers and have appeared in the teachings of many organized religions for millennia (cf. Bolin & Goldberg, 1979; Shimmel, 1977, 1979).

A Definition of Self-Management

There is nothing mystical about self-management.2 As the previous examples show, self-management is simply behavior that a person emits to influence another behavior. But many of the responses a person emits each day affect the frequency of other behaviors. Putting toothpaste on a toothbrush makes it very likely that tooth brushing will occur soon. But we do not think of squeezing toothpaste on a brush as an act of “self-management” just because it alters the frequency of tooth brushing. What gives some responses the special status of self-management? How can we distinguish self-management from other behavior?

Numerous definitions of self-control or self-management have been proposed, many of which are similar to the one offered by Thoresen and Mahoney (1974). They suggested that self-control occurs when, in the “relative absence” of immediate external controls, a person emits a response designed to control another behavior. For example, a man shows self-control if, when home alone and “otherwise free” to do whatever he wants, he forgoes his usual peanuts and beer and instead rides his exercise bike for 20 minutes. According to definitions such as Thoresen and Mahoney’s, self-control is not involved when immediate and obvious external events set the occasion for or reinforce the controlling response. The man would not be credited with self-control if his wife were there reminding him of his overeating, praising him for riding the exercise bike, and marking his time and mileage on a chart.

But what if the man had asked his wife to prompt, praise, and chart his exercise? Would his having done so constitute a form of self-control? One problem with conceptualizing self-control as occurring only in the absence of “external control” is that it excludes those situations in which the person designs and puts into place the contingencies entailing external control for a behavior he desires to change. An additional problem with the “relative absence of external controls” concept is that it creates a false distinction between internal and external controlling variables when, in fact, all causal variables for behavior ultimately reside in the environment.

Definitions of self-control such as Kazdin’s (2001), “those behaviors that a person deliberately undertakes to achieve self-selected outcomes” (p. 303), are more functional for applied behavior analysis. With this definition self-control occurs whenever a person purposely emits behavior that changes the environment in order to modify another behavior. Self-control is considered purposeful in the sense that a person labels (or tacts) her responses as designed to attain a specified result (e.g., to reduce the number of cigarettes smoked each day).

We define self-management as the personal application of behavior change tactics that produces a desired change in behavior. This is an intentionally broad, functional definition of self-management. It encompasses one-time self-management events, such as Raylene’s taping a note to her closet door to remind herself to wear her gray suit the next day, as well as complex, long-running self-directed behavior change programs in which a person plans and implements one or more contingencies to change his behavior. It is a functional definition in that the desired change in the target behavior must occur for self-management to be demonstrated.

Self-management is a relative concept. A behavior change program may entail a small degree of self-management or be totally conceived, designed, and implemented by the person. Self-management occurs on a continuum along which the person controls one or all components of a behavior change program. When a behavior change program is implemented by one person (e.g., teacher, therapist, parent) on behalf of another (e.g., student, client, child), the external change agent manipulates motivating operations, arranges discriminative stimuli, provides response prompts, delivers differential consequences, and observes and records the occurrence or nonoccurrence of the target behavior. Some degree of self-management is involved whenever a person performs (that is to say, controls) any element of a program that changes his behavior.

3For a variety of conceptual analyses of self-control/self-management from a behavioral perspective, see Brigham (1983); Catania (1975, 1976); Goldiamond (1976); Hughes and Lloyd (1993); Kanfer and Karoly (1972); Malott (1984, 1989, 2005a, b); Nelson and Hayes (1981); Newman, Buffington, Hemmes, and Rosen (1996); Rachlin (1970, 1974, 1995); and Watson and Tharp (2007).

It is important to recognize that defining self-management as the personal application of behavior change tactics that results in a desired change in behavior does not explain the phenomenon. Our definition of self-management is descriptive only, and then so in a very broad sense. Although self-management tactics can be classified according to their emphasis on a given component of the three- or four-term contingency, or by their structural similarity with a particular principle of behavior (e.g., stimulus control, reinforcement), it is likely that all self-management tactics involve multiple principles of behavior. Thus, when researchers and practitioners describe self-management tactics, they should provide a detailed statement of the exact procedures used. Behavior analysts should not ascribe the effects of self-management interventions to specific principles of behavior in the absence of an experimental analysis demonstrating such relations. Only through such research will a more complete understanding of the mechanisms that account for the effectiveness of self-management be attained.3

Terminology: Self-Control or Self-Management?

Although self-control and self-management often appear interchangeably in the behavioral literature, we recommend that self-management be used in reference to a person acting “in some way in order to change subsequent behavior” (Epstein, 1997, p. 547, emphasis in original). We have three reasons for making this recommendation. First, self-control is an “inherently misleading” term that implies that the ultimate control of behavior lies within the person (Brigham, 1980). Although Skinner (1953) acknowledged that a person could achieve practical control over a given behavior by acting in ways that manipulate variables that affect the frequency of that behavior, he argued that the controlling behaviors themselves are learned from the person’s interactions with the environment.

A man may spend a great deal of time designing his own life—he may choose the circumstances in which he is to live with great care, and he may manipulate his daily environment on an extensive scale. Such activity appears to exemplify a high order of self-determination. But it is also behavior, and we account for it in terms of other variables in the environment and history of the individual. It is these variables which provide the ultimate control. (p. 240)

In other words, the causal factors for “self-control” (i.e., controlling behaviors) are to be found in a person’s experiences with his environment. Beginning with the example of a person setting an alarm clock (controlling behavior) so as to get out of bed at a certain time (controlled behavior) and concluding with an impressive instance of self-control by the main character in Homer’s Odyssey, Epstein (1997) described the origin of self-control as follows:

As is true of all operants, any number of phenomena might have produced the [controlling] behavior originally: instructions, modeling, shaping, or generative process (Epstein, 1990, 1996), for example. Its occurrence might have been verbally mediated by a self-generated rule (“I’ll bet I’d get up earlier if I set an alarm clock”), and that rule, in turn, might have had any number of origins. Odysseus had his men tie him to his mast (controlling behavior) to lower the probability that he would steer toward the Siren’s song (controlled behavior). This was an elegant instance of self-control, but it was entirely instruction driven: Circe had told him to do it. (p. 547)

Second, attributing the cause of a given behavior to self-control can serve as an explanatory fiction. As Baum (2005) pointed out, self-control “seems to suggest controlling a [separate] self inside or [that there is] a self inside controlling external behavior. Behavior analysts reject such views as mentalistic. Instead they ask, ‘What is the behavior that people call “self-control”?’” (p. 191, words in brackets added). Sometimes self is synonymous with mind and “not far from the ancient notion of homunculus—an inner person who behaves in precisely the ways necessary to explain the behavior of the outer person in whom he dwells” (Skinner, 1974, p. 121).4

Third, laypeople and behavioral researchers alike often use self-control to refer to a person’s ability to “delay gratification” (Mischel & Gilligan, 1964). In operant terms, this connotation of self-control entails responding to achieve a delayed, but larger or higher quality reward instead of acting to obtain an immediate, less-valuable reward (Schweitzer & Sulzer-Azaroff, 1988).5

Using the same term to refer to a tactic for changing behavior and to a certain type of behavior that may be an outcome of the tactic is at best confusing and logically faulty. Restricting the use of self-control to describe a certain type of behavior may reduce the confusion caused by using self-control as the name for both an independent variable and dependent variable. In this sense, self-control can be a possible goal or outcome of a behavior change program irrespective of whether that behavior is the product of an intervention implemented by an external agent or the subject. Thus, a person can use self-management to achieve, among other things, self-control.

4In early biological theory, homunculus was a fully formed human being thought to exist inside an egg or spermatozoon.

5Behavior that fails to exhibit such self-control is often labeled “impulsive.” Neef and colleagues (e.g., Neef, Bicard, & Endo, 2001; Neef, Mace, & Shade, 1993) and Dixon and colleagues (e.g., Binder, Dixon, & Ghezzi, 2000; Dixon et al., 1998; Dixon, Rehfeldt, & Randich, 2003) have developed procedures for assessing impulsiveness and for teaching self-control in the form of tolerance for delayed rewards.

Applications, Advantages, and Benefits of Self-Management

In this section we identify four basic applications of self-management and explain the numerous advantages and benefits that accrue to people who use self-management, to practitioners who teach it to others, and to society as a whole.

Applications of Self-Management

Self-management can help a person be more effective and efficient in his daily life, replace bad habits with good ones, accomplish difficult tasks, and achieve personal goals.

Living a More Effective and Efficient Daily Life

Raylene writing notes to herself and placing her library book on her car seat are examples of self-management techniques that can be used to overcome forgetfulness or lack of organization. Most people use simple self-management techniques such as making a shopping list before going to the store or creating “to-do” lists as ways to organize their day; but few probably consider what they are doing to be “self-management.” Although many of the most widely used self-management techniques can be considered common sense, a person with an understanding of basic principles of behavior can use that knowledge to apply those common sense techniques more systematically and consistently in her life.

Breaking Bad Habits and Acquiring Good Ones

Many behaviors that we want to do more (or less) often and know that we should do (or not do) are caught in reinforcement traps. Baum (2005) suggested that impulsiveness, bad habits, and procrastination are products of naturally existing reinforcement traps in which immediate but smaller consequences have greater influence on our behavior than more significant outcomes that are delayed. Baum gave this description of a reinforcement trap for smoking:

Acting impulsively leads to a small but relatively immediate reinforcer. The short-term reinforcement for smoking lies in the effects of nicotine and social reinforcers such as appearing grown-up or sophisticated. The trouble with impulsive behavior lies in long-term ill effects. It may be months or years before the bad habit takes its toll in consequences such as cancer, heart disease, and emphysema.

The alternative to impulsiveness, refraining from smoking, also leads to both short-term and long-term consequences. The short-term consequences are punishing but relatively minor and short-lived: withdrawal symptoms (e.g., headaches) and possibly social discomfort. In the long run, however, . . . refraining from smoking reduces the risk of cancer, heart disease, and emphysema; ultimately, it promotes health. (pp. 191–192)

Reinforcement traps are two-sided contingencies that work to promote bad habits while simultaneously working against the selection of behavior that is beneficial in the long term. Even though we may know the rules describing such contingencies, the rules are difficult to follow. Malott (1984) described weak rules as those with delayed, incremental, or unpredictable outcomes. An example of a hard-to-follow, weak rule is, I’d better not smoke or I might get cancer someday and die. Even though the potential consequence—cancer and death—is a major one, it is far off in the future and not a sure thing even then (we all know someone who smoked two packs a day since the age of 15 and lived to be 85), which severely limits its effectiveness as a behavioral consequence. So the rule Don’t smoke or you might get cancer is hard to follow. The deleterious effect of any one cigarette is small, so small that it is not even noticed. Emphysema and lung cancer are probably years and thousands of puffs away. What will one more puff hurt?

Self-management provides one strategy for avoiding the deleterious effects of reinforcement traps. A person can use self-management tactics to arrange immediate consequences that will counteract the consequences currently maintaining the self-destructive behavior.

Accomplishing Difficult Tasks

Immediate outcomes that bring a person only “infinitesimally closer” to a significant outcome do not control behavior (Malott, 1989, 2005a). Many self-management problems are the result of outcomes that are small, but of cumulative significance.

Malott (2005a) argued that our behavior is controlled by the outcome of each individual response, not by the cumulative effect of a large number of responses.

The natural contingency between a response and its outcome will be ineffective if the outcome of each individual response is too small or too improbable, even though the cumulative impact of those outcomes is significant. So, we have trouble managing our own behavior, when each instance of that behavior has only an insignificant outcome, though the outcomes of many repetitions of that behavior will be highly significant. And we have little trouble managing our own behavior, when each instance of that behavior has a significant outcome, though that outcome might be greatly delayed.

Most obese people in the United States know the rule describing that contingency: If you repeatedly overeat, you will become overweight. The problem is, knowing the overweight rule does not suppress this one instance of eating that delicious fudge sundae topped with the whipped cream and the maraschino cherry, because a single instance of eating this dessert will cause no significant harm, and will, indeed, taste great. Knowledge of this rule describing the natural contingency exerts little control over gluttony.

But the outcome of our behavior also needs to be probable. Many people have trouble following the buckle-up rule, even though they might get in a serious auto accident as soon as they pull onto the street. Even though the delay between the failure to buckle up and the accident could be only a few seconds, they fail to buckle up because the probability of the accident is so low. However, if they were driving in a dangerous auto race or a dangerous stunt-driving demonstration, they would always buckle up, because the probability of a serious accident would be fairly high. (pp. 516–517)

Just as a person who smokes another cigarette or eats just one more hot fudge sundae cannot detect that lung cancer or obesity is a noticeably closer outcome, graduate student Daryl does not perceive that writing one more sentence brings him closer to his long-term goal of a completed master’s thesis. Daryl is capable of writing one more sentence but, like many of us, he often has trouble getting started and working steadily on difficult tasks where each response produces little or no apparent change in the size of the remaining task. He procrastinates.

Applying self-management to this type of performance problem involves designing and implementing one or more contrived contingencies to compete with the ineffective natural contingencies. The self-management contingency provides immediate consequences or short-term outcomes for each response or small set of responses. These contrived consequences increase the frequency of the target responses, which over time produce the cumulative effects necessary to complete the task.
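The logic of such a contrived contingency (deliver an immediate consequence for each small set of responses, so that their cumulative effect eventually completes the task) can be sketched in Python. This is an illustration only: the class name, the unit size, and the consequence message are invented for the example and do not come from the text.

```python
from dataclasses import dataclass, field


@dataclass
class ContrivedContingency:
    """Illustrative sketch: an immediate, contrived consequence for each
    small unit of a large task (e.g., pages of a thesis), standing in for
    a weak natural contingency whose payoff is distant and cumulative."""
    unit_goal: int                  # responses needed per unit (e.g., 4 pages per part)
    progress: int = 0
    consequences: list = field(default_factory=list)

    def record_response(self) -> None:
        self.progress += 1
        # Contrived short-term outcome for each completed unit: a deposited
        # check returned, free time earned, a chart mark made, and so on.
        if self.progress % self.unit_goal == 0:
            unit = self.progress // self.unit_goal
            self.consequences.append(f"part {unit} complete: deliver reinforcer")


# Usage: eight pages written completes two 4-page parts.
thesis = ContrivedContingency(unit_goal=4)
for _ in range(8):
    thesis.record_response()
print(len(thesis.consequences))  # 2
```

Note that each individual response still has a tiny natural outcome; the contrived consequence is what makes every small unit of progress immediately consequential.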


Achieving Personal Goals

People can use self-management to achieve personal goals, such as learning to play a musical instrument, learning a foreign language, running a marathon, sticking to a schedule of daily yoga sessions (Hammer-Kehoe, 2002), or simply taking some time each day to relax (Harrell, 2002) or listen to enjoyable music (Dams, 2002). For example, a graduate student who wanted to become a better guitarist used self-management to increase the time he practiced scales and chords (Rohn, 2002). He paid a $1 fine to a friend for each day that he failed to practice playing scales and chords for a criterion number of minutes before going to bed. Rohn’s self-management program also included several contingencies based on the Premack principle. For example, if he practiced scales (a low-frequency behavior) for 10 minutes, he could play a song (a high-frequency behavior).
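Rohn's two contingencies, the $1 response-cost fine and the Premack-based song privilege, lend themselves to a minimal sketch. The function name and return structure are assumptions made for illustration; only the $1 fine and the 10-minute criterion come from the text.

```python
def evaluate_practice_day(minutes_practiced: int,
                          criterion_minutes: int = 10) -> dict:
    """Illustrative sketch of Rohn's (2002) self-management program.

    Response cost: a $1 fine, paid to a friend, for any day the practice
    criterion is not met before bed.
    Premack principle: meeting the scale-practice criterion (low-frequency
    behavior) earns access to playing a song (high-frequency behavior).
    """
    met_criterion = minutes_practiced >= criterion_minutes
    return {
        "fine_owed": 0 if met_criterion else 1,  # dollars
        "may_play_song": met_criterion,          # Premack contingency
    }


print(evaluate_practice_day(12))  # {'fine_owed': 0, 'may_play_song': True}
print(evaluate_practice_day(0))   # {'fine_owed': 1, 'may_play_song': False}
```

The design point is that both consequences are immediate and certain each evening, unlike the distant and probabilistic outcome of becoming a better guitarist.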

Advantages and Benefits of Self-Management

A dozen potential advantages and benefits accrue to people who learn to use self-management skills and to practitioners who teach such skills to their clients and students.

Self-Management Can Influence Behaviors Not Accessible to External Change Agents

Self-management can be used to change behaviors with topographies that make them inaccessible to observation by others. Behaviors such as thoughts of self-doubt, obsessive thoughts, and feelings of depression are private events for which a self-managed treatment approach may be needed (e.g., Kostewicz, Kubina, & Cooper, 2000; Kubina, Haertel, & Cooper, 1994).

Even publicly observable target behaviors are not always emitted in settings and situations accessible to an external change agent. Sometimes the behavior a person wishes to change must be prompted, monitored, evaluated, or reinforced or punished on a day-to-day or even minute-to-minute basis in all of the situations and environments that the person encounters. Even though a person enrolls in a behavior change program planned and directed by others, most successful smoking, weight-loss, exercise, and habit-reversal programs rely heavily on participants using various self-management techniques when they are away from the treatment setting. Many behavior change goals brought to clinicians pose the same challenge: How can an effective contingency be arranged that follows the client at all times wherever she goes? For example, target behaviors designed to increase a secretary’s self-esteem and assertiveness can be identified and practiced in the therapist’s office, but implementation of an active contingency in the workplace is apt to require self-management techniques.

6Producing generalized behavior change is a defining goal of applied behavior analysis and the focus of Chapter entitled “Generalization and Maintenance of Behavior Change.”

External Change Agents Often Miss Important Instances of Behavior

In most education and treatment settings, particularly in large group situations, many important responses go unnoticed by the person responsible for applying behavior change procedures. Classrooms, for example, are especially busy places where desired behaviors by students often go unnoticed because the teacher is engaged with other tasks and students. As a result, students invariably miss opportunities to respond, or they respond and receive no feedback because, in a behavioral sense, the teacher is not there. However, students who have been taught to evaluate their own performance and provide their own feedback in the form of self-delivered rewards and error correction, including seeking assistance and praise from their teacher when needed (e.g., Alber & Heward, 2000; Bennett & Cavanaugh, 1998; Olympia, Sheridan, Jenson, & Andrews, 1994), are not dependent on the teacher’s direction and feedback for every learning task.

Self-Management Can Promote the Generalization and Maintenance of Behavior Change

A behavior change that (a) continues after treatment has ended, (b) occurs in relevant settings or situations other than the one(s) in which it was learned originally, or (c) spreads to other related behaviors has generality (Baer, Wolf, & Risley, 1968). Important behavior changes without such generalized outcomes must be supported indefinitely by continued treatment.6 As soon as the student or client is no longer in the setting in which a behavior change was acquired, she may no longer emit the desired response. Certain aspects of the original treatment environment, including the person (teacher, parent) who administered the teaching program, may have become discriminative stimuli for the newly learned behavior, enabling the learner to discriminate the presence or absence of certain contingencies across settings. Generalization to nontreatment environments is also hampered when the naturally existing contingencies in those settings do not provide reinforcement for the target behavior.


Such challenges to achieving generalized outcomes may be overcome if the learner has self-management skills. Baer and Fowler (1984) posed and answered a pragmatic question related to the problem of promoting the generalization and maintenance of newly learned skills.

What behavior change agent can go with the student to every necessary lesson, at all times, to prompt and reinforce every desirable form of the behavior called for by the curriculum? The student’s own “self” can always meet these specifications. (p. 148)

A Small Repertoire of Self-Management Skills Can Control Many Behaviors

A person who learns how to apply a few self-management tactics can control a potentially unlimited range of behaviors. For example, self-monitoring—observing and recording one's own behavior—has been used to increase on-task behavior (e.g., Blick & Test, 1987), academic productivity and accuracy (e.g., Maag, Reid, & DiGangi, 1993), employee productivity and job success (e.g., Christian & Poling, 1997), and independence (e.g., Dunlap, Dunlap, Koegel, & Koegel, 1991), and to decrease undesired behaviors such as habits or tics (e.g., Koegel & Koegel, 1991).

People with Diverse Abilities Can Learn Self-Management Skills

People of wide-ranging ages and cognitive abilities have successfully used self-management tactics. Preschoolers (e.g., Sainato, Strain, Lefebvre, & Rapp, 1990; DeHaas-Warner, 1992), typically developing students from primary grades through high school (e.g., Sweeney, Salva, Cooper, & Talbert-Johnson, 1993), students with learning disabilities (e.g., Harris, 1986), students with emotional and behavior disorders (e.g., Gumpel & Shlomit, 2000), children with autism (e.g., Koegel, Koegel, Hurley, & Frea, 1992; Newman, Reinecke, & Meinberg, 2000), and children and adults with mental retardation and other developmental disabilities (e.g., Grossi & Heward, 1998) have all used self-management successfully. Even college professors are capable of using self-management to improve their performance (Malott, 2005a)!

Some People Perform Better under Self-Selected Tasks and Performance Criteria

Most studies that have compared self-selected and other-selected reinforcement contingencies have found that under certain conditions self-selected contingencies can be as effective in maintaining behavior as contingencies determined by others (e.g., Felixbrod & O'Leary, 1973, 1974; Glynn, Thomas, & Shee, 1973). However, some studies have found better performance with self-selected work tasks and consequences (e.g., Baer, Tishelman, Degler, Osnes, & Stokes, 1992; Olympia et al., 1994; Parsons, Reid, Reynolds, & Bumgarner, 1990). For example, in three brief experiments with a single student, Lovitt and Curtiss (1969) found pupil-selected rewards and contingencies to be more effective than teacher-selected performance standards. In the first phase of Experiment 1, the subject, a 12-year-old boy in a special education classroom, earned a teacher-specified number of minutes of free time based on the number of math and reading tasks he completed correctly. In the next phase of the study, the student was allowed to specify the number of correct math and reading items needed to earn each minute of free time. During the final phase of the study the original teacher-specified ratios of academic production to reinforcement were again in effect. The student's median academic response rate during the self-selected contingency phase was 2.5 correct responses per minute (math and reading tasks reported together), compared to correct rates of 1.65 and 1.9 in the two teacher-selected phases.

An experiment by Dickerson and Creedon (1981) employed a yoked-control (Sidman, 1960/1988) and between-group comparison experimental design with 30 second- and third-graders and found that pupil-selected standards resulted in significantly higher academic production on both reading and math tasks. Both the Lovitt and Curtiss and the Dickerson and Creedon studies demonstrated that self-selected reinforcement contingencies can be more effective than teacher-selected contingencies. However, research in this area also shows that simply letting children determine their own performance standards does not guarantee high levels of performance; some studies have found that children select too-lenient standards when given the opportunity (Felixbrod & O'Leary, 1973, 1974). Interestingly, in a study of student-managed homework teams by Olympia and colleagues (1994), students on teams that could select their own performance goals chose more lenient accuracy criteria than the 90% accuracy criterion required of teams working toward teacher-selected goals, yet the overall performance of the student-selected-goal teams was slightly higher than that of the teams working toward teacher-selected goals. More research is needed to determine the conditions under which students will self-select and maintain appropriate standards of performance.

People with Good Self-Management Skills Contribute to More Efficient and Effective Group Environments

The overall effectiveness and efficiency of any group of people who share a working environment is limited when a single person is responsible for monitoring, supervising, and providing feedback for the performance of each group member. When individual students, teammates, band members, or employees have self-management skills that enable them to work without having to rely on the teacher, coach, band director, or shift manager for every task, the overall performance of the entire group or organization improves. In the classroom, for example, teachers traditionally have assumed full responsibility for setting performance criteria and goals for students, evaluating students' work, delivering consequences for those performances, and managing students' social behavior. The time required for these functions is considerable. To the extent that students can self-score their work, provide their own feedback using answer keys or self-checking materials (Bennett & Cavanaugh, 1998; Goddard & Heron, 1998), and behave appropriately without teacher assistance, the teacher is freed to attend to other aspects of the curriculum and perform other instructional duties (Mitchem, Young, West, & Benyo, 2001).

When Hall, Delquadri, and Harris (1977) conducted observations of elementary classrooms and found low levels of active student response, they surmised that higher rates of academic productivity might actually be punishing to teachers. Even though there is considerable evidence linking high rates of student opportunity to respond with academic achievement (e.g., Ellis, Worthington, & Larkin, 2002; Greenwood, Delquadri, & Hall, 1984; Heward, 1994), in most classrooms generating more student academic responses results in the teacher having to spend more time grading papers. The time savings produced by students with even the simplest self-management skills can be significant. In one study a class of five elementary special education students completed as many arithmetic problems as they could during a 20-minute session each day (Hundert & Batstone, 1978). When the teacher graded the papers, an average of 50.5 minutes was required to prepare and conduct the session and score and record each student's performance. When the students scored their own papers, the total teacher time needed to conduct the arithmetic period was reduced to an average of 33.4 minutes.
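The size of that saving is easy to quantify from the figures reported above. The following back-of-the-envelope sketch uses the Hundert and Batstone (1978) session averages; the five-session school week is our assumption for illustration, not a figure reported in the study.

```python
# Teacher time per daily 20-minute arithmetic session
# (averages from Hundert & Batstone, 1978).
teacher_scored = 50.5   # minutes when the teacher scored all papers
student_scored = 33.4   # minutes when students scored their own papers

saved_per_session = teacher_scored - student_scored
saved_per_week = saved_per_session * 5  # assumed: five sessions per week

print(f"Saved per session: {saved_per_session:.1f} minutes")
print(f"Saved per week: {saved_per_week:.1f} minutes")
```

Roughly 17 minutes per session, or well over an hour of teacher time per week, freed by one simple student self-management skill.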

Teaching Students Self-Management Skills Provides Meaningful Practice for Other Areas of the School Curriculum

When students learn to define and measure behavior and to graph, evaluate, and analyze their own responses, they are practicing various math and science skills in a relevant way. When students are taught how to conduct self-experiments, such as using A-B designs to evaluate their self-management projects, they receive meaningful practice in logical thinking and the scientific method (Marshall & Heward, 1979; Moxley, 1998).
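The data-analysis side of such a self-experiment can be sketched in a few lines of code. The sketch below is purely illustrative (the phase labels, session counts, and numbers are invented, not drawn from any study cited in this chapter); it computes the mean response rate per minute in the baseline (A) and intervention (B) phases of a simple A-B design.

```python
# Illustrative analysis of a hypothetical A-B self-management project.
# Each session records (phase, responses emitted, session minutes);
# all numbers below are invented for this sketch.
sessions = [
    ("A", 18, 20), ("A", 22, 20), ("A", 19, 20),   # baseline phase
    ("B", 35, 20), ("B", 41, 20), ("B", 38, 20),   # self-monitoring phase
]

def mean_rate(data, phase):
    """Mean responses per minute across the sessions in one phase."""
    rates = [count / minutes for p, count, minutes in data if p == phase]
    return sum(rates) / len(rates)

baseline_rate = mean_rate(sessions, "A")
intervention_rate = mean_rate(sessions, "B")
print(f"Baseline (A): {baseline_rate:.2f} responses per minute")
print(f"Intervention (B): {intervention_rate:.2f} responses per minute")
```

A paper tally and a hand-drawn line graph serve the same purpose; the point is that defining, counting, and comparing rates across phases is exactly the kind of applied math and science practice described above.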

Self-Management Is an Ultimate Goal of Education

When asked to identify what education should accomplish for its students, most people—educators and laypeople alike—include in their reply the development of independent, self-directed people who are capable of behaving appropriately and constructively without the supervision of others. John Dewey (1939), one of this country's most influential educational philosophers, said that "the ideal aim of education is the creation of self-control" (p. 75). A student's ability to be self-directed, with the teacher as a guide or facilitator, and his ability to evaluate his own performance have long been considered cornerstones of humanistic education (Carter, 1993).

As Lovitt (1973) observed more than 30 years ago, the fact that systematic instruction of self-management skills is not a regular part of most schools' curricula is a paradox because "one of the expressed objectives of the educational system is to create individuals who are self-reliant and independent" (p. 139).

Although self-control is a social skill expected and valued by society, it is seldom addressed directly by the school curriculum. Teaching self-management as an integral part of the curriculum requires that students learn a fairly sophisticated sequence of skills (e.g., Marshall & Heward, 1979; McConnell, 1999). However, systematic teaching of self-management skills, if successful, is worthy of the effort required; students will have an effective means of dealing with situations in which there is little, if any, external control. Materials and lesson plans are available that help students become independent, self-directed learners (e.g., Agran, 1997; Daly & Ranalli, 2003; Young, West, Smith, & Morgan, 1991). Box 1 describes free software programs children can use to create self-management tools for use in school and at home.

Self-Management Benefits Society

Self-management serves two important functions for society (Epstein, 1997). First, citizens with self-management skills are more likely to fulfill their potential and make greater contributions to society. Second, self-management helps people behave in ways (e.g., purchasing fuel-efficient automobiles, taking public transportation) that forgo immediate reinforcers that are correlated with devastating outcomes so long deferred that only future generations will experience them (e.g., depletion of natural resources and global warming). People who have learned self-management techniques that help them to save resources, recycle, and burn less fossil fuel help create a better world for everyone. Self-management skills can provide people with the means of meeting the well-intentioned but difficult-to-follow rule, Think globally but act locally.


Box 1 Self-Management Software for Children

KidTools, KidSkills, and StrategyTools are software programs that help children create and use a variety of self-management tactics, organizational skills, and learning strategies. The programs enable children to take responsibility for changing and managing their own social and academic behaviors. eKidTools and eKidSkills are designed for children ages 7 to 10; iKidTools and iKidSkills are designed for children ages 11 to 14; StrategyTools includes enhanced self-management and learning strategy tools, transition planning tools, and self-training modules for secondary students. All three programs feature easy-to-complete templates that let children be as independent as possible in identifying personal behaviors or academic learning needs, selecting a self-management tool to support the task, developing the steps of their personalized plan, and printing the tool for immediate use in the classroom or at home. The remainder of this essay provides additional information on the KidTools and KidSkills programs.

KidTools

The KidTools program supports children's self-management skills to identify behaviors, make plans, and monitor implementation of the plans. KidTools includes self-management interventions for three types, or levels, of control. In all three levels, there is an emphasis on what children say and think to themselves as they execute their behavior change plans.

External control procedures may be necessary to establish control over problematic behaviors before the child moves to shared control or self-control interventions. In these procedures, the adult provides direction and structure for appropriate behaviors. KidTools available for this level of intervention are point cards.

Shared control techniques provide a transition step for encouraging the child to develop self-control. There is an emphasis on problem solving and making plans to change or learn a behavior. The child and adult jointly participate in these procedures. KidTools available for this level of intervention are contracts, make-a-plan cards, and problem-solving planning tools.

Internal control techniques assist children through cues and structure provided by a variety of self-monitoring procedures.

KidSkills

KidSkills helps children be successful in school by getting organized, completing tasks, and using learning strategies. KidSkills provides computer-generated templates for getting organized, learning new material, organizing new information, preparing for tests, doing homework, and doing projects. The KidSkills programs are complementary to the KidTools programs and operate in the same way in terms of selecting tools, entering content, recordkeeping, and adult supports.



Tool Resources and Skill Resources

In addition to the computer programs for children, information databases about the tools and strategies are provided for parents and teachers who assist children with self-management and learning strategies. Each database includes guidelines and tips for each procedure, steps for implementation, age-appropriate examples for children, troubleshooting tips, and references to related resources. The user can navigate through the information, return to a menu to select new strategies, or search the entire database using the Find function.

Rationale for KidTools and KidSkills

KidTools and KidSkills encourage children to express their social and academic behaviors in positive terms. The programs focus on doing positive behaviors and being successful, rather than stopping negative behaviors and failure. For example, children decide "I will use my inside voice" rather than being reminded "Don't talk loudly in class." Children are taught organizational and learning strategies that can be used in general and special settings. The computerized tools serve as bridges to support students in behaving and learning. This form of assistive technology works because it provides direct instruction as well as scaffolds to support learning at the "right time, right form, and right place."

For these materials to be successful, teachers or parents must first provide instruction in behavior change techniques and self-monitoring. Children need to understand the concept of a behavior as something they do or think, be able to name behaviors, understand how to monitor whether behaviors occur or not, and assume some level of responsibility for their own behaviors. Children need to learn how and when to use organizational and learning strategies and to use self-monitoring techniques in applying the strategies across settings.

Helping Children Use KidTools and KidSkills

Children will need to be taught to use self-talk cues and inner speech to give themselves directions. Each self-management procedure in KidTools has variations that will need to be taught, demonstrated, and practiced with children before they can be used independently. The organizational and learning strategies in KidSkills are generic but require teaching, practice, and teacher help in applying the tools prior to independent use. Once children know the prerequisite skills, the computer program can be used with a minimal amount of teacher help. Ongoing support by the teacher or parent will ensure that the process remains effective and positive for children.

A teacher can use the following steps to help children learn to use KidTools and KidSkills.

Discuss behavioral and task expectations. The teacher can use setting or task demands to initiate discussion about expectations. Students can verbalize what they must do to be successful and identify challenges in meeting the expectations. Students should be able to demonstrate the behaviors and do the tasks prior to using the tools.

Introduce the software tools. The tool programs can be introduced to the whole class using a computer and projection device. The teacher can demonstrate the menu of available tools and highlight a variety of them, including how to navigate the program, how to enter content into the tools, and how to save and print out completed tools.

Model and demonstrate use of the tools. Using preselected tools for specific purposes, the teacher can solicit student input while demonstrating how the software works. The finished tools can be displayed and printed out for students to have copies. A discussion of their actual use follows.

Provide guided practice. Using available computer equipment, in a lab for group practice or in the classroom for independent or small group practice, the teacher can guide and oversee student development of tools. It will be helpful to have students verbalize their content prior to filling in the tool. The purpose of the guided practice step is to ensure correct software usage.

Provide independent practice. The teacher can help students identify authentic uses for the tools and provide opportunities for creating and using tools. During independent usage, teachers should provide encouragement and reinforcement for student tool usage and successful outcomes in supervised situations.

Facilitate generalization. The teacher can help students identify authentic needs and uses of tools across school and home environments. The teacher can assist by prompting tool usage, problem solving, and communicating with other teachers and parents about the procedures.

Programs Available Free

The programs can be downloaded at no cost in Windows or Macintosh versions from the KTSS Web site at http://kidtools.missouri.edu. Training modules for teachers with video demonstrations and practice materials can also be downloaded at this site. All three programs are also available on CD-ROM at cost from the Instructional Materials Laboratory at www.iml.coe.missouri.edu. The KTSS programs have been developed with partial funding by the U.S. Department of Education, Office of Special Education Programs.

Adapted from "KidTools and KidSkills: Self-Management Tools for Children" by Gail Fitzgerald and Kevin Koury, 2006. In W. L. Heward, Exceptional Children: An Introduction to Special Education (8th ed., pp. 248–250). Copyright 2006 by Pearson Education. Used by permission.

Self-Management Helps a Person Feel Free

Baum (2005) noted that people caught in reinforcement traps who recognize the contingency between their addictive behavior, impulsiveness, or procrastination and the eventual consequence that is likely to occur as a result of that behavior are unhappy and do not feel free. However, "Someone who escapes from a reinforcement trap, like someone who escapes from coercion, feels free and happy. Ask anyone who has kicked an addiction" (p. 193).

Ironically, a person who skillfully uses self-management techniques derived from a scientific analysis of behavior–environment relations predicated on an assumption of determinism is more likely to feel free than is a person who believes that her behavior is a product of free will. In discussing the apparent disconnect between philosophical determinism and self-control, Epstein (1997) compared two people: one who has no self-management skills and one who is very skilled at managing his own affairs.

First, consider the individual who has no self-control. In Skinner's view, such a person falls prey to all immediate stimuli, even those that are linked to delayed punishment. Seeing a chocolate cake, she eats it. Handed a cigarette, she smokes it. . . . She may make plans, but she has no ability to carry them out, because she is entirely at the mercy of proximal events. She is like a sailboat blowing uncontrollably in a gale. . . .

At the other extreme, we have a skillful self-manager. He, too, sets goals, but he has ample ability to meet them. He has skills to set dangerous reinforcers aside. He identifies conditions that affect his behavior and alters them to suit him. He takes temporally remote possibilities into account in setting his priorities. The wind is blowing, but he sets the boat's destination and directs it there.

These two individuals are profoundly different. The first is being controlled in almost a linear fashion by her immediate environment. The second is, in a nontrivial sense, controlling his own life. . . .

The woman who lacks self-control skills feels controlled. She may believe in free will (in fact, in our culture, it's a safe bet that she does), but her own life is out of control. A belief in free will only exacerbates her frustration. She should be able to will herself out of any jam, but "willpower" proves to be highly unreliable. In contrast, the self-manager feels that he is in control. Ironically, like Skinner, he may believe in determinism, but he not only feels that he is in control, he is in fact exercising considerably more control over his life than our impulsive subject. (p. 560, emphasis in original)

Self-Management Feels Good

A final, but by no means trivial, reason for learning self-management is that being in control of one's life feels good. A person who arranges her environment in purposeful ways that support and maintain desirable behaviors will not only be more productive than she would otherwise be, but will also feel good about herself. Seymour (2002) noted her personal feelings about a self-management intervention she implemented with the objective of running for 30 minutes three times per week.

The guilt I had been feeling for the past years had been burning strong within me. [As a former scholarship softball player] I used to exercise 3 hours per day (6 days per week) because that exercise paid for my schooling. . . . But now the contingencies were missing, and my exercising had gone completely down the tubes. Since the start of my project, I've run 15 out of the 21 times; so I've gone from 0% during baseline to 71% during my intervention. I'm happy with that. Because the intervention's successful, the guilt I have felt for the past years has been lifted. I have more energy and my body feels new and strong. It's amazing what a few contingencies can do to improve the quality of life in a big way. (p. 7–12)

Antecedent-Based Self-Management Tactics

In this and the three sections that follow we describe some of the many self-management tactics that behavioral researchers and clinicians have developed. Although no standard set of labels or classification scheme for self-management tactics has emerged, the techniques are often presented according to their relative emphasis on antecedents or consequences for the target behavior. An antecedent-based self-management tactic is one whose primary feature is the manipulation of events or stimuli antecedent to the target (controlled) behavior. Sometimes lumped under general terms such as environmental planning (Bellack & Hersen, 1977; Thoresen & Mahoney, 1974) or situational inducement (Martin & Pear, 2003), antecedent-based approaches to self-management encompass a wide range of tactics, including the following:

• Manipulating motivating operations to make a desired (or undesired) behavior more (or less) likely

• Providing response prompts

• Performing the initial steps of a behavior chain to ensure being confronted later with a discriminative stimulus that will evoke the desired behavior

• Removing the materials required for an undesired behavior

• Limiting an undesired behavior to restricted stimulus conditions

• Dedicating a specific environment for a desired behavior

7 Motivating operations are described in detail in the chapter entitled "Motivating Operations."

Manipulating Motivating Operations

People with an understanding of the dual effects of motivating operations can use that knowledge to their advantage when attempting to self-manage their behavior. A motivating operation (MO) is an environmental condition or event that (a) alters the effectiveness of some stimulus, object, or event as a reinforcer and (b) alters the current frequency of all behaviors that have been reinforced by that stimulus, object, or event in the past. An MO that increases the effectiveness of a reinforcer and has an evocative effect on behaviors that have produced that reinforcer is called an establishing operation (EO); an MO that decreases the effectiveness of a reinforcer and has an abative (i.e., weakening) effect on the frequency of behaviors that have produced that reinforcer is called an abolishing operation (AO).7

The general strategy for incorporating an MO into a self-management intervention is to behave in a way (controlling behavior) that creates a certain state of motivation that, in turn, increases (or decreases, as desired) the subsequent frequency of the target behavior (controlled behavior). Imagine that you have been invited to dinner at the house of someone known to be a first-rate cook. You would like to obtain maximum enjoyment from what promises to be a special meal but are worried you will not be able to eat everything. By purposively skipping lunch (controlling behavior), you create an EO that would increase the likelihood of your being able to enjoy the evening meal from appetizer through dessert (controlled behavior). Conversely, eating a meal just before going grocery shopping (controlling behavior) could serve as an AO that decreases the momentary value of ready-to-eat foods with high sugar and fat content as reinforcers and results in purchasing fewer such items at the supermarket (controlled behavior).

Providing Response Prompts

Creating stimuli that later function as extra cues and reminders for desired behaviors is one of the simplest, most effective, and most widely applied self-management techniques. Response prompts can take a wide variety of forms (e.g., visual, auditory, textual, symbolic), be permanent for regularly occurring events (e.g., entering "Put garbage out tonight!" each Thursday on one's calendar or PDA), or be one-time affairs, such as when Raylene wrote "gray suit today" on a Post-it Note and attached it to her closet door where she was sure to see it when dressing for work in the morning. Dieters often put pictures of overweight people, or perhaps even unattractive pictures of themselves, on the refrigerator door, near the ice cream, in the cupboard—anyplace where they might look for food. Seeing the pictures may evoke other controlling responses, such as moving away from the food, phoning a friend, going for a walk, or marking a point on an "I Didn't Eat It!" chart.

The same object can be used as a generic response prompt for various behaviors, such as putting a rubber band around the wrist as a reminder to do a certain task at some later time. This form of response prompt, however, will be ineffective if the person is unable later to remember what specific task the physical cue was intended to prompt. Engaging in some self-instruction when "putting on" a generic response prompt may help the person recall the task to be completed when seeing the cue later. For example, one of the authors of this text uses a small carabiner he calls a "memor-ring" as a generic response prompt. When the thought of an important task that can only be completed in another setting later in the day occurs to him, he clips the carabiner to a belt loop or the handle of his briefcase and "activates the memor-ring" by stating the target task three times to himself (e.g., "borrow Nancy's journal at Arps Hall," "borrow Nancy's journal at Arps Hall," "borrow Nancy's journal at Arps Hall"). When he sees the memor-ring in the relevant setting later, it almost always functions as an effective response prompt for completing the targeted task.

Cues can also be used to prompt the occurrence of a behavior the person wants to repeat in a variety of settings and situations. In this case, the person might salt his environment with supplemental response prompts. For example, a father who wants to increase the number of times he interacts with and praises his children might put ZAP! cues like those shown in Figure 1 in various places around the house where he will see them routinely—on the microwave, on the TV remote control, as a bookmark. Every time he sees a ZAP! cue, the father is reminded to make a comment or ask a question of his child, or to look for some positive behavior for which he can provide attention and praise.

Figure 1 Contrived cues used by parents to remind themselves to emit self-selected parenting behaviors. The instructions printed with the cues read: "Cut out these ZAP! cues and place them in conspicuous locations throughout your home or apartment, such as the refrigerator, bathroom mirror, TV remote control, and the door to your child's bedroom. Each time you see a ZAP! cue, you'll be reminded to do one of your self-selected target behaviors." Originally developed by Charles Novak. From Working with Parents of Handicapped Children by W. L. Heward, J. C. Dardig, and A. Rossett, 1979, p. 26, Columbus, OH: Charles E. Merrill. Copyright 1979 by Charles E. Merrill. Reprinted by permission.


Other people can also provide supplemental response prompts. For example, Watson and Tharp (2007) described the self-management procedure used by a man who had been off cigarettes for about 2 months and wanted to stop smoking permanently.

He had stopped several times in the past, but each time he had gone back to smoking. He had been successful this time because he had identified those situations in which he had relapsed in the past and had taken steps to deal with them. One of the problem situations was being at a party. The drinks, the party atmosphere, and the feeling of relaxation represented an irresistible temptation to "smoke just a few," although in the past this usually led to a return to regular smoking. One night, as the man and his wife were getting ready to go out to a party, he said, "You know, I am really going to be tempted to smoke there. Will you do me a favor? If you see me bumming a cigarette from someone, remind me of how much the kids want me to stay off cigarettes." (pp. 153–154)

Performing the Initial Steps of a Behavior Chain

Another self-management tactic featuring the manipulation of antecedent events entails behaving in a manner that ensures being confronted later with a discriminative stimulus (SD) that reliably evokes the target behavior. Although operant behavior is selected and maintained by its consequences, the occurrence of many behaviors can be controlled on a moment-to-moment basis by the presentation or removal of discriminative stimuli.

A self-management tactic more direct than adding response prompts to the environment is to behave in such a way that your future behavior makes contact with a powerful discriminative stimulus for the desired behavior. Many tasks consist of chains of responses. Each response in a behavior chain produces a change in the environment that serves both as a conditioned reinforcer for the response that preceded it and as an SD for the next response in the chain. Each response in a behavior chain must occur reliably in the presence of its related discriminative stimulus for the chain to be completed successfully. By performing part of a behavior chain (the self-management response) at one point in time, a person has changed her environment so that she will be confronted later with an SD that will evoke the next response in the chain and will lead to the completion of the task (the self-managed response). Skinner (1983b) provided an excellent example of this tactic.

Ten minutes before you leave your house for the day you hear a weather report: It will probably rain before you return. It occurs to you to take an umbrella (the sentence means quite literally what it says: The behavior of taking an umbrella occurs to you), but you are not yet able to execute it. Ten minutes later you leave without the umbrella. You can solve that kind of problem by executing as much of the behavior as possible when it occurs to you. Hang the umbrella on the doorknob, or put it through the handle of your briefcase, or in some other way start the process of taking it with you. (p. 240)

Removing Items Necessary for an Undesired Behavior

Another antecedent-based self-management tactic is to alter the environment so that an undesired behavior is less likely or, better yet, impossible to emit. The smoker who discards her cigarettes and the dieter who removes all cookies and chips from his house, car, and office have, for the moment at least, effectively controlled their smoking and junk food eating. Although other self-management efforts will probably be needed to refrain from seeking and reobtaining the harmful material (e.g., the ex-smoker who asked his wife to provide response prompts if she saw him asking for cigarettes at a party), ridding oneself of the items needed to engage in an undesired behavior is a good start.

Limiting Undesired Behavior to Restricted Stimulus Conditions

A person may be able to decrease the frequency of an undesired behavior by limiting the setting or stimulus conditions under which he engages in the behavior. To the extent that the restricted situation acquires stimulus control over the target behavior and access to the situation is infrequent or otherwise not reinforcing, the target behavior will occur less frequently. Imagine a man who habitually touches and scratches his face (he is aware of this bad habit because his wife often nags him about it and asks him to stop). In an effort to decrease the frequency of his face touching, the man decides that he will do two things. First, whenever he becomes aware that he is engaging in face touching, he will stop; second, at any time he can go to a bathroom and touch and rub his face for as long as he wants.

Nolan (1968) reported the case of a woman who used restricted stimulus conditions in an effort to quit smoking. The woman noted that she most often smoked when other people were around or when she watched television, read, or lay down to relax. It was decided that she would smoke only in a prescribed place, and that she would eliminate from that place the potentially reinforcing effects of other activities. She designated a specific chair as her “smoking chair” and positioned it so that she could not watch TV or easily engage in conversation while sitting in it. She asked other family members not to approach her or talk to her while she was in her smoking chair. She followed the procedure faithfully, and her smoking decreased from a baseline average of 30 cigarettes per day to 12 cigarettes per day. Nine days after beginning the program, the woman decided to try to reduce her smoking even further by making her smoking chair less accessible. She put the chair in the basement, and her smoking decreased to five cigarettes per day. A month after beginning the smoking chair program, the woman had quit smoking completely.

Goldiamond (1965) used a similar tactic to help a man who continually sulked when interacting with his wife. The husband was instructed to limit his sulking behavior to his “sulking stool,” which was placed in the garage. The man went to the garage whenever he felt like sulking, sat on his stool and sulked for as long as he wanted, and left when he was finished sulking. The man found that his sulking decreased considerably when he had to sit on a stool in the garage to do it.

Dedicating a Specific Environment for a Desired Behavior

A person may achieve some degree of stimulus control over a behavior that requires diligence and concentration by reserving or creating an environment where he will only engage in that behavior. For example, students have improved their study habits and professors their scholarly productivity when they have selected a specific place in which to study, free of other distractions, and have not engaged in any other behaviors such as daydreaming or letter writing in that place (Goldiamond, 1965). Skinner (1981b) suggested this kind of stimulus control strategy when he offered the following advice to aspiring writers:

Equally important are the conditions in which the behavior occurs. A convenient place is important. It should have all the facilities needed for the execution of writing. Pens, typewriters, recorders, files, books, a comfortable desk and chair. . . . Since the place is to take control of a particular kind of behavior, you should do nothing else there at any time. (p. 2)

People who do not have the luxury of devoting a certain environment to a single activity can create a special stimulus arrangement that can be turned on and off in the multipurpose setting, as in this example provided by Watson and Tharp (2007):

A man had in his room only one table, which he had to use for a variety of activities, such as writing letters, paying bills, watching TV, and eating. But when he wanted to do concentrated studying or writing, he always pulled his table away from the wall and sat on the other side of it. In that way, sitting on the other side of his table became the cue associated only with concentrated intellectual work. (p. 150)

Most students have a single personal computer that they use for their academic work and scholarly writing but also for completing administrative tasks, writing and reading personal e-mails, conducting family business, playing games, shopping online, browsing the Internet, and so on. In a manner similar to the man who sat on the other side of his table when studying and writing, a person could display a particular background on the computer’s desktop that signals that only academic or scholarly work is to be done on the computer. For example, when he sits down to study or write, a student could replace the standard display on his computer desktop, a photo of his dog, let’s say, with a solid green background. The green desktop signals that only scholarly work will be done. Over time the “scholarly work” desktop may acquire a degree of stimulus control over the desired behavior. If the student wants to quit working and do anything else on the computer, he must first change the background on the desktop. Doing so before he has fulfilled his time or productivity goal for a work session may generate some guilt that he can escape by returning to work.

This tactic can also be used to increase a desired behavior that is not being emitted at an acceptable rate because of competing undesired behaviors. In one case an adult insomniac reportedly went to bed about midnight but did not fall asleep until 3:00 or 4:00 A.M. Instead of falling asleep, the man would worry about several mundane problems and turn on the television. Treatment consisted of instructing the man to go to bed when he felt tired, but he was not to stay in bed if he was unable to sleep. If he wanted to think about his problems or watch television, he could do so, but he was to get out of bed and go into another room. When he again felt sleepy, he was to return to bed and try again to go to sleep. If he still could not sleep, he was to leave the bedroom again. The man reported getting up four or five times a night for the first few days of the program, but within 2 weeks he went to bed, stayed there, and fell asleep.

Self-Monitoring

Self-monitoring has been the subject of more research and clinical applications than any other self-management strategy. Self-monitoring (also called self-recording or self-observation) is a procedure whereby a person observes his behavior systematically and records the occurrence or nonoccurrence of a target behavior. Originally conceived as a method of clinical assessment for collecting data on behaviors that only the client could observe and record (e.g., eating, smoking, fingernail biting), self-monitoring soon became a major therapeutic intervention in its own right because of the reactive effects it often produced. Reactivity refers to the effects on a person’s behavior produced by an assessment or measurement procedure. In general, the more obtrusive an observation and measurement method, the greater will be the likelihood of reactivity (Haynes & Horn, 1982; Kazdin, 2001). When the person observing and recording the target behavior is the subject of the behavior change program, maximum obtrusiveness exists and reactivity is very likely.

[Figure 2: Example of a self-recording form used by an eighth-grade girl. The form consists of rows of squares beneath instructions to put down a “+” if she had been studying for the last few minutes and a “−” if she had not, at different times during the period. From “The Effect of Self-Recording on the Classroom Behavior of Two Eighth-Grade Students” by M. Broden, R. V. Hall, and B. Mitts, 1971, Journal of Applied Behavior Analysis, 4, p. 193. Copyright 1971 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.]

Although reactivity caused by a researcher’s measurement system represents uncontrolled variability in a study and must be minimized as much as possible, the reactive effects of self-monitoring are usually welcome from a clinical perspective. Not only does self-monitoring often change behavior, but also the change is typically in the educationally or therapeutically desired direction (Hayes & Cavior, 1977, 1980; Kirby, Fowler, & Baer, 1991; Maletzky, 1974).

Behavior therapists have had adult clients use self-monitoring to reduce overeating, decrease smoking (Lipinski, Black, Nelson, & Ciminero, 1975; McFall, 1977), and stop biting their nails (Maletzky, 1974). Self-monitoring has helped students with and without disabilities be on task more often in the classroom (Blick & Test, 1987; Kneedler & Hallahan, 1981; Wood, Murdock, Cronin, Dawson, & Kirby, 1998), decrease talk-outs and aggression (Gumpel & Shlomit, 2000; Martella, Leonard, Marchand-Martella, & Agran, 1993), improve their performance in a variety of academic subject areas (Harris, 1986; Hundert & Bucher, 1978; Lee & Tindal, 1994; Maag, Reid, & DiGangi, 1993; Moxley, Lutz, Ahlborn, Boley, & Armstrong, 1995; Wolfe, Heron, & Goddard, 2000), and complete homework assignments (Trammel, Schloss, & Alper, 1994). Classroom teachers have used self-monitoring to increase their use of positive statements during classroom instruction (Silvestri, 2004).

In one of the first published accounts of self-monitoring in the classroom, Broden, Hall, and Mitts (1971) analyzed the effects of self-recording on the behavior of two eighth-grade students. Liza was earning a D– in history and exhibited poor study behavior during the lecture format class. Using 10-second momentary time sampling for 30 minutes each day, an observer seated in the back of the classroom (Liza was not told she was being observed) produced a 7-day baseline showing that Liza exhibited study behaviors (e.g., faced the teacher, took notes when appropriate) an average of 30% of the observation intervals, despite two conferences with the school counselor in which she promised to “really try.” Prior to the eighth session, the counselor gave Liza a piece of paper with three rows of 10 squares (see Figure 2) and directed her to record her study behavior “when she thought of it” during her history class sessions. Some aspects of study behavior were discussed at this time, including a definition of what constituted studying.

Liza was instructed to take the slip to class each day and to record a “+” in the square if she was studying or had been doing so for the last few minutes, and a “−” if she was not studying at the time she thought to record. Sometime before the end of the school day she was to turn it in to the counselor. (p. 193)

Figure 3 shows the results of Liza’s self-monitoring. Her level of study behavior increased to 78% (as measured by the independent observer) and stayed at approximately that level during Self-Recording 1, quickly decreased to baseline levels when self-recording was terminated during Baseline 2, and averaged 80% during Self-Recording 2. During Self-Recording Plus Praise, Liza’s teacher attended to her whenever he could during history class and praised her for study behavior whenever possible. Liza’s level of study behavior during this condition increased to 88%.

[Figure 3: Percentage of observed intervals in which an eighth-grade girl paid attention in a history class across Baseline 1, Self-Recording 1, Baseline 2, Self-Recording 2, Self-Recording Plus Praise, Praise Only, and Baseline 3 conditions; a bottom panel shows teacher attention per session. From “The Effect of Self-Recording on the Classroom Behavior of Two Eighth-Grade Students” by M. Broden, R. V. Hall, and B. Mitts, 1971, Journal of Applied Behavior Analysis, 4, p. 194. Copyright 1971 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.]

The bottom part of Figure 3 shows the number of times per session the observer recorded teacher attention toward Liza. The fact that the frequency of teacher attention did not correlate in any apparent way with Liza’s study behavior during the first four phases of the experiment suggests that teacher attention—which often exerts a powerful influence on student behavior—was not a confounding variable and that the improvements in Liza’s study behavior most likely can be attributed to the self-recording procedure. However, the effects of self-monitoring may have been confounded by the fact that Liza turned her self-recording slips in to the counselor before the end of each school day, and at weekly student–counselor conferences she received praise from the counselor for recording slips with a high percentage of plus marks.

In the second experiment reported by Broden and colleagues (1971), self-monitoring was used with Stu, an eighth-grade student who talked out continually in class. Using a 10-second partial interval recording procedure to calculate the number of talk-outs per minute, an independent observer recorded Stu’s behavior during both parts of a math class that met before and after lunch. Self-recording was begun first during the before-lunch portion of the math class. The teacher handed Stu a slip of paper on which was marked a 2-inch by 5-inch rectangular box and the instruction, “Put a mark every time you talk out” (p. 196). Stu was told to use the recording slip and turn it in after lunch. At the top of the recording slip was a place for the student’s name and the date. No other instructions were given to Stu, nor were any consequences or other contingencies applied to his behavior. Self-recording was later implemented during the second half of the math class. During the prelunch portion of the math class, Stu’s baseline rate of 1.1 talk-outs per minute was followed by a rate of 0.3 talk-outs per minute when he self-recorded. After lunch Stu talked out at an average rate of 1.6 times per minute before self-recording and 0.5 times per minute during self-recording.

The combination reversal and multiple baseline design showed a clear functional relation between Stu’s reduced talk-outs and the self-monitoring procedure. During the final phase of the experiment, however, Stu talked out at rates equal to initial baseline levels, even though self-recording was in effect. Broden and colleagues (1971) suggested that the lack of effect produced by self-recording during the final phase may have resulted from the fact that “no contingencies were ever applied to differential rates of talking out and the slips thus lost their effectiveness” (p. 198). It is also possible that the initial decreases in talk-outs attributed to self-recording were confounded by Stu’s expectations of some form of teacher reinforcement.

As can be seen from these two experiments, it is extremely difficult to isolate self-monitoring as a straightforward, “clean” procedure; it almost always entails other contingencies. Nevertheless, the various and combined procedures that comprise self-monitoring have often proved to be effective in changing behavior. However, the effects of self-monitoring on the target behavior are sometimes temporary and modest and may require the implementation of reinforcement contingencies to maintain the desired behavior changes (e.g., Ballard & Glynn, 1975; Critchfield & Vargas, 1991).

Self-Evaluation

Self-monitoring is often combined with goal setting and self-evaluation. A person using self-evaluation (also called self-assessment) compares her performance with a predetermined goal or standard (e.g., Keller, Brady, & Taylor, 2005; Sweeney et al., 1993). For example, Grossi and Heward (1998) taught four restaurant trainees with developmental disabilities to select productivity goals, self-monitor their work, and self-evaluate their performance against a competitive productivity standard (i.e., the typical rate at which nondisabled restaurant employees perform the task in competitive settings). Job tasks in this study included scrubbing pots and pans, loading a dishwasher, busing and setting tables, and mopping and sweeping floors.

Self-evaluation training with each trainee took approximately 35 minutes across three sessions and consisted of five parts: (1) rationale for working faster (e.g., finding and keeping a job in a competitive setting), (2) goal setting (trainee was shown a simple line graph of his baseline performance compared to the competitive standard and prompted to set a productivity goal), (3) use of timer or stopwatch, (4) self-monitoring and how to chart self-recorded data on a graph that showed the competitive standard with a shaded area, and (5) self-evaluation (comparing their work against the competitive standard and making self-evaluative statements, e.g., “I’m not in the shaded area. I need to work faster.” or “Good, I’m in the area.”). When a trainee met his goal for three consecutive days, a new goal was selected. At that time all four trainees had selected goals within the competitive range.
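The self-evaluation step reduces to comparing a self-recorded count with the shaded competitive band on the trainee’s graph. A minimal sketch, using the 10-to-15-pot competitive range and self-evaluative statements reported for the pot-scrubbing task (the function itself is an illustration, not the study’s procedure):

```python
def self_evaluate(count: int, competitive_range: tuple = (10, 15)) -> str:
    """Compare a self-recorded count with the competitive standard
    (the shaded band on the graph) and return the matching
    self-evaluative statement."""
    low, _high = competitive_range
    if count >= low:  # at or above the band counts as meeting the standard
        return "Good, I'm in the area."
    return "I'm not in the shaded area. I need to work faster."

print(self_evaluate(12))  # Good, I'm in the area.
print(self_evaluate(4))   # I'm not in the shaded area. I need to work faster.
```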

The work productivity of all four trainees increased as a function of the self-evaluation intervention. Figure 4 shows the results for one of the participants, Chad, a 20-year-old male with mild mental retardation, cerebral palsy, and a seizure disorder. For the 3 months prior to the study, Chad had been training at two work tasks at the dishwashing job station: scrubbing cooking pots and pans and racking dishes (a six-step behavior chain for loading an empty dishwasher rack with dirty dishes). Throughout the study, pot scrubbing was measured during 10-minute observation sessions both before and after the daily lunch operation. Racking dishes was measured using a stopwatch to time to the nearest second how long a trainee took to load a rack of dishes; 4 to 8 racks per session were timed during the busiest hour of the lunch shift. After each observation session during baseline, Chad received feedback on the accuracy and quality of his work. No feedback was given during baseline on the productivity of his performance.

During baseline Chad scrubbed an average of 4.5 pots and pans every 10 minutes, and none of his 15 baseline trials were within the competitive range of 10 to 15 pots. Chad’s pot scrubbing rate increased to a mean of 11.7 pots during self-evaluation, with 76% of 89 self-evaluation trials at or above the competitive range. During baseline it took Chad a mean of 3 minutes 2 seconds to load one rack of dishes, and 19% of his 97 baseline trials were within the competitive range of 1 to 2 minutes. During self-evaluation, Chad’s time to load one rack improved to a mean of 1 minute 55 seconds, and 70% of the 114 self-evaluation trials were within the competitive range. At the end of the study, three of the four trainees stated that they liked to time and record their work performance. Although one trainee stated it was “too stressful” to time and record his work, he said that self-monitoring helped him show other people that he could do the work.

[Figure 4: Number of pots scrubbed in 10 minutes and mean and range of minutes to load a rack of dishes in the dishwasher during baseline and self-evaluation conditions. Shaded bands represent the competitive performance range of restaurant workers without disabilities. Dashed horizontal line represents Chad’s self-selected goal; no dashed line means that Chad’s goal was within the competitive range. Vertical lines through data points on the bottom graph show the range of Chad’s performance across 4 to 8 trials. From “Using Self-Evaluation to Improve the Work Productivity of Trainees in a Community-Based Restaurant Training Program” by T. A. Grossi and W. L. Heward, 1998, Education and Training in Mental Retardation and Developmental Disabilities, 33, p. 256. Copyright 1998 by the Division on Developmental Disabilities. Reprinted by permission.]

Self-Monitoring with Reinforcement

Self-monitoring is often part of an intervention package that includes reinforcement for meeting either self- or teacher-selected goals (e.g., Christian & Poling, 1997; Dunlap & Dunlap, 1989; Olympia et al., 1994; Rhode, Morgan, & Young, 1983). The reinforcer may be self-administered or teacher delivered. For example, Koegel and colleagues (1992) taught four children with autism between the ages of 6 and 11 to obtain their own reinforcers after they had self-recorded a criterion number of appropriate responses to questions from others (e.g., “Who drove you to school today?”).

Teacher-delivered reinforcement was a component of a self-monitoring intervention used by Martella and colleagues (1993) to help Brad, a 12-year-old student with mild mental retardation, reduce the number of negative statements he made during classroom activities (e.g., “I hate this #*%@! calculator,” “Math is crappy”). Brad self-recorded negative statements during two class periods, charted the number on a graph, and then compared his count with the count obtained by a student teacher. If Brad’s self-recorded data agreed with the teacher at a level of 80% or higher, he received his choice from a menu of “small” reinforcers (items costing 25 cents or less). If Brad’s self-recorded data were in agreement with the trainer’s data and were at or below a gradually declining criterion level for four consecutive sessions, he was allowed to choose a “large” (more than 25 cents) reinforcer (see Figure 13).
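One common way to quantify agreement between two frequency counts is total-count agreement (the smaller count divided by the larger, times 100). The study does not report which formula was used, so the sketch below is an assumption about the comparison, combined with the two-part contingency described above:

```python
def total_count_agreement(self_count: int, teacher_count: int) -> float:
    """Total-count agreement: smaller count / larger count * 100."""
    if self_count == teacher_count:
        return 100.0  # also covers the case where both counts are zero
    smaller, larger = sorted((self_count, teacher_count))
    return 100.0 * smaller / larger

def earned_reinforcer(self_count: int, teacher_count: int,
                      criterion: int, sessions_at_criterion: int) -> str:
    """Which reinforcer the contingency delivers this session:
    'small' for accurate self-recording (>= 80% agreement); 'large'
    when counts are also at or below the declining criterion for
    four consecutive sessions; 'none' otherwise."""
    if total_count_agreement(self_count, teacher_count) < 80.0:
        return "none"
    if self_count <= criterion and sessions_at_criterion >= 4:
        return "large"   # item costing more than 25 cents
    return "small"       # item costing 25 cents or less

print(earned_reinforcer(8, 10, 6, 1))  # small: 80% agreement, criterion streak not met
```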

Why Does Self-Monitoring Work?

The behavioral mechanisms that account for the effectiveness of self-monitoring are not fully understood. Some behavior theorists suggest that self-monitoring is effective in changing behavior because it evokes self-evaluative statements that serve either to reinforce desired behaviors or to punish undesired behaviors. Cautela (1971) hypothesized that a child who records on a chart that he has completed his chores may emit covert verbal responses (e.g., “I am a good boy”) that serve to reinforce chore completion. Malott (1981) suggested that self-monitoring improves performance because of what he called guilt control. Self-monitoring less-than-desirable behavior produces covert guilt statements that can be avoided by improving one’s performance. That is, the target behavior is strengthened through negative reinforcement by escape and avoidance of the guilty feelings that occur when one’s behavior is “bad.”

Descriptions of the self-monitoring techniques used by two famous authors are consistent with Malott’s guilt control hypothesis. Novelist Anthony Trollope, writing in his 1883 autobiography, stated:

When I have commenced a new book, I have always prepared a diary, divided into weeks, and carried on for the period which I have allowed myself for the completion of the work. In this I have entered, day by day, the number of pages I have written, so that if at any time I have slipped into idleness for a day or two, the record of that idleness has been there staring me in the face, and demanding of me increased labour, so that the deficiency might be supplied. . . . I have allotted myself so many pages a week. The average number has been about 40. It has been placed as low as 20, and has risen to 112. And as a page is an ambiguous term, my page has been made to contain 250 words; and as words, if not watched, will have a tendency to straggle, I have had every word counted as I went. . . . There has ever been the record before me, and a week passed with an insufficient number of pages has been a blister to my eye and a month so disgraced would have been a sorrow to my heart. (from Wallace, 1977, p. 518)

Guilt control also helped to motivate the legendary Ernest Hemingway. Novelist Irving Wallace (1977) reported the following extract from an article by George Plimpton about the self-monitoring technique used by Hemingway.

He keeps track of his daily progress—“so as not to kid myself”—on a large chart made out of the side of a cardboard packing case and set up against the wall under the nose of a mounted gazelle head. The numbers on the chart showing the daily output of words differ from 450, 575, 462, 1250, back to 512, the higher figures on days Hemingway put in extra work so he won’t feel guilty spending the following day fishing on the gulf stream. (p. 518)

Exactly what principles of behavior are operating when self-monitoring results in a change in the target behavior is not known because much of the self-monitoring procedure consists of private, covert behaviors. In addition to the problem of access to these private events, self-monitoring is usually confounded by other variables. Self-monitoring is often part of a self-management package in which contingencies of reinforcement, punishment, or both, are included, either explicitly (e.g., “If I run 10 miles this week, I can go to the movies”) or implicitly (e.g., “I’ve got to show my record of calories consumed to my wife”). Regardless of the principles of behavior involved, however, self-monitoring is often an effective procedure for changing one’s behavior.

Guidelines and Procedures for Self-Monitoring

Practitioners should consider the following suggestions when implementing self-monitoring with their students and clients.

Provide Materials That Make Self-Monitoring Easy

If self-monitoring is difficult, cumbersome, or time-consuming, at best it will be ineffective and disliked by the participant, and at worst it may have negative effects on the behavior. Participants should be provided with materials and devices that make self-monitoring as easy and efficient as possible. All of the devices and procedures for measuring behavior (e.g., paper and pencil, wrist counters, hand-tally counters, timers, stopwatches) can be used for self-monitoring. For example, a teacher who wants to self-monitor the number of praise statements she makes to students during a class period could put ten pennies in her pocket prior to the start of class. Each time she praises a student’s behavior, she moves one coin to another pocket.

The recording forms used for most self-monitoring applications can and should be very simple. Self-recording forms consisting of little more than a series of boxes or squares such as the one shown in Figure 2 are often effective. At various intervals the participant might write a plus or minus, circle yes or no, or put an X through a smiling face or sad face, in a kind of momentary time sampling procedure; or he could record a count of the number of responses made during a just-completed interval.
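Scoring such a form reduces to counting marks. A throwaway sketch; the mark string below mirrors the 13 sample marks shown on the Figure 2 form (7 pluses, 6 minuses):

```python
def percent_on_task(marks: str) -> float:
    """Percent of sampled moments scored on task, given a string of
    self-recorded '+' (studying) and '-' (not studying) marks.
    Unfilled squares (any other character) are ignored."""
    scored = [m for m in marks if m in "+-"]
    return 100.0 * scored.count("+") / len(scored) if scored else 0.0

print(round(percent_on_task("++--++-+-+-+-"), 1))  # 53.8
```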

Recording forms can be created for self-monitoring specialized tasks or a chain of behaviors. Dunlap and Dunlap (1989) taught students with learning disabilities to self-monitor the steps they used to solve subtraction problems with regrouping. Each student self-monitored his work by recording a plus or a minus next to each step on an individualized checklist of steps (e.g., “I underlined all the top numbers that were smaller than the bottom.”; “I crossed out only the number next to the underlined number and made it one less.” [p. 311]) designed to keep the student from committing specific types of errors.

Lo (2003) taught elementary students who were at risk for behavioral disorders to use the form shown in Figure 5 to self-monitor whether they worked quietly, evaluated their work, and followed a prescribed sequence for obtaining teacher assistance during independent seatwork activities. The form, which was taped to each student’s desk, functioned as a reminder to students of the expected behaviors and as a device on which they self-recorded those behaviors.

[Figure 5: Form used by elementary school students to self-monitor whether they worked quietly and followed a prescribed sequence for recruiting teacher assistance. From “Functional Assessment and Individualized Intervention Plans: Increasing the Behavioral Adjustment of Urban Learners in General and Special Education Settings” by Y. Lo, 2003. Unpublished doctoral dissertation. Columbus, OH: The Ohio State University. Reprinted by permission.]

“Countoons” are self-monitoring forms that illustrate the contingency with a series of cartoon-like frames. A countoon reminds young children of not only what behavior to record, but also what consequences will follow if they meet their performance criteria. Daly and Ranalli (2003) created six-frame countoons that enable students to self-record an inappropriate behavior and an incompatible appropriate behavior. In the countoon shown in Figure 6, Frames F1 and F4 show the student doing her math work, appropriate behavior that is counted in Frame F5. The criterion number of math problems to meet the contingency—in this case 10—is also indicated in Frame F5. Frame F2 shows the student talking with a friend, the inappropriate behavior to be counted in F3. The student must not chat more than six times to meet the contingency. The “What Happens” frame (F6) depicts the reward the student will earn by meeting both parts of the contingency. Daly and Ranalli have provided detailed steps for creating and using countoons to teach self-management skills to children.
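The two-part contingency the countoon depicts is simply a conjunction. A minimal sketch using the criteria named above (at least 10 math problems, no more than 6 chats):

```python
def countoon_met(math_count: int, chat_count: int,
                 math_goal: int = 10, chat_limit: int = 6) -> bool:
    """Both parts of the countoon contingency must be satisfied:
    enough math problems completed AND chatting at or under the limit."""
    return math_count >= math_goal and chat_count <= chat_limit

print(countoon_met(10, 4))  # True: earns the reward shown in frame F6
print(countoon_met(12, 7))  # False: too many chats
```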

Self-monitoring forms can also be designed to enable self-recording of multiple tasks across days. For example, teachers working with high school-level students might employ the Classroom Performance Record (CPR) developed by Young and colleagues (1991) as a way to assist students with monitoring their assignments, completion of homework, points earned, and citizenship scores. The form also provides students with information about their current status in the class, their likely semester grade, and tips for improving their performance.

Provide Supplementary Cues or Prompts

Although some self-recording devices—a recording form taped to a student’s desk, a notepad for counting calories carried in a dieter’s shirt pocket, a golf counter on a teacher’s wrist—serve as continual reminders to self-monitor, additional prompts or cues to self-monitor are often helpful. Researchers and practitioners have used a variety of auditory, visual, and tactile stimuli as cues and prompts for self-monitoring.

Auditory prompts in the form of prerecorded signals or tones have been widely used to cue self-monitoring in classrooms (e.g., Blick & Test, 1987; Todd, Horner, & Sugai, 1999). For example, second-graders in a study by Glynn, Thomas, and Shee (1973) placed a checkmark in a series of squares if they thought they were on task at the moment they heard a tape-recorded beep. A total of 10 beeps occurred at random intervals during a 30-minute class period. Using a similar procedure, Hallahan, Lloyd, Kosiewicz, Kauffman, and Graves (1979) had an 8-year-old boy place a checkmark next to Yes or No under the heading “Was I paying attention?” when he heard a tone played at random intervals by a tape recorder.
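Preparing such a random-interval signal tape amounts to picking signal times uniformly within the period. A hypothetical helper, not a procedure the studies describe in any detail:

```python
import random

def beep_times(n_beeps: int = 10, period_minutes: float = 30.0) -> list:
    """Random moments (in minutes) within a class period at which a
    prerecorded tone would cue students to self-record on-task behavior."""
    return sorted(random.uniform(0.0, period_minutes) for _ in range(n_beeps))

times = beep_times()
print(len(times), all(0.0 <= t <= 30.0 for t in times))  # 10 True
```

Because the moments are unpredictable, students cannot time their on-task behavior to the signal, which is the point of random rather than fixed intervals.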

Self-Management

Figure 6 Example of a countoon that can be taped to a student’s desk as a reminder of target behaviors, the need to self-record, and the consequence for meeting the contingency. From “Using Countoons to Teach Self-Monitoring Skills” by P. M. Daly and P. Ranalli, 2003, Teaching Exceptional Children, 35(5), p. 32. Copyright 2003 by the Council for Exceptional Children. Reprinted by permission.

Ludwig (2004) used visual cues written on the classroom board to prompt kindergarten children to self-record their productivity during a morning seatwork activity in which they wrote answers on individual worksheets to items, problems, or questions that the teacher had printed on a large whiteboard on the classroom wall. The work was divided into 14 sections by topic and covered a variety of curriculum areas (e.g., spelling, reading comprehension, addition and subtraction problems, telling time, money). At the end of each section the experimenter drew a smiley face with a number from 1 to 14. The numbered smiley faces drawn on the board corresponded to 14 numbered smiley faces on each student’s self-monitoring card.

Tactile prompts can also be used to signal self-recording opportunities. For example, the MotivAider (www.habitchange.com) is a small electronic device that vibrates at fixed or variable time intervals programmed by the user. This device is excellent for signaling people to self-monitor or perform other self-management tasks, or to prompt practitioners to attend to the behavior of students or clients.8

Whatever form they take, prompts to self-monitor should be as unobtrusive as possible so they do not disrupt the participant or others in the setting. As a general rule, practitioners should provide frequent prompts to self-monitor at the beginning of a self-management intervention and gradually reduce their number as the participant becomes accustomed to self-monitoring.

8Flaute, Peterson, Van Norman, Riffle, and Eakins (2005) described 20 examples for using a MotivAider to improve behavior and productivity in the classroom.

Self-Monitor the Most Important Dimension of the Target Behavior

Self-monitoring is an act of measurement. But as we saw, there are a variety of dimensions by which a given target behavior could be measured. What dimension of the target behavior should be self-monitored? A person should self-monitor the target behavior dimension that, should desired changes in its value be achieved, would yield the most direct and significant progress toward the person’s goal for the self-management program. For example, a man wishing to lose weight by eating less food could measure the number of bites he takes throughout the day (count), the number of bites per minute during each meal (rate), how much time elapses from the time he sits down at the table to when he takes the first bite (latency), how long he pauses between bites (interresponse time), or the amount of time his meals last (duration). Although daily measures of each of these dimensions would provide the man with some quantitative information about his eating, none of the dimensions would be as directly related to his goal as would a count of the total number of calories he consumes each day.

Several studies have examined the question of whether students should self-monitor their on-task behavior or their academic productivity. Harris (1986) conducted one of the most frequently cited studies in this line of research. She taught four elementary students with learning disabilities to self-monitor either their on-task behavior or the number of academic responses they completed while practicing spelling. During the self-monitoring of attention condition, students asked themselves “Was I paying attention?” and checked their answer under Yes and No columns on a card whenever they heard a tape-recorded tone. During the self-monitoring of productivity condition, students counted the number of spelling words written at the end of the practice period. Both self-monitoring procedures resulted in increased on-task behavior for all four students. However, for three of the four students, self-monitoring of productivity resulted in higher academic response rates than did self-monitoring of on-task behavior.

Similar results have been reported in other studies comparing the differential effects of self-monitoring on-task behavior or productivity (e.g., Maag et al., 1993; Lloyd, Bateman, Landrum, & Hallahan, 1989; Reid & Harris, 1993). Although both procedures increase on-task behavior, students tend to complete more assignments when self-monitoring academic responses than when self-recording on-task behavior. Additionally, most students prefer self-monitoring academic productivity to on-task behavior.

In general, we recommend teaching students to self-monitor a measure of academic productivity (e.g., number of problems or items attempted, number correct) rather than whether they are on task. This is because increasing on-task behavior, whether by self-monitoring or by contingent reinforcement, does not necessarily result in a collateral increase in productivity (e.g., Marholin & Steinman, 1977; McLaughlin & Malaby, 1972). By contrast, when productivity is increased, improvements in on-task behavior almost always occur. However, a student whose persistent off-task and disruptive behaviors are creating problems for him or others in the classroom may benefit more from self-monitoring on-task behavior, at least initially.

Self-Monitor Early and Often

In general, each occurrence of the target behavior should be self-recorded as soon as possible. However, the act of self-monitoring a behavior the person wants to increase should not disrupt “the flow of the behavior” (Critchfield, 1999). Self-monitoring of target behaviors that produce natural or contrived response products (e.g., answers on academic worksheets, words written) can be conducted after the session by measuring those permanent products.

Relevant aspects of some behaviors can be self-monitored even before the target behavior itself has occurred. Self-recording a response early in a behavior chain that leads to an undesired behavior the person wants to decrease may be more effective in changing the target behavior in the desired direction than recording the terminal behavior in the chain. For example, Rozensky (1974) reported that, when a woman who had been a heavy smoker for 25 years recorded the time and place she smoked each cigarette, there was little change in her rate of smoking. She then began to record the same information each time she noticed herself beginning the chain of behaviors that would lead to smoking: reaching for her cigarettes, removing a cigarette from the pack, and so on. She stopped smoking within a few weeks of self-monitoring in this fashion.

Generally, a person should self-monitor more often at the beginning of a behavior change program. If and as performance improves, the frequency of self-monitoring can be decreased. For example, Rhode and colleagues (1983) had students with behavioral disorders begin self-evaluating (on a 0- to 5-point scale) the extent to which they followed classroom rules and completed academic work correctly at the end of each 15-minute interval throughout the school day. Over the course of the study, the self-evaluation intervals were increased gradually, first to every 20 minutes, then every 30 minutes, then once per hour. Eventually, the self-evaluation cards were withdrawn and students reported their self-evaluations verbally. In the final self-evaluation condition, the students verbally self-evaluated their academic work and compliance with classroom rules on an average of every 2 days (i.e., a VR 2-day schedule).

Reinforce Accurate Self-Monitoring

Some studies have found little correlation between the accuracy of self-monitoring and its effectiveness in changing the behavior being recorded (e.g., Kneedler & Hallahan, 1981; Marshall, Lloyd, & Hallahan, 1993). It appears that accurate self-monitoring is neither a sufficient nor a necessary condition for behavior change. For example, Hundert and Bucher (1978) found that, even though the students became highly accurate in self-scoring their arithmetic, their performance on the arithmetic itself did not improve. On the other hand, in the Broden and colleagues (1971) study, both Liza’s and Stu’s behavior improved even though their self-recorded data seldom matched the data from independent observers.

Nevertheless, accurate self-monitoring is desirable, especially when participants are using their self-recorded data as the basis for self-evaluation or self-administered consequences.

Although some studies have shown that young children can accurately self-record their behavior without specific external contingencies for accuracy (e.g., Ballard & Glynn, 1975; Glynn, Thomas, & Shee, 1973), other researchers have reported low agreement between the self-recordings of children and the data collected by independent observers (Kaufman & O’Leary, 1972; Turkewitz, O’Leary, & Ironsmith, 1975). One factor that seems to affect the accuracy of self-scoring is the use of self-reported scores as a basis for reinforcement. Santogrossi, O’Leary, Romanczyk, and Kaufman (1973) found that, when children were allowed to evaluate their own work and those self-produced evaluations were used to determine levels of token reinforcement, their self-monitoring accuracy deteriorated over time. Similarly, Hundert and Bucher (1978) found that students who previously had accurately self-scored their arithmetic assignments greatly exaggerated their scores when higher scores resulted in points that could be exchanged for prizes.

Rewarding children for producing self-recorded data that match the data of an independent observer and spot-checking students’ self-scoring reports are two procedures that have been used successfully to increase the accuracy of self-monitoring by young children. Drabman, Spitalnik, and O’Leary (1973) used these procedures in teaching children with behavior disorders to self-evaluate their own classroom behavior.

Now something different is going to happen. If you get a rating within plus or minus one point of my rating, you can keep all your points. You lose all your points if you are off my rating by more than plus or minus one point. In addition, if you match my rating exactly you get a bonus point. (O’Leary, 1977, p. 204)
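The quoted matching contingency reduces to a simple scoring rule. As a sketch (the function and parameter names are ours; only the plus-or-minus-one window and the bonus point come from the quotation):

```python
def points_after_matching(self_rating, teacher_rating, points_earned):
    """Apply the matching contingency described by Drabman, Spitalnik, and
    O'Leary (1973): an exact match earns a bonus point, a rating within
    plus or minus one point keeps all points, and a larger discrepancy
    forfeits them all."""
    discrepancy = abs(self_rating - teacher_rating)
    if discrepancy == 0:
        return points_earned + 1  # exact match: bonus point
    if discrepancy <= 1:
        return points_earned      # within one point: keep all points
    return 0                      # off by more than one point: lose all points
```

For example, a child who self-rates 4 when the teacher rates 4 keeps her points plus a bonus, while a child who self-rates 5 when the teacher rates 2 loses everything.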

After the children demonstrated that they could evaluate their own behavior reliably, the teacher began checking only 50% of the children’s self-ratings by pulling names from a hat at the end of the period, then 33%, then 25%, then 12%. During the last 12 days of the study she did not check any child’s self-rating. Throughout this period of reduced checks and finally no checks at all, the children continued to self-evaluate accurately. Rhode and colleagues (1983) used a similar faded matching technique.

Self-Administered Consequences

Arranging to have specified consequences follow occurrences (or nonoccurrences) of one’s behavior is one of the fundamental approaches to self-management. In this section we review some of the tactics people have used to self-reinforce and self-punish. First, we briefly examine some of the conceptual issues raised by the concept of “self-reinforcement.”

Is Self-Reinforcement Possible?

Skinner (1953) pointed out that self-reinforcement should not be considered synonymous with the principle of operant reinforcement.

The place of operant reinforcement in self-control is not clear. In one sense, all reinforcements are self-administered since a response may be regarded as “producing” its reinforcement, but “reinforcing one’s own behavior” is more than this. . . . Self-reinforcement of operant behavior presupposes that the individual has it in his power to obtain reinforcement but does not do so until a particular response has been emitted. This might be the case if a man denied himself all social contacts until he had finished a particular job. Something of this sort unquestionably happens, but is it operant reinforcement? It is roughly parallel to the procedure in conditioning the behavior of another person. But it must be remembered that the individual may at any moment drop the work in hand and obtain the reinforcement. We have to account for his not doing so. It may be that such indulgent behavior has been punished—say, with disapproval—except when a piece of work has just been completed. (pp. 237–238)

In his discussion of self-reinforcement, Goldiamond (1976) continued with Skinner’s example and stated that the fact that a person “does not cheat, but engages in the task, cannot be explained simply by resort to his self-reinforcement by social contact, which he makes available contingent on his finishing the job” (p. 510). In other words, the variables influencing the controlling response (in this case, not allowing oneself social contact until the job is complete) must still be accounted for; simply citing self-reinforcement as the cause is an explanatory fiction.

The issue is not whether procedures labeled as “self-reinforcement” often work to change behavior in ways that resemble the defining effect of reinforcement; they do. However, a careful examination of instances of self-reinforcement reveals that something more than, or different from, a straightforward application of positive reinforcement is involved (e.g., Brigham, 1980; Catania, 1975, 1976; Goldiamond, 1976a, 1976b; Rachlin, 1977). As a technical term, self-reinforcement (as is self-punishment) is a misnomer, and the issue is not merely a semantic one, as some writers have suggested (e.g., Mahoney, 1976). Attributing the effectiveness of a behavior change tactic to a well-understood principle of behavior when something other than, or in addition to, that principle is operating overlooks relevant variables that may hold the key to a fuller understanding of the tactic. Considering the analysis of a behavioral episode complete once it has been identified as a case of self-reinforcement precludes further search for other relevant variables.

We agree with Malott (2005a; Malott & Harrison, 2002; Malott & Suarez, 2004), who argued that performance-management contingencies, whether designed and implemented by oneself or by others, are best viewed as rule-governed analogs of reinforcement and punishment contingencies because the response-to-consequence delay is too great.9 Malott (2005a) provided the following examples of self-management contingencies as analogs to negative reinforcement and punishment contingencies:

[Consider the] contingency, Every day I eat more than 1,250 calories, I will give someone else $5 to spend frivolously, like a despised roommate or despised charity, though for many of us, the loss of $5 to a beloved charity is sufficiently aversive that the thought of it will punish overeating. [This] contingency is an analog to a penalty [punishment] contingency, an analog because the actual loss of the $5 will normally occur more than 1 minute after exceeding the 1,250 calorie limit. Such analog penalty contingencies are effective in decreasing undesirable behavior. And to increase desirable behavior, an analog to an avoidance [negative reinforcement] contingency works well: Every day I exercise for one hour, I avoid paying a $5 fine. But, if you have not finished your hour by midnight, you must pay the $5. (p. 519, emphasis in original, words in brackets added)
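Malott’s two analog contingencies amount to simple end-of-day rules. A sketch using the dollar and calorie values from his examples (the function names are our assumptions for illustration):

```python
def penalty_analog_fine(calories_eaten, limit=1250, fine=5):
    """Analog penalty contingency (after Malott, 2005a): eating over the
    daily calorie limit costs a $5 fine paid to someone else; staying at
    or under the limit costs nothing."""
    return fine if calories_eaten > limit else 0

def avoidance_analog_fine(minutes_exercised, required_minutes=60, fine=5):
    """Analog avoidance contingency: completing the daily hour of exercise
    avoids the $5 fine otherwise due at midnight."""
    return 0 if minutes_exercised >= required_minutes else fine
```

Note that in both rules the consequence is small and delayed well beyond the behavior, which is exactly why Malott treats them as rule-governed analogs rather than direct reinforcement or punishment contingencies.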

Self-Administered Consequences to Increase Desired Behavior

A person can increase the future frequency of a target behavior in a self-management program by applying contingencies that are analogs to positive reinforcement and negative reinforcement.

Self-Management Analogs of Positive Reinforcement

Several self-reinforcement studies with schoolchildren have involved positive reinforcement in which participants obtained a self-determined number of tokens, points, or minutes of free time based on a self-assessment of their performance (Ballard & Glynn, 1975; Bolstad & Johnson, 1972; Glynn, 1970; Koegel et al., 1992; Olympia et al., 1994).

The effects of treatments involving self-administered rewards are difficult to evaluate because they are typically confounded by self-monitoring and self-evaluation. However, in a study by Ballard and Glynn (1975), third-graders were taught after a baseline condition to self-score and self-record several aspects of their writing: number of sentences, number of describing words, and number of action words. Self-monitoring had no effect on any of the variables measured even though the students handed in their counting sheets with their writing each day. The children were then given a notebook in which to record their points, which could be exchanged at the rate of one point per minute for each student’s choice of activities during an earned-time period each day. The self-reinforcement procedure resulted in large increases in each of the three dependent variables.

9The critical importance of immediacy in reinforcement is discussed in the chapter entitled “Positive Reinforcement.”

Self-administered reinforcement does not have to be self-delivered: The learner can make a response that results in another person providing the reinforcer. For example, in studies on self-recruited reinforcement, students are taught to periodically self-evaluate their work and then show it to their teacher and request feedback or assistance (e.g., Alber, Heward, & Hippler, 1999; Craft, Alber, & Heward, 1998; Mank & Horner, 1987; Smith & Sugai, 2000). In a sense, students administer their own reinforcer by recruiting the teacher’s attention, which often results in praise and other forms of reinforcement (see Alber & Heward, 2000, for a review).

Todd and colleagues (1999) taught an elementary student to use a self-management system that included self-monitoring, self-evaluation, and self-recruited reinforcement. Kyle was a 9-year-old boy diagnosed with a learning disability and receiving special education services in reading, math, and language arts. Kyle’s IEP also included several objectives for problem behavior (e.g., disrupting independent and group activities, teasing and taunting classmates, and making sexually inappropriate comments). The classroom teacher’s goal setting and daily evaluation had proven ineffective. An “action team” conducted a functional assessment and designed a support plan that included the self-management system.

Kyle was taught how to use the self-management system during two 15-minute one-on-one training sessions in which he practiced self-recording numerous role-played examples and nonexamples of on-task and off-task behavior and learned appropriate ways to recruit teacher attention and praise. For self-monitoring, Kyle listened with a single earplug to a 50-minute cassette tape on which 13 checkpoints (e.g., “check one,” “check two”) had been prerecorded on a VI 4-minute schedule (ranging from 3- to 5-minute intervals between checkpoints). Each time Kyle heard a checkpoint, he marked a plus (if he had been working quietly and keeping his hands, feet, and objects to himself) or a zero (teasing peers and/or not working quietly) on a self-recording card.
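A checkpoint tape like Kyle’s can be scripted the same way a variable-interval schedule is programmed: each inter-checkpoint interval is drawn at random around the schedule’s mean. A sketch using the values reported by Todd and colleagues (the function name and uniform sampling are our assumptions):

```python
import random

def vi_checkpoint_times(num_checkpoints=13, mean_min=4.0, spread_min=1.0, seed=None):
    """Cumulative checkpoint times (in minutes) for a VI prompt tape.
    Each inter-checkpoint interval is drawn uniformly from
    [mean - spread, mean + spread], e.g., a VI 4-minute schedule with
    intervals ranging from 3 to 5 minutes (Todd, Horner, & Sugai, 1999)."""
    rng = random.Random(seed)
    elapsed, times = 0.0, []
    for _ in range(num_checkpoints):
        elapsed += rng.uniform(mean_min - spread_min, mean_min + spread_min)
        times.append(round(elapsed, 1))
    return times
</n```

The SM3 phase’s longer tape would correspond to something like `vi_checkpoint_times(16, 5.0, 1.0)`, i.e., 16 checkpoints on a VI 5-minute schedule.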

Todd and colleagues (1999) described the recruitment of teacher praise and how Kyle’s special program was integrated into the reinforcement system for the entire class:

Each time Kyle marked three pluses on his card he raised his hand (during instruction) or walked up to the teacher (during group project time) and requested feedback on his performance. The teacher acknowledged Kyle’s good work and placed a mark on his self-monitoring card to show where he should begin a new count of three pluses. In addition to these within-session contingencies, Kyle could earn a self-manager sticker at the end of each class period if he had no more than two zeroes for the period. Stickers were earned by all students in the class for appropriate behavior and were pooled weekly for class rewards. Because these stickers were combined, Kyle’s stickers were valued by all the students and provided opportunities for positive peer attention. (p. 70)

Procedures for the second self-management phase (SM2) were the same as the first phase; during the third phase (SM3) Kyle used a 95-minute tape on which 16 checkpoints were prerecorded on a VI 5-minute schedule (ranging from 4 to 6 minutes). When Kyle used the self-management system, the percentage of intervals in which he engaged in problem behavior was much lower than baseline levels (see Figure 7). Large increases in on-task behavior and completion of academic work were also noted. An important outcome of this study was that Kyle’s teacher praised him more often. Significantly, the self-management intervention was initiated in Class Period B (bottom graph in Figure 7) at the teacher’s request because she had noted an immediate and “dramatic change” in Kyle’s performance when he began self-monitoring and recruiting teacher praise in Class Period A. This outcome is strong evidence for the social validity of the self-management intervention.

Self-Management Analogs of Negative Reinforcement

Many successful self-management interventions involve self-determined escape and avoidance contingencies that are analogous to negative reinforcement. Most of the case studies featured in Malott and Harrison’s (2002) excellent book on self-management, I’ll Stop Procrastinating When I Get Around To It, feature escape and avoidance contingencies in which emitting the target behavior enabled the person to avoid an aversive event. For example:

Target behavior/objective: Make a daily journal entry to be able to remember the interesting things that had happened to include in a weekly letter to my parents.
Self-management contingency: Each day I failed to make a journal entry, I had to do my friend’s chores, including doing the dishes and laundry. (Garner, 2002)

Target behavior/objective: Run 30 minutes, 3 times per week.
Self-management contingency: I had to pay the piper $3 at 10:00 P.M. Sunday for every day I’d run less than three times that week. I could count only one run per day. (Seymour, 2002)

Target behavior/objective: Practice guitar for half an hour each day of the week before 11:00 P.M.
Self-management contingency: Do 50 sit-ups at 11:00 P.M. on Sundays for every day that I did not practice during the week. (Knittel, 2002)

In response to the uneasiness some people may feel about using negative reinforcement to control their behavior, Malott (2002) made the following case for building “pleasant aversive control” into self-management programs.

Aversive control doesn’t have to be aversive! . . . Here’s what I think you need in order to have a pleasant aversive control procedure. You need to make sure that the aversive consequence, the penalty, is small. And you need to make sure the penalty is usually avoided—the avoidance response needs to be one that the person will readily make, most of the time, as long as the avoidance procedure is in effect.

Our everyday life is full of such avoidance procedures, and yet they don’t make us miserable. You don’t have an anxiety attack every time you walk through a doorway, even though you might hurt yourself if you don’t avoid bumping into the doorjamb. And you don’t break out in a cold sweat every time you put the leftovers in the refrigerator, and thereby avoid leaving them out to spoil. So that leads us to the Self-Management Rule: Don’t hesitate to use an avoidance procedure to get yourself to do something you want to do anyway. Just be sure the aversive outcome is as small as possible (but not so small it’s ineffective) and the response is easy to make. (p. 8-2)

Self-Administered Consequences to Decrease Undesired Behavior

The frequency of an undesirable behavior can be decreased by self-administered consequences analogous to positive punishment or negative punishment.

Self-Management Analogs of Positive Punishment

A person can decrease the frequency of an undesired behavior by following each occurrence with the onset of a painful stimulus or aversive activity. Mahoney (1971) reported a case study in which a man beleaguered by obsessive thoughts wore a heavy rubber band around his wrist. Each time he experienced an obsessive thought, he snapped the rubber band, delivering a brief, painful sensation to his wrist. A 15-year-old girl who had compulsively pulled her hair for years, to the point of creating bald spots, also used contingent self-delivered snaps of a rubber band on her wrist to stop her habit (Mastellone, 1974). Another woman stopped her hair pulling by performing 15 sit-ups each time she pulled her hair or had the urge to do so (MacNeil & Thomas, 1976). Powell and Azrin (1968) designed a special cigarette case that delivered a one-second electric shock when opened. The controlling responses by a person using such a device as part of a self-management program are carrying the case and smoking only cigarettes that he has personally removed from it.

Figure 7 Problem behaviors by a 9-year-old boy during 10-minute probes across two class periods during baseline and self-management conditions. The figure plots the percentage of intervals of problem behavior across sessions, with Class Period A in the top panel and Class Period B in the bottom panel. From “Self-Monitoring and Self-Recruited Praise: Effects on Problem Behavior, Academic Engagement, and Work Completion in a Typical Classroom” by A. W. Todd, R. H. Horner, and G. Sugai, 1999, Journal of Positive Behavior Interventions, 1, p. 71. Copyright 1999 by Pro-Ed, Inc. Reprinted by permission.

Self-administering a positive practice overcorrection procedure also qualifies as an example of self-administered positive punishment. For example, a teenage girl who frequently said don’t instead of doesn’t with the third person singular (e.g., “She don’t like that one”) used this form of self-administered positive punishment to decrease the frequency of her speech error (Heward, Dardig, & Rossett, 1979). Each time the girl observed herself saying don’t when she should have said doesn’t, she repeated the complete sentence she had just spoken 10 times in a row using correct grammar. She wore a wrist counter that reminded her to listen to her speech and to keep track of the number of times she employed the positive practice procedure.

Self-Management Analogs of Negative Punishment

Self-administered analogs to negative punishment consist of arranging the loss of reinforcers (response cost) or denying oneself access to reinforcement for a specified period of time (time-out) contingent on the occurrence of the target behavior. Response cost and time-out contingencies are widely used self-management strategies. The most commonly applied self-administered response cost procedure is paying a small fine each time the target behavior occurs. In one study, smokers reduced their rate of smoking by tearing up a dollar bill each time they lit up (Axelrod, Hall, Weis, & Rohrer, 1971). Response cost procedures have also been used effectively with elementary school children who have self-determined the number of tokens they should lose for inappropriate social behavior (Kaufman & O’Leary, 1972) or for poor academic work (Humphrey, Karoly, & Kirschenbaum, 1978).

James (1981) taught an 18-year-old man who had stuttered severely since the age of 6 to use a time-out from speaking procedure. Whenever he observed himself stuttering, the young man immediately stopped talking for at least 2 seconds, after which he could begin speaking again. His frequency of dysfluencies decreased markedly. If talking is reinforcing, then this procedure might function as time-out (e.g., not allowing oneself to engage in a preferred activity for a period of time).

Recommendations for Self-Administered Consequences

Persons designing and implementing self-administered consequences should consider the following recommendations.

Select Small, Easy-to-Deliver Consequences

Both rewards and penalties used in self-management programs should be small and easily delivered. A common mistake in designing self-management programs is selecting consequences that are big and grand. Although a person may believe that the promise of a large reward (or the threat of a severe aversive event) will motivate him to meet his self-determined performance criteria, large consequences often work against the program’s success. Self-selected rewards and punishing consequences should not be costly, elaborate, time-consuming, or too severe. If they are, the person may not be able (in the case of grand rewards) or willing (in the case of severe aversive events) to deliver them immediately and consistently.

In general, it is better to use small consequences that can be obtained immediately and frequently. This is particularly important with consequences intended to function as punishers, which, in order to be most effective, must be delivered immediately every time the behavior targeted for reduction is emitted.

Set a Meaningful But Easy-to-Meet Criterion for Reinforcement

When designing contingencies involving self-administered consequences, a person should guard against making the same two mistakes practitioners often make when implementing reinforcement contingencies with students and clients: (1) setting expectations so low that improvement in the current level of performance is not necessary to obtain the self-administered reward, or (2) making the initial performance criterion too high (the more common mistake), thereby effectively programming an extinction contingency that may cause the person to give up on self-management altogether. Keys to the effectiveness of any reinforcement-based intervention are setting an initial criterion that ensures that the person’s behavior makes early contact with reinforcement and that continued reinforcement requires improvements over baseline levels. The criterion-setting formulae (Heward, 1980) provide guidelines for making those determinations.

Eliminate “Bootleg Reinforcement”

Bootleg reinforcement (access to the specified reward or to other equally reinforcing items or events without meeting the response requirements of the contingency) is a common downfall of self-management projects. A person who obtains a comfortable supply of smuggled rewards is less likely to work hard to earn response-contingent rewards.

Bootleg reinforcement is common when people use everyday preferred activities and treats as rewards in self-management programs. Although everyday expectations and enjoyments are easy to deliver, a person may find it difficult to withhold things he is used to enjoying on a regular basis. For example, a man who is used to unwinding at the end of the day by watching Baseball Tonight with a beer and some peanuts may not be consistent in making those daily treats contingent on meeting the requirements of his self-management program.

One method for combating this form of bootleg reinforcement is to stop making access to the activities or items the person routinely enjoyed before the self-management program contingent on meeting any performance criteria, and instead to make contingent alternative items or activities that are a cut above the usual. For example, each time the man in the previous scenario meets his performance criteria, he could replace his everyday brand of beer with his choice of a specialty beer from a collection reserved in the back of the refrigerator.

If Necessary, Put Someone Else in Control of Delivering Consequences

Most ineffective self-management programs fail, not because the controlling behavior is ineffective in controlling the controlled behavior, but because the contingencies that control the controlling behavior are not sufficiently strong. In other words, the person does not emit the controlling behavior consistently enough for its effects to be realized. What keeps a person from rationalizing that she did most of the behaviors she was supposed to and obtaining a self-determined reward anyway? What keeps a person from failing to deliver a self-determined aversive consequence? Too often, the answer to both questions is: nothing.

A person who really wants to change her behavior but has difficulty following through with delivering self-determined consequences should enlist another person to serve as performance manager. A self-manager can ensure that her self-designed consequences will be administered faithfully by creating a contingency in which the consequence for failing to meet the performance criterion is aversive to her but a reinforcer for the person she has asked to implement the consequence. And, if the first person put in charge of the contingency fails to implement it as planned, the self-manager should find another person who will do the job. Malott and Harrison (2002) wrote,

Christie wanted to walk on her dust-gathering treadmill 20 minutes a day, 6 days a week. She tried her husband as her performance contractor, but he wasn’t tough enough; he always felt sorry for her. So she fired him and hired her son; she made his bed every time she failed to do her 20 minutes, and he showed no mercy. (p. 18-7)

Kanfer (1976) called this kind of self-management decisional self-control: The person makes the initial decision to alter her behavior and plans how that will be accomplished, but then turns over the procedure to a second party in order to avoid the possibility of not emitting the controlling response. Kanfer made a distinction between decisional self-control and protracted self-control, in which a person consistently engages in self-deprivation in order to effect the desired behavior change. Bellack and Hersen (1977) stated that decisional self-control “is generally considered to be less desirable than protracted self-control, as it does not provide the individual with an enduring skill or resource” (p. 111).

We disagree that a self-management program that involves the help of other people is less desirable than one in which the self-manager does everything. First, a self-management program in which the contingencies are turned over to someone else may be more effective than trying to “go it alone” because that other person is more consistent in applying the consequences. In addition, as a result of experiencing a successful self-management program in which he has chosen the target behavior, set performance criteria, determined a self-monitoring/self-evaluation system, and arranged to have someone else administer self-designed consequences, a person has acquired a considerable repertoire of self-management skills for future use.

Keep It Simple

A person should not create elaborate self-management contingencies if they are not needed. The same general rule that applies to behavior change programs designed on behalf of others—the least complicated and intrusive yet effective intervention should be employed—also applies to self-management programs. With respect to the use of self-administered consequences, Bellack and Schwartz (1976) warned that,

Adding complicated procedures where they are not required is more likely to have negative than positive effects. Second, our experience indicates that many individuals find explicit self-reinforcement procedures to be tedious, childish, and “gimmicky.” (p. 137)

There is no need for self-reinforcement procedures to be complicated. And in our experience, more people find creating and implementing their own performance management contingencies fun than find them tedious or childish.

Other Self-Management Tactics

Other self-management strategies have been the subject of behavior analysis research but are not so easily classified according to the four-term contingency. They include self-instruction, habit reversal, self-directed systematic desensitization, and massed practice.

Self-Instruction

People talk to themselves all the time, offering encouragement (e.g., “You can do this; you’ve done it before”), congratulations (e.g., “Great shot, Daryl! You just crushed that 5-iron!”), and admonishment (e.g., “Don’t say that anymore; you hurt her feelings”) for their behavior, as well as specific instructions (e.g., “Pull the bottom rope through the middle”). Such self-statements can function as controlling responses—verbal mediators—that affect the occurrence of other behaviors. Self-instruction consists of self-generated verbal responses, covert or overt, that function as response prompts for a desired behavior. As a self-management tactic, self-instructions are often used to guide a person through a behavior chain or sequence of tasks.

Bornstein and Quevillon (1976) conducted a study frequently cited as evidence of the positive and lasting effects of self-instruction. They taught three hyperactive preschool boys a series of four types of self-instructions designed to keep them on task with classroom activities:

1. Questions about the assigned task (e.g., “What does the teacher want me to do?”)

2. Answers to the self-directed questions (e.g., “I’m supposed to copy that picture”)

3. Verbalizations to guide the child through the task at hand (e.g., “OK, first I draw a line through here . . .”)

4. Self-reinforcement (e.g., “I really did that one well”)


During a 2-hour session the children were taught to use self-instructions using a sequence of training steps originally developed by Meichenbaum and Goodman (1971):

1. The experimenter modeled the task while talking aloud to himself.

2. The child performed the task while the experimenter provided the verbal instructions.

3. The child performed the task while talking aloud to himself with the experimenter whispering the instructions softly.

4. The child performed the task while whispering softly to himself with the experimenter moving his lips but making no sound.

5. The child performed the task while making lip movements but no sound.

6. The child performed the task while guiding his performance with covert instructions. (adapted from p. 117)

During the training session a variety of classroom tasks were used, ranging from simple motor tasks such as copying lines and figures to more complex tasks such as block design and grouping tasks. The children showed a marked increase in on-task behavior immediately after receiving self-instruction training, and their improved behavior was maintained over a considerable period of time. The authors suggested that the generalization obtained from the training setting to the classroom could have been the result of telling the children during training to imagine that they were working with their teacher, not the experimenter. The authors hypothesized that a behavioral trapping phenomenon (Baer & Wolf, 1970) may have been responsible for the maintenance of on-task behavior; that is, the self-instructions may have initially produced better behavior, which in turn produced teacher attention that maintained the on-task behavior.

Although some studies evaluating self-instruction failed to reproduce the impressive results obtained by Bornstein and Quevillon (e.g., Billings & Wasik, 1985; Friedling & O’Leary, 1979), other studies have produced generally encouraging results (Barkley, Copeland, & Sivage, 1980; Burgio, Whitman, & Johnson, 1980; Hughes, 1992; Kosiewicz, Hallahan, Lloyd, & Graves, 1982; Peters & Davies, 1981; Robin, Armel, & O’Leary, 1975). Self-instruction training has increased the frequency of high school students’ initiating conversations with familiar and unfamiliar peers (Hughes, Harmer, Killian, & Niarhos, 1995; see Figure 5).

Employees with disabilities have learned to self-manage their work performance by providing their own verbal prompts and self-instructions (Hughes, 1997). For example, Salend, Ellis, and Reynolds (1989) used a self-instruction strategy to teach four adults with severe mental retardation to “talk while you work.” Productivity increased dramatically and error rates decreased when the women verbalized to themselves, “Comb up, comb down, comb in bag, bag in box” while packaging combs in plastic bags. Hughes and Rusch (1989) taught two supported employees working at a janitorial supply company how to solve problems by using a self-instruction procedure consisting of four statements:

1. Statement of the problem (e.g., “Tape empty”)

2. Statement of the response needed to solve the problem (e.g., “Need more tape”)

3. Self-report (e.g., “Fixed it”)

4. Self-reinforcement (e.g., “Good”)

O’Leary and Dubey (1979) summarized their review of self-instruction training by suggesting four factors that appear to influence its effectiveness with children.

Self-instructions appear to be effective self-controlling procedures if the children actually implement the instructional procedure, if the children use them [the self-instructions] to influence behavior at which they are skilled, if children have been reinforced for adhering to their self-instructions in the past, and if the focus of the instructions is the behavior most subject to consequences. (p. 451)

Habit Reversal

In his initial discussion of self-control, Skinner (1953) included “doing something else” as a self-management tactic. In an interesting application of “doing something else,” Robin, Schneider, and Dolnick (1976) taught 11 primary-aged children with emotional and behavioral disorders to control their aggressive behaviors by using the turtle technique: The children pulled their arms and legs close to their bodies, put their heads down on their desks, relaxed their muscles, and imagined they were turtles. The children were taught to use the turtle response whenever they believed an aggressive exchange with someone else was about to take place, when they were angry with themselves and felt they were about to have a tantrum, or when the teacher or a classmate called out, “Turtle!”

Azrin and Nunn (1973) developed an intervention they called habit reversal, in which clients are taught to self-monitor their nervous habits and interrupt the behavior chain as early as possible by engaging in behavior incompatible with the problem behavior (i.e., doing something else). For example, when a nail biter observes herself beginning to bite her fingernails, she might squeeze her hand into a tight fist for 2 or 3 minutes (Azrin, Nunn, & Frantz, 1980) or sit on her hands (Long, Miltenberger, Ellingson, & Ott, 1999). As a clinical intervention, habit reversal is typically implemented as a multiple-component treatment package that includes self-awareness training involving response detection and procedures for identifying events that precede and trigger the response, competing response training, motivation techniques including self-administered consequences, social support systems, and procedures for promoting the generalization and maintenance of treatment gains (Long et al., 1999). Habit reversal has proven to be a highly effective self-management tactic for a wide variety of problem behaviors. For a review of habit reversal procedures and research, see Miltenberger, Fuqua, and Woods (1998).

Figure 8 Sequence of imaginary scenes concerning fear of cats that could be used for systematic self-desensitization.

Directions
1. You’re sitting in a comfortable chair in the safety of your home watching TV.
2. You’re watching a commercial for cat food—no cat is visible.
3. The commercial continues and a cat is now eating the food.
4. A man is now petting the cat.
5. A man is holding the cat and fondling it.
6. A woman is holding the cat, and the cat is licking her hands and face.
7. You’re looking out the window of your home and you see a cat on the lawn across the street.
8. You’re sitting in front of your house, and you see a cat walk by on the sidewalk across the street.
9. You’re sitting in your yard, and you see a cat walk by on your sidewalk.
10. A cat walks within 15 feet of you.
11. A friend of yours picks the cat up and plays with it.
12. Your friend is 10 feet away from you, and the cat is licking his face.
13. Your friend comes within 5 feet of you while he’s holding the cat.
14. Your friend stands 2 feet away and plays with the cat.
15. Your friend asks if you’d like to pet the cat.
16. Your friend reaches out and offers you the cat.
17. He puts the cat on the ground, and it walks over to you.
18. The cat rubs up against your leg.
19. The cat walks between your legs purring.
20. You reach down and touch the cat.
21. You pet the cat.
22. You pick up the cat and pet it. (p. 71)

From “Self-Directed Systematic Desensitization: A Guide for the Student, Client, and Therapist” by W. W. Wenrich, H. H. Dawley, and D. A. General, 1976, p. 71. Copyright 1976 by Behaviordelia, Kalamazoo, MI. Used by permission.

Self-Directed Systematic Desensitization

Systematic desensitization is a widely used behavior therapy treatment for anxieties, fears, and phobias that features the self-management strategy of engaging in an alternative behavior (i.e., doing something else). Originally developed by Wolpe (1958, 1973), systematic desensitization involves substituting one behavior, generally muscle relaxation, for the unwanted behavior—the fear and anxiety. The client develops a hierarchy of situations from the least to the most fearful and then learns to relax while imagining these anxiety-producing situations, first the least fearful situation, then the next fearful one, and so on. Figure 8 shows an anxiety-stimulus hierarchy that a person might develop in attempting to control a fear of cats. When a person is able to go completely through his hierarchy, imagining each scene in detail while maintaining deep relaxation and feeling no anxiety, he begins to expose himself gradually to real-life (in vivo) situations.

Detailed procedures for achieving deep muscle relaxation, constructing and validating a hierarchy of anxiety- or fear-producing situations, and implementing a self-directed systematic desensitization program can be found in Martin and Pear (2003) and Wenrich, Dawley, and General (1976).

Massed Practice

Forcing oneself to perform an undesired behavior again and again, a technique called massed practice, will sometimes decrease the future frequency of the behavior.


Wolff (1977) reported an interesting case of this form of treatment by a 20-year-old woman who engaged in a compulsive, ritualized routine of 13 specific security checks every time she entered her apartment (e.g., looking under beds, checking the closets, looking in the kitchen). She began her program by deliberately going through the 13 steps in an exact order and then repeating the complete ritual four more times. After doing this for 1 week, she permitted herself to check the apartment if she wanted to but made herself go through the entire routine five times whenever she did any checking at all. She soon quit performing her compulsive checking behavior.

Suggestions for Conducting an Effective Self-Management Program

Incorporating the suggestions that follow into the design and implementation of self-management programs should increase the likelihood of success. Although none of these guidelines has been rigorously examined through experimental analyses—research on self-management has a long way to go—each suggestion is consistent with procedures proven effective in other areas of applied behavior analysis and with “best practices” commonly reported in the self-management literature (e.g., Agran, 1997; Malott & Harrison, 2002; Martin & Pear, 2003; Watson & Tharp, 2007).

1. Specify a goal and define the behavior to be changed.

2. Begin self-monitoring the behavior.

3. Contrive contingencies that will compete with natural contingencies.

4. Go public with your commitment to change your behavior.

5. Get a self-management partner.

6. Continually evaluate your self-management program and redesign it as necessary.

Specify a Goal and Define the Target Behavior

A self-management program begins with identifying a personal goal or objective and the specific behavior changes necessary to accomplish that goal or objective. A person can use most of the questions and issues that practitioners should consider when selecting target behaviors for students or clients to assess the social significance and prioritize the importance of a list of self-determined target behavior changes.

Begin Self-Monitoring the Behavior

A person should begin self-monitoring as soon as he has defined the target behavior. Self-monitoring prior to implementing any other form of intervention yields the same benefits as does taking baseline data:

1. Self-monitoring makes a person observant of events occurring before and after the target behavior; information about antecedent-behavior-consequent correlations that may be helpful in designing an effective intervention.

2. Self-monitored baseline data provide valuable guidance in determining initial performance criteria for self-administered consequences.

3. Self-monitored baseline data provide an objective basis for evaluating the effects of any subsequent interventions.

Another reason to begin self-monitoring as soon as possible without employing additional self-management tactics is that the desired improvement in behavior may be achieved by self-monitoring alone.
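As an illustration of the record keeping this step entails, the sketch below (a hypothetical structure; the class and field names are ours) logs each occurrence of the target behavior together with the events that preceded and followed it, so antecedent-behavior-consequent patterns can be inspected when designing the intervention.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SelfMonitor:
    """Minimal self-monitoring log (an illustrative sketch, not a clinical tool)."""
    entries: List[dict] = field(default_factory=list)

    def record(self, antecedent: str, behavior: str, consequence: str) -> None:
        # Each entry captures the A-B-C sequence for one occurrence.
        self.entries.append(
            {"antecedent": antecedent, "behavior": behavior, "consequence": consequence}
        )

    def daily_count(self) -> int:
        return len(self.entries)

log = SelfMonitor()
log.record("stressful phone call", "smoked cigarette", "brief relief")
log.record("coffee break", "smoked cigarette", "chat with coworker")
print(log.daily_count())  # 2
```

Even this bare count, graphed daily, constitutes baseline data and may by itself produce the reactive improvement the text describes.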

Create Contrived Contingencies That Will Compete with Ineffective Natural Contingencies

When self-monitoring alone does not result in the desired behavior changes, the next step is designing a contrived contingency to compete with the ineffective natural contingencies. A person who implements a contingency that provides immediate, definite consequences for each occurrence (or perhaps, nonoccurrence) of the target behavior greatly increases the probability of obtaining a previously elusive self-management goal. For example, a smoker who self-records and reports each cigarette smoked to his self-management partner, who provides contingent praise, rewards, and penalties, has arranged immediate, frequent, and more effective consequences for reducing his smoking behavior than those provided by the natural contingency: the threat of lung cancer and emphysema in the future, neither of which becomes appreciably more immediate or likely from one cigarette to the next.

Go Public

The effectiveness of a self-management effort may be enhanced by publicly sharing the intentions of the program. When a person shares a goal or makes a prediction to others about her future behavior, she has arranged potential consequences—praise or condemnation—for her success or failure in meeting that goal. A person should state in specific terms what she intends to do and her deadline for completing it. Taking the idea of public commitment a step further, consider the powerful potential of public posting (e.g., an untenured junior faculty member could post a chart of his writing where his department chair or dean could see and comment on it).

Malott (1981) called this the “public spotlight principle of self-management.”

A public statement of goals improves performance. But just how do the social contingencies engaged by public commitment produce change? They increase the rewarding value of success and the aversive value of failure, I presume. But those outcomes are probably too delayed to directly reinforce problem solving. Instead they must be part of the rules the student self-states at crucial times: “If I don’t make my goal, I’ll look like a fool; but if I do make it, I’ll look pretty cool.” Such rules then function as cues for immediate self-reinforcement of on-task behavior and self-punishment of off-task behavior. (Volume II, No. 18, p. 5)

Skinner (1953) also theorized about the principles of behavior that operate when a person shares his self-management goal with important others.

By making it in the presence of people who supply aversive stimulation when a prediction is not fulfilled, we arrange consequences which are likely to strengthen the behavior resolved upon. Only by behaving as predicted can we escape the aversive consequences of breaking our resolution. (p. 237)

Get a Self-Management Partner

Setting up a self-management exchange is a good way to involve another person whose differential feedback about how a self-management project is going can be effective as a behavioral consequence. Two people, each of whom has a long-range goal or a regular series of tasks to do, can agree to talk to each other on a daily or weekly basis, as determined by the target behaviors and by each person’s progress. They can share the data from their self-monitoring and exchange verbal praise or admonishments and perhaps even more tangible consequences contingent on performance. Malott (1981) reported a successful self-management exchange in which he and a colleague paid each other $1 every time either person failed to complete any one of a series of self-determined daily exercise, housekeeping, and writing tasks. Each morning they spoke on the telephone, reporting their performance during the previous 24 hours.

To help themselves and each other study for exams and complete research and writing tasks, one group of doctoral students created a self-management group they called the Dissertation Club (Ferreri et al., 2006). At weekly meetings each group member shared data on the “scholarly behaviors” she had targeted (e.g., graphs of words written each day or number of hours studied). Group members received social support from one another in the form of encouragement to continue working hard and praise for accomplishments, they served as behavioral consultants to one another on the design of self-management interventions, and sometimes they administered rewards and fines to one another. All six members of the group wrote and successfully defended their dissertations within the timelines they had set.

Continually Evaluate and Redesign Program as Needed

Your self-management project may not work the first time you try it. And it will certainly fall apart from time to time, so be prepared with some scotch tape and bubble gum to put it back together again.

— Malott and Harrison (2002, p. 18-7)

The development and evaluation of most self-management programs reflects a pragmatic, data-based approach to personal problem solving more than it does rigorous research with an emphasis on experimental analysis and control. Like the researcher, however, the self-manager should be guided by the data. If the data show the program is not working satisfactorily, the intervention should be redesigned.

An A-B design is sufficient to evaluate the effects of most self-management projects. People who have learned to define, observe, record, and graph their own behavior can easily be taught how to evaluate their self-management efforts with self-experiments. A simple A-B design provides a straightforward accounting of the results in a before-and-after fashion that is usually sufficient for self-evaluation. Experimentally determining functional relations between self-managed interventions and their effects usually takes a back seat to the pragmatic goal of changing one’s behavior. However, the changing criterion design lends itself nicely, not only to the stepwise increments in performance that are often a part of improving personal performance, but also to a clearer demonstration and understanding of the relation between the intervention and changes in the target behavior.
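A before-and-after accounting of this kind is simple to compute. The sketch below is illustrative only (the function name is ours): it compares phase means for an A-B self-experiment, which documents change but, as the text notes, cannot by itself demonstrate a functional relation.

```python
def ab_summary(baseline, intervention):
    """Summarize an A-B self-experiment by comparing phase means.

    Illustrative sketch: a before-and-after accounting, not an
    experimental analysis of a functional relation.
    """
    mean = lambda xs: sum(xs) / len(xs)
    a, b = mean(baseline), mean(intervention)
    return {"baseline_mean": a, "intervention_mean": b, "change": b - a}

# Daily pages written before (A) and after (B) a self-management contingency:
summary = ab_summary([1, 2, 1, 0], [3, 4, 3, 5])
print(summary)  # change of 2.75 pages per day over baseline
```

Graphing the two phases on one chart, with the phase change marked, gives the same comparison visually and is the format most self-managers actually use.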

In addition to data-based evaluation, people should evaluate their self-management projects in terms of social validity dimensions (Wolf, 1978). A practitioner teaching self-management to others can aid his students’ or clients’ social validity assessments of their self-management efforts by providing checklists of questions covering topics such as how practical their intervention was, whether they felt their self-management program affected their behavior in any unmeasured ways, and whether they enjoyed the project.

One of the most important aspects of social validity for any behavior change program is the extent to which the results—measured changes in the target behavior—actually made a difference in the lives of the participants. One approach to assessing the social validity of the results of a self-management program is to collect data on what Malott and Harrison (2002) called benefit measures. For example, a person might measure:

• Number of pounds lost as a benefit measure of eating less or exercising more

• Improved lung capacity (peak airflow volume measured as cubic centimeters per second on an inexpensive device available at many pharmacies) as an outcome of reducing the number of cigarettes smoked

• Decrease in time it takes to run a mile as a benefit of number of miles run each day

• Lower resting heart rate and faster recovery time as a benefit of aerobic exercise

• Higher scores on foreign language practice exams as an outcome of studying

In addition to assessing the social validity of self-management interventions, the positive results of benefit measures can serve as consequences that reward and strengthen continued adherence to self-management.

An account of a self-management weight loss program that incorporated many of the tactics and suggestions described in this chapter is presented in Box 2, “Take a Load Off Me.”

Behavior Changes Behavior

Referring to a book on self-management he wrote for a popular audience, Epstein (1997) wrote,

A young man whose life is in disarray (he smokes, drinks, overeats, loses things, procrastinates, and so on) seeks advice from his parents, teachers, and friends, but no one can help. Then the young man remembers his Uncle Fred (modeled shamelessly after Fred Skinner), whose life always seemed to be in perfect harmony. In a series of visits, Uncle Fred reveals to him the three “secrets” of self-management, all Ms: Modify your environment, monitor your behaviors, and make commitments. Fred also reveals and explains the “self-management principle”: Behavior changes behavior. After each visit, the young man tries out a new technique, and his life is changed radically for the better. In one scene, he sees a classroom of remarkably creative and insightful children who have been trained in self-management techniques in a public school. It is fiction, of course, but the technology is well established and the possibilities are well within reach. (p. 563, emphasis in original)

Building upon Skinner’s (1953) conceptual analysis of self-control more than 50 years ago, behavior analysts have developed numerous self-management tactics and methods for teaching learners with diverse abilities how to apply them. Underlying all of these efforts and findings is the simple, yet profound principle that behavior changes behavior.


Box 2
Take a Load Off Me: A Self-Managed Weight-Loss Program

Joe was a 63-year-old man whose doctor had recently informed him that the 195 pounds on his 5-foot 11-inch frame must be reduced to 175 pounds or he was very likely to experience serious health problems. Although Joe obtained regular physical exercise by keeping up the lawn and garden, cutting and hauling firewood, feeding the rabbits, and doing myriad other chores that come with a farmhouse, his prodigious appetite—long considered one of the “best in the county”—had caught up with him. His doctor’s warning and the recent death of a high school classmate had scared Joe enough to sit down with his son to plan a self-management weight-loss program.

Joe’s program included antecedent-based tactics, self-monitoring, self-evaluation, contingency contracting, self-selected and self-administered consequences, and contingency management by significant others.

Goal

To reduce current weight of 195 pounds to 175 pounds at a rate of 1 pound per week.

Behavior Change Needed to Achieve Goal

Reduce eating to a maximum of 2,100 calories per day.

Rules and Procedures

1. Mount bathroom scale each morning before dressing or eating and chart weight on “Joe’s Weight” graph taped to bathroom mirror.

2. Carry notepad, pencil, and calorie counter in pocket throughout the day and record the type and amount of all food and liquid (other than water) immediately after consumption.

3. Before going to bed each night, add total calories consumed during the day and chart that number on “Joe’s Eating” graph taped to bathroom mirror.

4. Do not waver from Steps 1–3, regardless of weight loss or lack of weight loss.

Immediate Contingencies/Consequences

• Calorie counter, notepad, and pencil in pocket provide continuously available response prompts.

• Recording all food and drink consumed provides an immediate consequence.

• When you have not eaten any food you thought about eating, put a star on your notepad and give self-praise (“Way to go, Joe!”).

Daily Contingencies/Consequences

• If the calories consumed during the day do not exceed the 2,100 criterion, put 50 cents in “Joe’s Garden Jar.”

• If total calories exceed the criterion, remove $1.00 from “Joe’s Garden Jar.”

• For each 3 days in a row that you meet the calorie criterion, add a bonus of 50 cents to “Joe’s Garden Jar.”

• Have Helen initial your contract each day that you meet the calorie criterion.
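The daily money contingencies translate naturally into a simple rule. The sketch below is a hypothetical rendering (the function name and streak mechanics are ours; the dollar amounts are those stated in the box): it applies the 50-cent reward, the three-day bonus, and the $1.00 fine to a sample run of daily calorie totals.

```python
def daily_jar_change(calories, criterion=2100, streak=0):
    """Apply the daily 'Garden Jar' contingencies (illustrative sketch).

    Returns (dollars added to the jar, updated run of criterion days).
    """
    if calories <= criterion:
        streak += 1
        change = 0.50                # criterion met: add 50 cents
        if streak % 3 == 0:          # every 3 criterion days in a row: 50-cent bonus
            change += 0.50
    else:
        streak = 0                   # over the limit: streak resets
        change = -1.00               # and $1.00 is removed
    return change, streak

jar, streak = 0.0, 0
for day_total in [2000, 1900, 2050, 2300, 2080]:   # five days of calorie totals
    change, streak = daily_jar_change(day_total, streak=streak)
    jar += change
print(round(jar, 2))  # 1.5
```

Three criterion days (with one bonus), one over-limit day, and one more criterion day net $1.50 toward the spring garden fund.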

Weekly Contingencies/Consequences

• Each Sunday night write the calories consumed each day for the previous week on one of the predated, addressed, and stamped postcards. Helen will verify your self-report by initialing the postcard and mailing it to Bill and Jill on Monday.

• Each Monday, if total calories consumed met the criterion for at least 6 of the past 7 days, obtain an item or activity from “Joe’s Reward Menu.”

Intermediate Contingencies/Consequences

• Eat within the daily calorie limit often enough and there will be sufficient money in “Joe’s Garden Jar” to buy seeds and plants for the spring vegetable garden.

• If weight during May visit to Ohio represents a loss of at least 1 pound per week, be a guest of honor at a restaurant of your choice.

Long-Term Consequences

• Feel better.

• Look better.

• Be healthier.

• Live longer.

Results

Joe often struggled with the rules and procedures, but he stuck with his self-management program and lost 22 pounds (achieving a weight of 173) in 16 weeks. As this report is being written, 26 years have passed since Joe’s adventure in self-management. Joe can still hold his own at the dinner table, but he has retained his weight loss and enjoys gardening, singing in a barbershop choir, and listening to the Cubs on the radio.



Summary

The “Self” As Controller of Behavior

1. We tend to assign causal status to events that immediately precede behavior, and when causal variables are not readily apparent in the immediate, surrounding environment, the tendency to turn to internal causes is strong.

2. Hypothetical constructs such as willpower and drive are explanatory fictions that bring us no closer to understanding the behaviors they claim to explain and lead to circular reasoning.

3. Skinner (1953) conceptualized self-control as a two-response phenomenon: The controlling response affects variables in such a way as to change the probability of the other, the controlled response.

4. We define self-management as the personal application of behavior change tactics that produces a desired change in behavior.

5. Self-management is a relative concept. A behavior change program may entail a small degree of self-management or be totally conceived, designed, and implemented by the person.

6. Although self-control and self-management are used interchangeably in the behavioral literature, we recommend that self-management be used in reference to a person acting in some way in order to change his subsequent behavior.

• Self-control implies that the ultimate control of behavior lies within the person, but the causal factors for “self-control” are to be found in a person’s experiences with his environment.

• Self-control “seems to suggest controlling a [separate] self inside or [that there is] a self inside controlling external behavior” (Baum, 1994, p. 157).

• Self-control is also used to refer to a person’s ability to “delay gratification” by responding to achieve a delayed, but larger or higher quality reward instead of acting to obtain an immediate, less valuable reward.

Applications, Advantages, and Benefits of Self-Management

7. Four uses of self-management are to

• live a more effective and efficient daily life,

• break bad habits and acquire good ones,

• accomplish difficult tasks, and

• achieve personal lifestyle goals.

8. Advantages and benefits of learning and teaching self-management skills include the following:

• Self-management can influence behaviors not accessible to external change agents.

• External change agents often miss important instances of behavior.

• Self-management can promote the generalization and maintenance of behavior change.

• A small repertoire of self-management skills can control many behaviors.

• People with diverse abilities can learn self-management skills.

• Some people perform better under self-selected tasks and performance criteria.

• People with good self-management skills contribute to more efficient and effective group environments.

• Teaching students self-management skills provides meaningful practice for other areas of the curriculum.

• Self-management is an ultimate goal of education.

• Self-management benefits society.

• Self-management helps a person feel free.

• Self-management feels good.

Antecedent-Based Self-Management Tactics

9. Antecedent-based self-management tactics feature the manipulation of events or stimuli antecedent to the target (controlled) behavior, such as the following:

• Manipulating motivating operations to make a desired (or undesired) behavior more (or less) likely

• Providing response prompts

• Performing the initial steps of a behavior chain to ensure being confronted later with a discriminative stimulus that will evoke the desired behavior

• Removing the materials required for an undesired behavior

• Limiting an undesired behavior to restricted stimulus conditions

• Dedicating a specific environment for a desired behavior

Self-Monitoring

10. Self-monitoring is a procedure whereby a person observes and responds to, usually by recording, the behavior he is trying to change.

11. Originally developed as a method of clinical assessment for collecting data on behaviors that only the client could observe, self-monitoring evolved into the most widely used and studied self-management strategy because it often results in desired behavior change.

12. Self-monitoring is often combined with goal setting and self-evaluation. A person using self-evaluation compares her performance with a predetermined goal or standard.

13. Self-monitoring is often part of an intervention that includes reinforcement for meeting either self- or teacher-selected goals.


14. It is difficult to determine exactly how self-monitoring works because the procedure necessarily includes, and is therefore confounded by, private events (covert verbal behavior); it often includes either explicit or implicit contingencies of reinforcement.

15. Children can be taught to self-monitor and self-record their behavior accurately by means of a faded matching technique, in which the child is rewarded initially for producing data that match the teacher's or parent's data. Over time the child is required to match the adult's record less often, eventually monitoring the behavior independently.
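The faded matching technique in item 15 amounts to a simple agreement check between the child's self-recorded data and the adult's independent record. The sketch below is purely illustrative: the function names, the example counts, and the ±1 tolerance are our assumptions, not from the text. It shows one way match accuracy might be computed across recording intervals.

```python
# Hypothetical sketch of a faded matching check for self-recorded data.
# The +/-1 tolerance and all names are illustrative assumptions.

def agreement(child_count: int, adult_count: int, tolerance: int = 1) -> bool:
    """A child's self-recorded count 'matches' if it falls within
    `tolerance` of the adult's independent count for that interval."""
    return abs(child_count - adult_count) <= tolerance

def match_accuracy(child_counts, adult_counts, tolerance: int = 1) -> float:
    """Percentage of recording intervals in which the child's data
    matched the adult's data."""
    matches = sum(
        agreement(c, a, tolerance) for c, a in zip(child_counts, adult_counts)
    )
    return 100.0 * matches / len(child_counts)

# Example: counts recorded by child and adult across five intervals.
child = [4, 6, 3, 5, 7]
adult = [4, 5, 3, 6, 9]
print(match_accuracy(child, adult))  # 4 of 5 intervals within +/-1 -> 80.0
```

In a fading arrangement, the adult would run this check on fewer and fewer intervals as accuracy stays high, until the child is monitoring independently.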

16. Accuracy of self-monitoring is neither necessary nor sufficient to achieve improvement in the behavior being monitored.

17. Suggested guidelines for self-monitoring are as follows:

• Provide materials that make self-monitoring easy.

• Provide supplementary cues or prompts.

• Self-monitor the most important dimension of the target behavior.

• Self-monitor early and often, but do not interrupt the flow of a desired behavior targeted for increase.

• Reinforce accurate self-monitoring.

Self-Administered Consequences

18. As a technical term, self-reinforcement (like self-punishment) is a misnomer. Although behavior can be changed by self-administered consequences, the variables influencing the controlling response make such self-management tactics more than a straightforward application of operant reinforcement.

19. Self-administered contingencies analogous to positive and negative reinforcement and positive and negative punishment can be incorporated into self-management programs.

20. When designing self-management programs involving self-administered consequences, a person should:

• Select small, easy-to-deliver consequences.

• Set a meaningful but easy-to-meet criterion for reinforcement.

• Eliminate “bootleg reinforcement.”

• If necessary, put someone else in control of delivering consequences.

• Use the least complicated and intrusive contingencies that will be effective.

Other Self-Management Tactics

21. Self-instructions (talking to oneself) can function as controlling responses (verbal mediators) that affect the occurrence of other behaviors.

22. Habit reversal is a multiple-component treatment package in which clients are taught to self-monitor their unwanted habits and interrupt the behavior chain as early as possible by engaging in a behavior incompatible with the problem behavior.

23. Systematic desensitization is a behavior therapy treatment for anxieties, fears, and phobias that involves substituting one behavior, generally muscle relaxation, for the unwanted behavior (the fear and anxiety). Self-directed systematic desensitization involves developing a hierarchy of situations from the least to the most fearful and then learning to relax while imagining these anxiety-producing situations, first the least fearful situation, then the next fearful one, and so on.

24. Massed practice, forcing oneself to perform an undesired behavior again and again, can decrease the future frequency of the behavior.

Suggestions for Conducting an Effective Self-Management Program

25. Following are six steps in designing and implementing a self-management program:

Step 1: Specify a goal and define the behavior to be changed.

Step 2: Begin self-monitoring the behavior.

Step 3: Create contingencies that will compete with natural contingencies.

Step 4: Go public with the commitment to change behavior.

Step 5: Get a self-management partner.

Step 6: Continually evaluate and redesign the program as needed.

Behavior Changes Behavior

26. The most fundamental principle of self-management is that behavior changes behavior.



Generalization and Maintenance of Behavior Change

Key Terms

behavior trap, contrived contingency, contrived mediating stimulus, general case analysis, generalization, generalization across subjects, generalization probe, generalization setting, indiscriminable contingency, instructional setting, lag reinforcement schedule, multiple exemplar training, naturally existing contingency, programming common stimuli, response generalization, response maintenance, setting/situation generalization, teaching sufficient examples, teaching loosely

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List©, Third Edition

Content Area 3: Principles, Processes, and Concepts

3-12 Define and provide examples of generalization and discrimination.

9-28 Use behavior change procedures to promote stimulus and response generalization.

9-29 Use behavior change procedures to promote maintenance.

From Chapter 28 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


Sherry's teacher implemented an intervention that helped Sherry to complete each part of multiple-part, in-school assignments before submitting them and beginning another activity. Now, three weeks after the program ended, most of the work Sherry submits as "finished" is incomplete and her stick-with-a-task-until-it's-finished behavior is as poor as it was before the intervention began.

Ricardo has just begun his first competitive job working as a copy machine operator in a downtown business office. In spite of his long history of distractibility and poor endurance, Ricardo had learned to work independently for several hours at a time in the copy room at the vocational training center. His employer, however, is complaining that Ricardo frequently stops working after a few minutes to seek attention from others. Ricardo may soon lose his job.

Brian is a 10-year-old boy diagnosed with autism. In an effort to meet an objective on his individualized education program that targets functional language and communication skills, Brian's teacher taught him to say, "Hello, how are you?" as a greeting. Now, whenever Brian meets anyone, he invariably responds with, "Hello, how are you?" Brian's parents are concerned that their son's language seems stilted and parrot-like.

Each of these three situations illustrates a common type of teaching failure. The most socially significant behavior changes are those that last over time, are used by the learner in all relevant settings and situations, and are accompanied by changes in other relevant responses. The student who learns to count money and make change in the classroom today must be able to count and make change at the convenience store tomorrow and at the supermarket next month. The beginning writer who has been taught to write a few good sentences in school must be able to write many more meaningful sentences when writing notes or letters to family or friends. To perform below this standard is more than just regrettable; it is a clear indication that the initial instruction was not entirely successful.

In the first scenario, the mere passage of time resulted in Sherry losing her ability to complete assignments. A change of scenery threw Ricardo off his game; the excellent work habits he had acquired at the vocational training center disappeared completely when he arrived at the community job site. Although Brian used his new greeting skill, its restricted form was not serving him well in the real world. In a very real sense, the instruction they received failed all three of these people.

Applied behavior analysts face no more challenging or important task than that of designing, implementing, and evaluating interventions that produce generalized outcomes. This chapter defines the major types of generalized behavior change and describes the strategies and tactics researchers and practitioners use most often to promote them.

Generalized Behavior Change: Definitions and Key Concepts

When Baer, Wolf, and Risley (1968) described the emerging field of applied behavior analysis, they included generality of behavior change as one of the discipline's seven defining characteristics.

A behavior change may be said to have generality if it proves durable over time, if it appears in a wide variety of possible environments, or if it spreads to a wide variety of related behaviors. (p. 96)

In their seminal review paper, "An Implicit Technology of Generalization," Stokes and Baer (1977) also stressed those three facets of generalized behavior change (across time, settings, and behaviors) when they defined generalization as

the occurrence of relevant behavior under different, non-training conditions (i.e., across subjects, settings, people, behaviors, and/or time) without the scheduling of the same events in those conditions. Thus, generalization may be claimed when no extratraining manipulations are needed for extratraining changes; or may be claimed when some extra manipulations are necessary, but their cost is clearly less than that of the direct intervention. Generalization will not be claimed when similar events are necessary for similar effects across conditions. (p. 350)

Stokes and Baer's pragmatic orientation toward generalized behavior change has proven useful for applied behavior analysis. They stated simply that if a trained behavior occurs at other times or in other places without it having to be retrained completely at those times or in those places, or if functionally related behaviors occur that were not taught directly, then generalized behavior change has occurred. The following sections provide definitions and examples of the three basic forms of generalized behavior change: response maintenance, setting/situation generalization, and response generalization. Box 1, "Perspectives on the Sometimes Confusing and Misleading Terminology of Generalization," discusses the many and varied terms applied behavior analysts use to describe these outcomes.

Box 1. Perspectives on the Sometimes Confusing and Misleading Terminology of Generalization

Applied behavior analysts have used many terms to describe behavior changes that appear as adjuncts or by-products of direct intervention. Unfortunately, the overlapping and multiple meanings of some terms can lead to confusion and misunderstanding. For example, maintenance, the most frequently used term for behavior changes that persist after an intervention has been withdrawn or terminated, is also the most common name for a condition in which treatment has been discontinued or partially withdrawn. Applied behavior analysts should distinguish between response maintenance as a measure of behavior (i.e., a dependent variable) and maintenance as the name for an environmental condition (i.e., an independent variable). Other terms found in the behavior analysis literature for continued responding after programmed contingencies are no longer in effect include durability, behavioral persistence, and (incorrectly) resistance to extinction.*

Terms used in the applied behavior analysis literature for behavior changes that occur in nontraining settings or stimulus conditions include stimulus generalization, setting generalization, transfer of training, or simply, generalization. It is technically incorrect to use stimulus generalization to refer to the generalized behavior change achieved by many applied interventions. Stimulus generalization refers to the phenomenon in which a response that has been reinforced in the presence of a given stimulus occurs with an increased frequency in the presence of different but similar stimuli under extinction conditions (Guttman & Kalish, 1956; see Chapter 17). Stimulus generalization is a technical term referring to a specific behavioral process, and its use should be restricted to those instances (Cuvo, 2003; Johnston, 1979).

Terms such as collateral or side effects, response variability, induction, and concomitant behavior change are often used to indicate the occurrence of behaviors that have not been trained directly. To further complicate matters, generalization is often used as a catchall term to refer to all three types of generalized behavior change.

Johnston (1979) discussed some problems caused by using generalization (the term for a specific behavioral process) to describe any desirable behavior change in a generalization setting.

This kind of usage is misleading in that it suggests that a single phenomenon is at work when actually a number of different phenomena need to be described, explained, and controlled. . . . Carefully designing procedures to optimize the contributions of stimulus and response generalization would hardly exhaust our repertoire of tactics for getting the subject to behave in a desirable way in noninstructional settings. Our successes will be more frequent when we realize that maximizing behavioral influence in such settings requires careful consideration of all behavioral principles and processes. (pp. 1–2)

Inconsistent use of the "terminology of generalization" can lead researchers and practitioners to incorrect assumptions and conclusions regarding the principles and processes responsible for the presence or absence of generalized outcomes. Nevertheless, applied behavior analysts will probably continue to use generalization as a dual-purpose term, referring sometimes to types of behavior change and sometimes to behavioral processes that can bring such changes about. Stokes and Baer (1977) clearly indicated their awareness of the differences in definitions.

The notion of generalization developed here is an essentially pragmatic one; it does not closely follow the traditional conceptualizations (Keller & Schoenfeld, 1950; Skinner, 1953). In many ways, this discussion will sidestep much of the controversy concerning terminology. (p. 350)

While discussing the use of naturally existing contingencies of reinforcement to maintain and extend programmed behavior changes, Baer (1999) explained his preference for using the term generalization:

It is the best of the techniques described here and, interestingly, it does not deserve the textbook definition of "generalization." It is a reinforcement technique, and the textbook definition of generalization refers to unreinforced behavior changes resulting from other directly reinforced behavior changes. . . . [But] we are dealing with the pragmatic use of the word generalization, not the textbook meaning. We reinforce each other for using the word pragmatically, and it serves us well enough so far, so we shall probably maintain this imprecise usage. (p. 30, emphasis in original)

In an effort to promote the precise use of the technical terminology of behavior analysis and as a reminder that the phenomena of interest are usually products of multiple behavior principles and procedures, we use terms for generalized behavior change that focus on the type of behavior change rather than the principles or processes that bring it about.

*Response maintenance can be measured under extinction conditions, in which case the relative frequency of continued responding is described correctly in terms of resistance to extinction. However, using resistance to extinction to describe response maintenance in most applied situations is incorrect because reinforcement typically follows some occurrences of the target behavior in the post-treatment environment.

Response Maintenance

Response maintenance refers to the extent to which a learner continues to perform the target behavior after a portion or all of the intervention responsible for the behavior's initial appearance in the learner's repertoire has been terminated. For example:

• Sayaka was having difficulty identifying the lowest common denominator (LCD) when adding and subtracting fractions. Her teacher had Sayaka write the steps for finding the LCD on an index card and told her to refer to the card when needed. Sayaka began using the LCD cue card, and the accuracy of her math assignments improved. After using the cue card for a week, Sayaka said she no longer needed it and returned it to her teacher. The next day Sayaka correctly computed the LCD for every problem on a quiz on adding and subtracting fractions.

• On Loraine's first day on the job with a residential landscaping company, a coworker taught her how to use a long-handled tool to extract dandelions, root and all. Without further instruction, Loraine continues to use the tool correctly a month later.

• When he was in the seventh grade, one of Derek's teachers taught him how to write down his assignments and keep materials for each class in separate folders. As a college sophomore, Derek continues to apply those organizational skills to his academic work.
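The first example above turns on a concrete bit of arithmetic: the lowest common denominator of two fractions is the least common multiple of their denominators. As an illustrative aside (the function name is ours, not from the text), the steps on a cue card like Sayaka's reduce to:

```python
# A minimal sketch of the LCD computation described in the first example.
# The function name and interface are illustrative assumptions.
from math import gcd

def lowest_common_denominator(d1: int, d2: int) -> int:
    """The LCD of two fractions is the least common multiple of their
    denominators: d1 * d2 // gcd(d1, d2)."""
    return d1 * d2 // gcd(d1, d2)

# Example: to add 1/4 + 1/6, rewrite both fractions over the LCD, 12.
print(lowest_common_denominator(4, 6))  # 12
```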

These examples illustrate the relative nature of generalized behavior change. Response maintenance was evident in Sayaka's performance on a math quiz one day after the cue card intervention ended and also in Derek's continued use of the organizational skills he had learned years earlier. How long a newly learned behavior needs to be maintained depends on the importance of that behavior in the person's life. If covertly reciting a telephone number three times after hearing it enables a person to remember the number long enough to dial it correctly when he locates a telephone a few minutes later, sufficient response maintenance has been achieved. Other behaviors, such as self-care and social skills, must be maintained in a person's repertoire for a lifetime.

Setting/Situation Generalization

Setting/situation generalization occurs when a target behavior is emitted in the presence of stimulus conditions other than those in which it was trained directly. We define setting/situation generalization as the extent to which a learner emits the target behavior in a setting or stimulus situation that is different from the instructional setting. For example:

• While waiting for his new motorized wheelchair to arrive from the factory, Chaz used a computer simulation program and a joystick to learn how to operate his soon-to-arrive chair. When the new chair arrived, Chaz grabbed the joystick and immediately began zipping up and down the hall and spinning perfectly executed donuts.

• Loraine had been taught to pull weeds from flowerbeds and mulched areas. Although she had never been instructed to do so, Loraine has begun removing dandelions and other large weeds from lawns as she crosses them on her way to the flowerbeds.

• After Brandy's teacher taught her to read 10 different C-V-C-E words (e.g., bike, cute, made), Brandy could read C-V-C-E words for which she had not received any instruction (e.g., cake, bite, mute).

A study by van den Pol and colleagues (1981) provides an excellent example of setting/situation generalization. They taught three young adults with multiple disabilities to eat independently in fast-food restaurants. All three students had previously eaten in restaurants but could not order or pay for a meal without assistance. The researchers began by constructing a task analysis of the steps required to order, pay for, and eat a meal appropriately in a fast-food restaurant. Instruction took place in the students' classroom and consisted of role-playing each of the steps during simulated customer–cashier interactions and responding to questions about photographic slides showing customers at a fast-food restaurant performing the various steps in the sequence. The 22 steps in the task analysis were divided into four major components: locating, ordering, paying, and eating and exiting. After a student had mastered the steps in each component in the classroom, he was given "a randomly determined number of bills equaling two to five dollars and instructed to go eat lunch" at a local restaurant (p. 64). Observers stationed inside the restaurant recorded each student's performance of each step in the task analysis. The results of these generalization probes, which were also conducted before training (baseline) and after training (follow-up), are shown in Figure 1. In addition to assessing the degree of generalization from the classroom, which was based on the specific McDonald's restaurant used for most of the probes, the researchers conducted follow-up probes in a Burger King restaurant (also a measure of maintenance).
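The generalization probes in this study were scored as the percentage of task-analysis steps performed correctly. A minimal sketch of that scoring arithmetic follows; the function name and the example data are invented for illustration, and only the 22-step total comes from the study.

```python
# Illustrative sketch of scoring a generalization probe as the
# percentage of task-analysis steps performed correctly.
# The step data below are invented; only the 22-step total is
# taken from the van den Pol et al. (1981) task analysis.

def percent_steps_correct(step_results) -> float:
    """step_results: one boolean per task-analysis step,
    True if the step was performed correctly during the probe."""
    return 100.0 * sum(step_results) / len(step_results)

# Example probe: 11 of the 22 steps performed correctly.
probe = [True] * 11 + [False] * 11
print(percent_steps_correct(probe))  # 50.0
```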

Figure 1 Percentage of steps necessary to order a meal at a fast-food restaurant correctly performed by three students with disabilities before, during, and after instruction in the classroom. During follow-up, the closed triangles represent probes conducted at a Burger King restaurant using typical observation procedures, open triangles represent Burger King probes during which students did not know they were being observed, and open circles represent covert probes conducted in a different McDonald's 1 year after training.
From "Teaching the Handicapped to Eat in Public Places: Acquisition, Generalization and Maintenance of Restaurant Skills" by R. A. van den Pol, B. A. Iwata, M. T. Ivanic, T. J. Page, N. A. Neef, and F. P. Whitley, 1981, Journal of Applied Behavior Analysis, 14, p. 66. Copyright 1981 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

This study is indicative of the pragmatic approach to assessing and promoting generalized behavior change used by most applied behavior analysts. The setting in which generalized responding is desired can contain one or more components of the intervention that was implemented in the instructional environment, but not all of the components. If the complete intervention program is required to produce behavior change in a novel environment, then no setting/situation generalization can be claimed. However, if some component(s) of the training program results in meaningful behavior change in a generalization setting, then setting/situation generalization can be claimed, provided it can be shown that the component(s) used in the generalization setting was insufficient to produce the behavior change alone in the training environment.

For example, van den Pol and colleagues taught Student 3, who was deaf, how to use a prosthetic ordering device in the classroom. The device, a plastic laminated sheet of cardboard with a wax pencil, had preprinted questions (e.g., "How much is . . . ?"), generic item names (e.g., large hamburger), and spaces where the cashier could write responses. Simply giving the student some money and the prosthetic ordering card would not have enabled him to order, purchase, and eat a meal independently. However, after classroom instruction that included guided practice, role playing, social reinforcement ("Good job! You remembered to ask for your change" [p. 64]), corrective feedback, and review sessions with the prosthetic ordering card produced the desired behaviors in the instructional setting, Student 3 was able to order, pay for, and eat meals in a restaurant aided only by the card.

1 Because the majority of the examples in this chapter are school based, we have used the language of education. For our purposes here, instruction can be a synonym for treatment, intervention, or therapy, and instructional setting can be a synonym for clinical setting or therapy setting.

Distinguishing Between Instructional and Generalization Settings

We use instructional setting to denote the total environment where instruction occurs, including any aspects of the environment, planned or unplanned, that may influence the learner's acquisition and generalization of the target behavior.1 Planned elements are the stimuli and events the teacher has programmed in an effort to achieve initial behavior change and promote generalization. Planned elements of an instructional setting for a math lesson, for example, would include the specific math problems to be presented during the lesson and the format and sequencing of those problems. Unplanned aspects of the instructional setting are elements the teacher is not aware of or has not considered that might affect the acquisition and generalization of the target behavior. For example, the phrase how much in a word problem may acquire stimulus control over a student's use of addition, even when the correct solution to the problem requires a different arithmetic operation. Or, perhaps a student always uses subtraction for the first problem on each page of word problems because a subtraction problem has always been presented first during instruction.

Figure 2 Examples of an instructional setting and a generalization setting for six target behaviors.

Instructional Setting
1. Raising hand when special education teacher asks a question in the resource room.
2. Practicing conversational skills with speech therapist at school.
3. Passing basketball during a team scrimmage on home court.
4. Answering addition problems in vertical format at desk at school.
5. Solving word problems with no distracter numbers on homework assignment.
6. Operating package sealer at community job site in presence of supervisor.

Generalization Setting
1. Raising hand when general education teacher asks a question in the regular classroom.
2. Talking with peers in town.
3. Passing basketball during a game on the opponent's court.
4. Answering addition problems in horizontal format at desk at school.
5. Solving word problems with distracter numbers on homework assignment.
6. Operating package sealer at community job site in absence of supervisor.

A generalization setting is any place or stimulus situation that differs in some meaningful way from the instructional setting and in which performance of the target behavior is desired. There are multiple generalization settings for many important target behaviors. The student who learns to solve addition and subtraction word problems in the classroom should be able to solve similar problems at home, at the store, and on the ball diamond with his friends.

Examples of instructional and generalization settings for six target behaviors are shown in Figure 2. When a person uses a skill in an environment physically removed from the setting where he learned it (as with Behaviors 1 through 3 in Figure 2), it is easy to understand that event as an example of generalization across settings. However, many important generalized outcomes occur across more subtle differences between the instructional setting and generalization setting. It is a mistake to think that a generalization setting must be somewhere different from the place where instruction is provided. Students often receive instruction in the same place where they will need to maintain and generalize what they have learned. In other words, the instructional setting and generalization setting can, and often do, share the same physical location (as with Behaviors 4 through 6 in Figure 2).

Distinguishing Between Setting/Situation Generalization and Response Maintenance

Because any measure of setting/situation generalization is conducted after some instruction has taken place, it might be argued that setting/situation generalization and response maintenance are the same, or at least are inseparable phenomena. Most measures of setting/situation generalization do provide information on response maintenance, and vice versa. For example, the post-training generalization probes conducted by van den Pol and colleagues (1981) at the Burger King restaurant and at the second McDonald's provided data on setting/situation generalization (i.e., to novel restaurants) and on response maintenance of up to 1 year. However, a functional distinction exists between setting/situation generalization and response maintenance, with each outcome presenting a somewhat different set of challenges for programming and ensuring enduring behavior change. When a behavior change produced in the classroom or clinic is not observed in the generalization environment, a lack of setting/situation generalization is evident. When a behavior change produced in the classroom or clinic has occurred at least once in the generalization setting and then ceases to occur, a lack of response maintenance is evident.

An experiment by Koegel and Rincover (1977) illustrated the functional difference between setting/situation generalization and response maintenance. Participants were three young boys with autism; each was mute, echolalic, or displayed no appropriate contextual speech. One-to-one instructional sessions were conducted in a small room with the trainer and child seated across from each other at a table. Each child was taught a series of imitative responses (e.g., the trainer said, "Touch your [nose, ear]" or "Do this" and [raised his arm, clapped his hands]). Each 40-minute session consisted of blocks of 10 training trials in the instructional setting alternated with blocks of 10 trials conducted by an unfamiliar adult standing outside, surrounded by trees. All correct responses in the instructional setting were followed by candy and social praise. During the generalization trials the children received the same instructions and model prompts as in the classroom, but no reinforcement or other consequences were provided for correct responses in the generalization setting.

Figure 3 Correct responding by three children on alternating blocks of 10 trials in the instructional setting and 10 trials in the generalization setting.
From "Research on the Differences Between Generalization and Maintenance in Extra-Therapy Responding" by R. L. Koegel and A. Rincover, 1977, Journal of Applied Behavior Analysis, 10, p. 4. Copyright 1977 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Figure 3 shows the percentage of trials in which each child responded correctly in the instructional setting and in the generalization setting. All three children learned to respond to the imitative models in the instructional setting. All three children showed 0% correct responding in the generalization setting at the end of the experiment, but for different reasons. Child 1 and Child 3 began emitting correct responses in the generalization setting as their performances improved in the instructional setting, but their generalized responding was not maintained (most likely the result of the extinction conditions in effect in the generalization setting). The imitative responding acquired by Child 2 in the instructional setting never generalized to the outside setting. Therefore, the 0% correct responding at the experiment's conclusion represents a lack of response maintenance for Child 1 and Child 3, but for Child 2 it represents a failure of setting generalization.

Response Generalization

We define response generalization as the extent to which a learner emits untrained responses that are functionally equivalent to the trained target behavior. In other words, in response generalization forms of behavior for which no programmed contingencies have been applied appear as a function of the contingencies that have been applied to other responses. For example:

• Traci wanted to earn some extra money by helping her older brother with his lawn mowing business. Her brother taught Traci to walk the mower up and down parallel rows that moved progressively from one side of a lawn to the other. Traci discovered that she could mow some lawns just as quickly by first cutting around the perimeter of the lawn and then walking the mower in concentric patterns inward toward the center of the lawn.

• Loraine was taught to remove weeds with a long weed-removal tool. Although she has never been taught or asked to do so, sometimes Loraine removes weeds with a hand trowel or with her bare hands.

• Michael's mother taught him how to take phone messages by using the pencil and notepaper next to the phone to write the caller's name, phone number, and message. One day, Michael's mother came home and saw her son's tape recorder next to the phone. She pushed the play button and heard Michael's voice say, "Grandma called. She wants to know what you'd like her to cook for dinner Wednesday. Mr. Stone called. His number is 555-1234, and he said the insurance payment is due."

The study by Goetz and Baer (1973) of the block building by three preschool girls provides a good example of response generalization. During baseline the teacher sat by each girl as she played with the blocks, watching closely but quietly, and displaying neither enthusiasm nor criticism for any particular use of the blocks. During the next phase of the experiment, each time the child placed or rearranged the blocks to create a new form that had not appeared previously in that session's constructions, the teacher commented with enthusiasm and interest (e.g., "Oh, that's very nice—that's different!"). Another phase followed in which each repeated construction of a given form within the session was praised (e.g., "How nice—another arch!"). The study ended with a phase in which descriptive praise was again contingent on the construction of different block forms. All three children constructed more new forms with the blocks when form diversity was reinforced than they did under baseline or under the reinforcement-for-the-same-forms condition (see Figure 7).

Even though specific responses produced reinforcement (i.e., the actual block forms that preceded each instance of teacher praise), other responses sharing that functional characteristic (i.e., being different from block forms constructed previously by the child) increased in frequency as a function of the teacher's praise. As a result, during reinforcement for different forms, the children constructed new forms with the blocks even though each new form itself had never before appeared and therefore could not have been reinforced previously. Reinforcing a few members of the response class of new forms increased the frequency of other members of the same response class.

Generalization and Maintenance of Behavior Change

Generalized Behavior Change: A Relative and Intermixed Concept

As the examples presented previously show, generalized behavior change is a relative concept. We might think of it as existing along a continuum. At one end of the continuum are interventions that might produce a great deal of generalized behavior change; that is, after all components of an intervention have been terminated, the learner may emit the newly acquired target behavior, as well as several functionally related behaviors not observed previously in his repertoire, at every appropriate opportunity in all relevant settings, and he may do so indefinitely. At the other end of the continuum of generalized outcomes are interventions that yield only a small amount of generalized behavior change—the learner uses the new skill only in a limited range of nontraining settings and situations, and only after some contrived response prompts or consequences are applied.

We have presented each of the three primary forms of generalized behavior change individually to isolate its defining features, but they often overlap and occur in combination. Although it is possible to obtain response maintenance without generalization across settings/situations or behaviors (i.e., the target behavior continues to occur in the same setting in which it was trained after the training contingencies have been terminated), any meaningful measure of setting generalization will entail some degree of response maintenance. And it is common for all three forms of generalized behavior change to be represented in the same instance. For example, during a relatively quiet shift at the widget factory on Monday, Joyce's supervisor taught her to obtain assistance by calling out, "Ms. Johnson, I need some help." Later that week (response maintenance) when it was very noisy on the factory floor (setting/situation generalization), Joyce signaled her supervisor by waving her hand back and forth (response generalization).

Generalized Behavior Change Is Not Always Desirable

It is hard to imagine any behavior that is important enough to target for systematic instruction for which response maintenance would be undesirable. However, unwanted setting/situation generalization and response generalization occur often, and practitioners should design intervention plans to prevent or minimize such unwanted outcomes. Undesirable setting/situation generalization takes two common forms: overgeneralization and faulty stimulus control.

Overgeneralization, a nontechnical but effectively descriptive term, refers to an outcome in which the behavior has come under the control of a stimulus class that is too broad. That is, the learner emits the target behavior in the presence of stimuli that, although similar in some way to the instructional examples or situation, are inappropriate occasions for the behavior. For example, a student learns to spell division, mission, and fusion with the grapheme -sion. When asked to spell fraction, the student writes f-r-a-c-s-i-o-n.

With faulty stimulus control, the target behavior comes under the restricted control of an irrelevant antecedent stimulus. For example, after learning to solve word problems such as, "Natalie has 3 books. Amy has 5 books. How many books do they have in total?" by adding the numerals in the problem, the student adds the numerals in any problem that includes the words "in total" (e.g., "Corinne has 3 candies. Amanda and Corinne have 8 candies in total. How many candies does Amanda have?").2

Undesired response generalization occurs when any of a learner's untrained but functionally equivalent responses results in poor performance or undesirable outcomes. For example, although Jack's supervisor at the widget factory taught him to operate the drill press with two hands because that is the safest method, sometimes Jack operates the press with one hand. One-handed responses are functionally equivalent to two-handed responses because both topographies cause the drill press to stamp out a widget, but one-handed responses compromise Jack's health and the factory's safety record. Or, perhaps some of her brother's customers do not like how their lawns look after Traci has mowed them in concentric rectangles.

Other Types of Generalized Outcomes

Other types of generalized outcomes that do not fit easily into the categories of response maintenance, setting/situation generalization, and response generalization have been reported in the behavior analysis literature. For example, complex members of a person's repertoire sometimes appear quickly with little or no apparent direct conditioning, such as stimulus equivalence relations (Sidman, 1994). Another type of such rapid learning that appears to be a generalized outcome of other events has been called contingency adduction, a process whereby a behavior that was initially selected and shaped under one set of conditions is recruited by a different set of contingencies and takes on a new function in a person's repertoire (Andronis, 1983; Johnson & Layng, 1992).

2Examples of faulty stimulus control caused by flaws in the design of instructional materials, and suggestions for detecting and correcting those flaws, can be found in J. S. Vargas (1984).

Sometimes an intervention applied to one or more people results in behavior changes in other people who were not directly treated by the contingencies. Generalization across subjects refers to changes in the behavior of people not directly treated by an intervention as a function of treatment contingencies applied to other people. This phenomenon, which has been described with a variety of related or synonymous terms—vicarious reinforcement (Bandura, 1971; Kazdin, 1973), ripple effect (Kounin, 1970), and spillover effect (Strain, Shores, & Kerr, 1976)—provides another dimension for assessing the generalization of treatment effects. For example, Fantuzzo and Clement (1981) examined the degree to which behavior changes would generalize from one child who received teacher-administered or self-administered token reinforcement during a math activity to a peer seated next to the child.

Drabman, Hammer, and Rosenbaum (1979) combined four basic types of generalized treatment effects—(a) across time (i.e., response maintenance), (b) across settings (i.e., setting/situation generalization), (c) across behaviors (i.e., response generalization), and (d) across subjects—into a conceptual framework they called the generalization map. By viewing each type of generalized outcome as dichotomous (i.e., either present or absent) and by combining all possible permutations of the four categories, Drabman and colleagues arrived at 16 categories of generalized behavior change ranging from maintenance (Class 1) to subject-behavior-setting-time generalization (Class 16). Class 1 generalization is evident if the target behavior of the target subject(s) continues in the treatment setting after any "experiment-controlled contingencies" have been discontinued. Class 16 generalization, which Drabman and colleagues (1979) called the "ultimate form" of generalization, is evidenced by "a change in a nontarget subject's nontarget behavior which endures in a different setting after the contingencies have been withdrawn in the treatment setting" (p. 213).

Although Drabman and colleagues recognized that "with any heuristic technique the classifications may prove arbitrary" (p. 204), they provided objectively stated rules for determining whether a given behavioral event fits the requirements of each of their 16 classifications. Regardless of whether generalized behavior change consists of such distinctly separate and wide-ranging phenomena as detailed by Drabman and colleagues, their generalization map provided an objective framework by which the extended effects of behavioral interventions can be described and communicated. For example, Stevenson and Fantuzzo (1984) measured 15 of the 16 generalization map categories in a study of the effects of teaching a fifth-grade boy to use self-management techniques. They not only measured the effects of the intervention on the target behavior (math performance) in the instructional setting (school), but they also assessed effects on the student's math behavior at home, disruptive behavior at home and at school, both behaviors for a nontreated peer in both settings, and maintenance of all of the above.
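Because the generalization map treats each of its four dimensions as dichotomous, the 16 classes are simply the 2^4 present/absent combinations of those dimensions. The short sketch below enumerates them; the dimension names come from the text, but the numbering of the classes follows Drabman and colleagues' own ordering rules and is not reproduced here.

```python
from itertools import product

# The four dichotomous dimensions of Drabman, Hammer, and
# Rosenbaum's (1979) generalization map.
DIMENSIONS = ("time", "setting", "behavior", "subject")

def generalization_classes():
    """Enumerate all 2**4 = 16 present/absent combinations of the
    four dimensions; each combination is one class of generalized
    behavior change."""
    return [
        frozenset(dim for dim, present in zip(DIMENSIONS, flags) if present)
        for flags in product((False, True), repeat=len(DIMENSIONS))
    ]

classes = generalization_classes()
print(len(classes))  # 16
# The "ultimate form" (Class 16) is generalization across all four
# dimensions at once: subject-behavior-setting-time generalization.
print(frozenset(DIMENSIONS) in classes)  # True
```

The empty combination (no dimension present) and the full combination (all four present) anchor the two ends of the range the authors describe.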

Planning for Generalized Behavior Change

In general, generalization should be programmed, rather than expected or lamented.

—Baer, Wolf, and Risley (1968, p. 97)

In their review of 270 published studies relevant to generalized behavior change, Stokes and Baer (1977) concluded that practitioners should always "assume that generalization does not occur except through some form of programming . . . and act as if there were no such animal as 'free' generalization—as if generalization never occurs 'naturally,' but always requires programming" (p. 365). Of course, generalization of some type and degree does usually occur, whether or not it is planned. Such unplanned and unprogrammed generalization may be sufficient, but often it is not, particularly for many learners served by applied behavior analysts (e.g., children and adults with learning problems and developmental disabilities). And if left unchecked, unplanned generalized outcomes may be undesirable outcomes.

Achieving optimal generalized outcomes requires thoughtful, systematic planning. This planning begins with two major steps: (1) selecting target behaviors that will meet natural contingencies of reinforcement, and (2) specifying all desired variations of the target behavior and the settings/situations in which it should (and should not) occur after instruction has ended.

Selecting Target Behaviors That Will Meet Naturally Existing Contingencies of Reinforcement

The everyday environment is full of steady, dependable, hardworking sources of reinforcement for almost all of the behaviors that seem natural to us. That is why they seem natural to us.

—Donald M. Baer (1999, p. 15)

Numerous criteria have been suggested for determining whether a proposed teaching objective is relevant or functional for the learner. For example, the age-appropriateness of a skill and the degree to which it represents normalization are often cited as important criteria for choosing target behaviors for students with disabilities (e.g., Snell & Brown, 2006). In the end, however, there is just one ultimate criterion of functionality: A behavior is functional only to the extent that it produces reinforcement for the learner. This criterion holds no matter how important the behavior may be to the person's health or welfare, or how much teachers, family, friends, or the learner himself considers the behavior to be desirable. To repeat: A behavior is not functional if it does not produce reinforcement for the learner. Said another way: Behaviors that are not followed by reinforcers on at least some occasions will not be maintained.

Ayllon and Azrin (1968) recognized this fundamental truth when they recommended that practitioners follow the relevance-of-behavior rule when selecting target behaviors. The rule: Choose only those behaviors to change that will produce reinforcers in the postintervention environment. Baer (1999) believed so strongly in the importance of this criterion that he recommended that practitioners heed a similar rule:

A good rule is to not make any deliberate behavior changes that will not meet natural communities of reinforcement. Breaking this rule commits you to maintain and extend the behavior changes that you want, by yourself, indefinitely. If you break this rule, do so knowingly. Be sure that you are willing and able to do what will be necessary. (p. 16, emphasis in original)

Programming for the generalization and maintenance of any behavior for which a natural contingency of reinforcement exists, no matter the specific tactics employed, consists of getting the learner to emit the behavior in the generalization setting just often enough to contact the naturally occurring contingencies of reinforcement. Generalization and maintenance of the behavior from that point forward, while not assured, is a very good bet. For example, after receiving some basic instruction on how to operate the steering wheel, gas pedal, and brakes on a car, the naturally existing reinforcement and punishment contingencies involving moving automobiles and the road will select and maintain effective steering, acceleration, and braking. Very few drivers need booster training sessions on the basic operation of the steering wheel, gas pedal, and brakes.

We define a naturally existing contingency as any contingency of reinforcement (or punishment) that operates independent of the behavior analyst's or practitioner's efforts. This is a pragmatic, functional conception of a naturally existing contingency defined by the absence of the behavior analyst's efforts. Naturally existing contingencies include contingencies that operate without social mediation (e.g., walking fast on an icy sidewalk is often punished by a slip and fall) and socially mediated contingencies contrived and implemented by other people in the generalization setting. From the perspective of a special educator who is teaching a set of targeted social and academic skills to students for whom the general education classroom represents the generalization setting, a token economy operated by the general education classroom teacher is an example of the latter type of naturally existing contingency.3 Even though the token economy was contrived by the teacher in the general education classroom, it is a naturally existing contingency because it already operates in the generalization setting.

We define a contrived contingency as any contingency of reinforcement (or punishment) designed and implemented by a behavior analyst or practitioner to achieve the acquisition, maintenance, and/or generalization of a targeted behavior change. From the perspective of the teacher in the general education classroom who designed and implemented it, the token economy in the previous example is a contrived contingency.

In reality, practitioners are often charged with the difficult task of teaching important skills for which there are no dependable naturally existing contingencies of reinforcement. In such cases, practitioners should realize and plan for the fact that the generalization and maintenance of target behaviors will have to be supported, perhaps indefinitely, with contrived contingencies.

Specifying All Desired Variations of the Behavior and the Settings/Situations Where It Should (and Should Not) Occur

This stage of planning for generalized outcomes includes identifying all the desired behavior changes that need to be made and all the environments and stimulus conditions in which the learner should emit the target behavior(s) after direct training has ceased (Baer, 1999). For some target behaviors, the most important stimulus control for each response variation is clearly defined (e.g., reading C-V-C-E words) and restricted in number (e.g., solving multiplication facts). For many important target behaviors, however, the learner is likely to encounter a multitude of settings and stimulus conditions where the behavior, in a wide variety of response forms, is desired. Only by considering these possibilities prior to instruction can the behavior analyst design an intervention with the best chance of preparing the learner for them.

In one sense, this component of planning for generalized outcomes is similar to preparing a student for a future test without knowing the content or the format of all of the questions that will be on the test. The stimulus conditions and contingencies of reinforcement that exist in the generalization setting(s) will provide that test to the learner. Planning involves trying to determine what the final exam will cover (type and form of questions), whether there will be any trick questions (e.g., confusing stimuli that might evoke the target response when it should not occur), and whether the learner will need to use his new knowledge or skill in different ways (response generalization).

List All the Behaviors That Need to Be Changed

A list should be made of all the forms of the target behavior that need to be changed. This is not an easy task, but a necessary one to obtain a complete picture of the teaching task ahead. For example, if the target behavior is teaching Brian, the young boy with autism, to greet people, he should learn a variety of greetings in addition to "Hello, how are you?" Brian may also need many other behaviors to initiate and participate in conversations, such as responding to questions, taking turns, staying on topic, and so forth. He may also need to be taught when and with whom to introduce himself. Only by having a complete list of all the desired forms of the behavior can the practitioner make meaningful decisions about which behaviors to teach directly and which to leave to generalization.

The practitioner should determine whether and to what extent response generalization is desirable for all of the behavior changes listed, and then make a prioritized list of the variations of the target behavior he would like to see as generalized outcomes.

List All the Settings and Situations in Which the Target Behavior Should Occur

A list should be made of all the desired settings and situations in which the learner will emit the target behavior if optimal generalization is achieved. Will Brian need to introduce himself and talk with children his own age, to adults, to males and females? Will he need to talk with others at home, at school, in the lunchroom, on the playground? Will he be confronted with situations that may appear to be appropriate opportunities to converse but are not (e.g., an unknown adult approaches and offers candy) and for which an alternative response is needed (e.g., walking away, seeking out a known adult)? (This kind of analysis often adds additional behaviors to the list of skills to be taught.)

When all of the possible situations and settings have been identified, they should be prioritized according to their importance and the client's likelihood of encountering them. Further analysis of the prioritized environments should then be conducted. What discriminative stimuli usually set the occasion for the target behavior in these various settings and situations? What schedules of reinforcement for the target behavior are typical in these nontraining environments? What kinds of reinforcers are likely to be contingent on the emission of the target behavior in each of the settings? Only when she has answered all of these questions, if not by objective observation then at least by considered estimation, can the behavior analyst begin to have a full picture of the teaching task ahead.

3Token economies are described in the chapter entitled "Contingency Contracting, Token Economy, and Group Contingencies."

Is the Pre-Intervention Planning Worth It?

Obtaining all of the information just described requires considerable time and effort. Given limited resources, why not design an intervention and immediately begin trying to change the target behavior? It is true that many behavior changes do show generalization, even though the extension of the trained behavior across time, settings, and other behaviors was unplanned and unprogrammed. When target behaviors have been chosen that are truly functional for the subject and when those behaviors have been brought to a high level of proficiency under discriminative stimuli relevant to generalization settings, the chances of generalization are good. But what constitutes a high level of proficiency for certain behaviors in various settings? What are all of the relevant discriminative stimuli in all of the relevant settings? What are all the relevant settings?

Without a systematic plan, a practitioner will usually be ignorant of the answers to these vital questions. Few behaviors that are important enough to target have such limited needs for generalized outcomes that the answers to such questions are obvious. Just a cursory consideration of the behaviors, settings, and people related to a child introducing himself revealed numerous factors that may need to be incorporated into an instructional plan. A more thorough analysis would produce many more. In fact, a complete analysis will invariably reveal more behaviors to be taught, to one person or another, than time or resources would ever allow. And Brian—the 10-year-old who is learning to greet people and introduce himself—in all likelihood needs to learn many other skills also, such as self-help, academic, and recreation and leisure skills, to name just a few. Why then create all the lists in the first place when everything cannot be taught anyway? Why not just train and hope?4

4Teaching a new behavior without developing and implementing a plan to facilitate its maintenance and generalization is done so often that Stokes and Baer (1977) called it the "train and hope" approach to generalization.

Baer (1999) described six possible benefits of listing all the forms of behavior change and all the situations in which these behavior changes should occur.

1. You now see the full scope of the problem ahead of you, and thus see the corresponding scope that your teaching program needs to have.

2. If you teach less than the full scope of the problem, you do so by choice rather than by forgetting that some forms of the behavior could be important, or that there were some other situations in which the behavior change should or should not occur.

3. If less than a complete teaching program results in less than a complete set of behavior changes, you will not be surprised.

4. You can decide to teach less than there is to learn, perhaps because that is all that is practical or possible for you to do.

5. You can decide what is most important to teach. You can also decide to teach the behavior in a way that encourages the indirect development of some of the other forms of the desired behavior, as well as the indirect occurrence of the behavior in some other desired situations, that you will not or cannot teach directly.

6. But if you choose the option discussed in number 5 above, rather than the complete program implicit in number 1, you will do so knowing that the desired outcome would have been more certain had you taught every desirable behavior change directly. The best that you can do is to encourage the behavior changes that you do not cause directly. So, you will have chosen the option in number 5 either of necessity or else as a well-considered gamble after a thoughtful consideration of possibilities, costs, and benefits. (pp. 10–11)

After determining which behaviors to teach directly and in which situations and settings to teach those behaviors, the behavior analyst is ready to consider strategies and tactics for achieving generalization to untrained behaviors and settings.

Strategies and Tactics for Promoting Generalized Behavior Change

Figure 4 Strategies and tactics for promoting generalized behavior change.

Teach the Full Range of Relevant Stimulus Conditions and Response Requirements
1. Teach sufficient stimulus examples
2. Teach sufficient response examples

Make the Instructional Setting Similar to the Generalization Setting
3. Program common stimuli
4. Teach loosely

Maximize Contact with Reinforcement in the Generalization Setting
5. Teach the target behavior to levels of performance required by naturally existing contingencies of reinforcement
6. Program indiscriminable contingencies
7. Set behavior traps
8. Ask people in the generalization setting to reinforce the target behavior
9. Teach the learner to recruit reinforcement

Mediate Generalization
10. Contrive a mediating stimulus
11. Teach self-management skills

Train to Generalize
12. Reinforce response variability
13. Instruct the learner to generalize

Various authors have described conceptual schemes and taxonomies of methods for promoting generalized behavior change (e.g., Egel, 1982; Horner, Dunlap, & Koegel, 1988; Osnes & Lieblein, 2003; Stokes & Baer, 1977; Stokes & Osnes, 1989). The conceptual scheme presented here is informed by the work of those authors and others, and by our own experiences in designing, implementing, and evaluating procedures for promoting generalized outcomes and in teaching practitioners to use them. Although numerous methods and techniques have been demonstrated and given a variety of names, most tactics that effectively promote generalized behavior change can be classified under five strategic approaches:

• Teach the full range of relevant stimulus conditions and response requirements.

• Make the instructional setting similar to the generalization setting.

• Maximize the target behavior's contact with reinforcement in the generalization setting.

• Mediate generalization.

• Train to generalize.

In the following sections we describe and provide examples of 13 tactics applied behavior analysts have used to accomplish these five strategies (see Figure 4). Although each tactic is described individually, most efforts to promote generalized behavior change entail a combination of these tactics (e.g., Ducharme & Holborn, 1997; Grossi, Kimball, & Heward, 1994; Hughes, Harmer, Killian, & Niarhos, 1995; Ninness, Fuerst, & Rutherford, 1991; Trask-Tyler, Grossi, & Heward, 1994).

Teach the Full Range of Relevant Stimulus Conditions and Response Requirements

The most common mistake that teachers make, when they want to establish a generalized behavior change, is to teach one good example of it and expect the student to generalize from that example.

— Donald M. Baer (1999, p. 15)

To be most useful, most important behaviors must be performed in various ways across a wide range of stimulus conditions. Consider a person skilled in reading, math, conversing with others, and cooking. That person can read thousands of different words; add, subtract, multiply, and divide any combination of numbers; make a multitude of relevant and appropriate comments when talking with others; and measure, combine, and prepare numerous ingredients in hundreds of recipes. Helping learners achieve such wide-ranging performances presents an enormous challenge to the practitioner.

One approach to this challenge would be to teach every desired form of a target behavior in every setting/situation in which the learner may need that behavior in the future. Although this approach would eliminate the need to program for response generalization and setting/situation generalization (response maintenance would remain the only problem), it is seldom possible and never practical. A teacher cannot provide direct instruction on every printed word a student may encounter in the future, or teach a student every measuring, pouring, stirring, and sautéing movement needed to make every dish he may want to make in the future. Even for most skill areas for which it would be possible to teach every possible example (e.g., instruction could be provided on all 900 different single-digit-times-two-digit multiplication problems), to do so would be impractical for many reasons, not the least of which is that the student needs to learn not only many other types of math problems but also skills in other curriculum areas.

A general strategy called teaching sufficient examples consists of teaching the student to respond to a subset of all of the possible stimulus and response examples and then assessing the student's performance on untrained examples.5 For example, the generalization of a student's ability to solve two-digit-minus-two-digit arithmetic problems with regrouping can be assessed by asking the student to solve several problems of the same type for which no instruction or guided practice has been provided. If the results of this generalization probe show that the student responds correctly to untaught examples, then instruction can be halted on this class of problems. If the student performs poorly on the generalization probe, the practitioner teaches additional examples before again assessing the student's performance on a new set of untaught examples. This cycle of teaching new examples and probing with untaught examples continues until the learner consistently responds correctly to untrained examples representing the full range of stimulus conditions and response requirements found in the generalization setting.
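The teach-then-probe cycle just described can be sketched as a simple loop. This is an illustrative skeleton only: the `teach` and `probe` callables, the batch size, and the mastery criterion are hypothetical stand-ins for whatever instruction and generalization probes a practitioner actually uses.

```python
def teach_sufficient_examples(examples, teach, probe,
                              criterion=0.9, batch_size=3):
    """Teach successive subsets of `examples`, probing with untaught
    examples after each subset, until the probe meets `criterion`
    (the proportion of untaught examples responded to correctly).

    teach(batch)    -- deliver instruction on a list of examples
    probe(untaught) -- return proportion correct (0.0 to 1.0) on
                       examples that received no instruction
    Returns the number of examples that were taught directly.
    """
    taught = 0
    while taught < len(examples):
        # Teach the next batch of examples directly.
        batch = examples[taught:taught + batch_size]
        teach(batch)
        taught += len(batch)
        # Probe with examples that have received no instruction.
        untaught = examples[taught:]
        if not untaught or probe(untaught) >= criterion:
            break  # criterion met (or nothing left to probe): halt
    return taught
```

A learner who generalizes after the first few taught examples exits the loop early; a learner who never generalizes is eventually taught every example directly, mirroring the continuum described earlier.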

Teach Sufficient Stimulus Examples

The tactic for promoting setting/situation generalization called teaching sufficient stimulus examples involves teaching the learner to respond correctly to more than one example of antecedent stimulus conditions and probing for generalization to untaught stimulus examples. A different stimulus example is incorporated into the teaching program each time a change is made in any dimension of the instructional item itself or the environmental context in which the item is taught. Examples of four dimensions by which different instructional examples can be identified and programmed are the following:

• The specific item taught (e.g., multiplication facts: 7 × 2, 4 × 5; letter sounds: a, t)

• The stimulus context in which the item is taught (e.g., multiplication facts presented in vertical format, in horizontal format, in word problems; saying the sound of t when it appears at the beginning and end of words: tab, bat)


5. Other terms commonly used for this strategy for promoting generalized behavior change are training sufficient exemplars (Stokes & Baer, 1977) and training diversely (Stokes & Osnes, 1989).

• The setting where instruction occurs (e.g., large-group instruction at school, collaborative learning group, home)

• The person doing the teaching (e.g., classroom teacher, peer, parent)

As a general rule, the more examples the practitioner uses during instruction, the more likely the learner will respond correctly to untrained examples or situations. The actual number of examples that must be taught before sufficient generalization occurs will vary as a function of factors such as the complexity of the target behavior being taught, the teaching procedures employed, the student's opportunities to emit the target behavior under the various conditions, the naturally existing contingencies of reinforcement, and the learner's history of reinforcement for generalized responding.

Sometimes teaching as few as two examples will produce significant generalization to untaught examples. Stokes, Baer, and Jackson (1974) taught a greeting response to four children with severe mental retardation who seldom acknowledged or greeted other people. The senior author, working as a dormitory assistant, used unconditioned reinforcers (potato chips and M&Ms) and praise to shape the greeting response (at least two back-and-forth waves of a raised hand). Then this initial trainer maintained the newly learned hand wave by contriving three to six contacts per day with each of the children in various settings (e.g., playroom, corridor, dormitory, courtyard). Throughout the study as many as 23 different staff members systematically approached the children during different times of the day in different settings and recorded whether the children greeted them with a hand-waving response. If a child greeted a prober with the waving response, the prober responded with "Hello, (name)." Approximately 20 such generalization probes were conducted each day with each child.

Immediately after learning the greeting response with just one trainer, one of the children (Kerry) showed good setting/situation generalization by using it appropriately in most of her contacts with other staff members. However, the other three children failed to greet staff members most of the time, even though they continued to greet the original trainer on virtually every occasion. A second staff member then began to reinforce and maintain the greeting responses of these three children. As a result of adding the second trainer, the children's greeting behavior showed widespread generalization to the other staff members. Stokes and colleagues' (1974) study is important for at least two reasons. First, it demonstrated an effective method for continual assessment of setting/situation generalization across numerous examples (in this case, people). Second, the study showed that it is sometimes possible to produce widespread generalization by programming only two examples.

Teach Sufficient Response Examples

Instruction that provides practice with a variety of response topographies helps to ensure the acquisition of desired response forms and also promotes response generalization in the form of untrained topographies. Often called multiple exemplar training, this tactic typically incorporates both stimulus and response variations. Multiple exemplar training was used to achieve the acquisition and generalization of affective behavior by children with autism (Gena, Krantz, McClannahan, & Poulson, 1996); cooking skills by young adults with disabilities (Trask-Tyler, Grossi, & Heward, 1994); domestic skills (Neef, Lensbower, Hockersmith, DePalma, & Gray, 1990); vocational skills (Horner, Eberhard, & Sheehan, 1986); daily living skills (Horner, Williams, & Steveley, 1987); and requests for assistance (Chadsey-Rusch, Drasgow, Reinoehl, Halle, & Collet-Klingenberg, 1993).

Four female high school students with moderate mental retardation participated in a study by Hughes and colleagues (1995) that assessed the effects of an intervention they called multiple-exemplar self-instructional training on the acquisition and generalization of the students' conversational interactions with peers. The young women were recommended for the study because they initiated conversations and responded to peers' efforts to talk with them at "low or nonexistent rates" and maintained little eye contact. One of the students, Tanya, had recently been refused a job at a restaurant because of her "reticence and lack of eye contact during her job interview" (p. 202).

A key element of Hughes and colleagues' intervention was practicing a wide variety of conversation starters and statements with different peer teachers. Ten volunteer peer teachers recruited from general education classrooms helped teach conversation skills to the participants. The peer teachers were male and female, ranged in grade level from freshmen to seniors, and represented African American, Asian American, and Euro-American ethnic groups. Instead of learning a few scripted conversation openers, the participants practiced using multiple examples of conversation starters selected from a pooled list of conversation openers used by general education students. Additionally, participants were encouraged to develop individual adaptations of statements, which further promoted response generalization by increasing the number and range of conversational statements that were likely to be used in subsequent conversations.

Before, during, and after multiple exemplar training, generalization probes were conducted of each participant's use of self-instructions, eye contact, and initiating and responding to conversation partners. The 23 to 32 different students who served as conversation partners for each participant represented the full range of student characteristics in the school population (e.g., gender, age, ethnicity, students with and without disabilities) and included students who were known and unknown to the participants prior to the study. The rate of conversation initiations by all four participants increased during multiple exemplar training to levels approximating those of general education students and was maintained at those rates after the intervention was terminated completely (see Figure 5).

6. Siegfried Engelmann and Douglas Carnine's (1982) Theory of Instruction: Principles and Applications is one of the most thorough and sophisticated treatments of the selection and sequencing of teaching examples for effective and efficient curriculum design.

General Case Analysis

Teaching a learner to respond correctly to multiple examples will not automatically produce generalized responding to untaught examples. To achieve an optimal degree of generalization and discrimination, the behavior analyst must pay close attention to the specific examples used during instruction; not just any examples will do. Optimally effective instructional design requires selecting teaching examples that represent the full range of stimulus situations and response requirements in the natural environment.6 General case analysis (also called general case strategy) is a systematic method for selecting teaching examples that represent the full range of stimulus variations and response requirements in the generalization setting (Albin & Horner, 1988; Becker & Engelmann, 1978; Engelmann & Carnine, 1982).

A series of studies by Horner and colleagues demonstrated the importance of teaching examples that systematically sample the range of stimulus variations and response requirements the learner will encounter in the generalization setting (e.g., Horner, Eberhard, & Sheehan, 1986; Horner & McDonald, 1982; Horner, Williams, & Steveley, 1987). In a classic example of this line of research, Sprague and Horner (1984) evaluated the effects of general case instruction on the generalized use of vending machines by six high school students with moderate to severe mental retardation. The dependent variable was the number of vending machines each student operated correctly during generalization probes of 10 different machines located within the community. For a probe trial to be scored as correct, a student had to correctly perform a chain of five responses (i.e., insert the proper number of coins, activate the machine for the desired item, and so on). The researchers selected the 10 vending machines used to assess generalization because each student's performance on those machines would serve as an index of his performance "across all vending machines dispensing food and beverage items costing between $.20 and $.75 in Eugene, Oregon" (p. 274). None of the vending machines used in the generalization probes was identical to any of the vending machines used during instruction.

[Figure 5: Conversation initiations per minute by four high school students with disabilities (Patti, Carrie Ann, Tanya, and Melissa) to conversation partners with and without disabilities during generalization sessions across baseline, multiple-exemplar self-instruction training, maintenance, and follow-up conditions in gym, classroom, lunchroom, and workroom settings. Open symbols denote untrained settings; filled symbols denote the trained setting. The shaded bands represent typical performance by general education students. From "The Effects of Multiple-Exemplar Training on High-School Students' Generalized Conversational Interactions" by C. Hughes, M. L. Harmer, D. J. Killian, and F. Niarhos, 1995, Journal of Applied Behavior Analysis, 28, p. 210. Copyright 1995 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.]

After a single baseline probe verified each student's inability to use the 10 vending machines in the community, a condition the researchers called "single-instance instruction" began. Under this condition each student received individual training on a single vending machine located in the school until he used the machine independently for three consecutive correct trials on each of two consecutive days. Even though each student had learned to operate the training machine without errors, the generalization probe following single-instance instruction revealed little or no success with the vending machines in the community (see Probe Session 2 in Figure 6). The continued poor performance of Students 2, 3, 5, and 6 on successive generalization probes that followed additional instruction with the single-instance training machine shows that overlearning on a single example does not necessarily aid generalization. Further evidence of the limited generalization obtained from single-instance instruction is the fact that seven of the eight total probe trials performed correctly by all students after single-instance instruction were on Probe Machine 1, the vending machine that most closely resembled the machine on which the students had been trained.

[Figure 6: The number of nontrained probe machines operated correctly by each of the six students across baseline, single-instance, multiple-instance, and general case phases and Probe Sessions 1 through 7. From "The Effects of Single Instance, Multiple Instance, and General Case Training on Generalized Vending Machine Use by Moderately and Severely Handicapped Students" by J. R. Sprague and R. H. Horner, 1984, Journal of Applied Behavior Analysis, 17, p. 276. Copyright 1984 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.]

Next, multiple-instance training was implemented with Students 4, 5, and 6. The teaching procedures and performance criteria for multiple-instance training were the same as those used in the single-instance condition except that each student received instruction until he reached criterion on all three new machines. Sprague and Horner (1984) purposely selected vending machines to use in the multiple-instance instruction that were similar to one another and that did not sample the range of stimulus variations and response requirements that defined the vending machines in the community. After reaching training criterion on three additional machines, Students 4, 5, and 6 were still unable to operate the machines in the community. During the six probe sessions that followed multiple-instance instruction, these students correctly completed only 9 of the 60 total trials.

The researchers then introduced general case instruction in multiple-baseline fashion across students. This condition was the same as multiple-instance instruction except that the three different vending machines used in general case training, when combined with the single-instance machine, provided the students with practice across the total range of stimulus conditions and response variations found in vending machines in the community. None of the training machines, however, was exactly the same as any machine used in the generalization probes. After reaching training criterion on the general case machines, all six students showed substantial improvements in their performance on the 10 untrained machines. Sprague and Horner (1984) speculated that Student 3's poor performance on the first generalization probe following general case instruction was caused by a ritualistic pattern of inserting coins that he had developed during previous probe sessions. After receiving repeated practice on the coin insertion step during a training session between Probe Sessions 5 and 6, Student 3's performance on the generalization machines improved greatly.

Negative, or “Don’t Do It,” Teaching Examples

Generalization across every possible condition or situation is rarely desirable. Teaching a student where and when to use a new skill or bit of knowledge does not mean that he also knows where and when not to use this newly learned behavior. Brian, for example, needs to learn that he should not say, "Hello, how are you?" to people that he greeted within the past hour or so. Learners must be taught to discriminate the stimulus conditions that signal when responding is appropriate from stimulus conditions that signal when responding is inappropriate.

Instruction that includes "don't do it" teaching examples intermixed with positive examples provides learners with practice discriminating stimulus situations in which the target behavior should not be emitted (i.e., SΔs) from those conditions when the behavior is appropriate. This sharpens the stimulus control necessary to master many concepts and skills (Engelmann & Carnine, 1982).7

Horner, Eberhard, and Sheehan (1986) incorporated "don't do it" examples into training programs to teach four high school students with moderate to severe mental retardation how to bus tables in cafeteria-style restaurants. To correctly bus a table, the student had to remove all dishes, silverware, and garbage from the tabletop, chairs, and the floor under and around the table; wipe the tabletop; straighten chairs; and place dirty dishes and garbage in appropriate receptacles. In addition, the students were taught to inquire, through the use of cards, if a customer was finished with empty dishes. The three settings, one for training and two for generalization probes, differed in terms of size and furniture characteristics and configurations.

7. Teaching examples used to help students discriminate when not to respond (i.e., SΔs) are sometimes called negative examples and contrasted with positive examples (i.e., SDs). However, in our work in teaching this concept, practitioners have told us that the term negative teaching example suggests that the teacher is modeling or showing the learner how not to perform the target behavior. Instruction on the desired topography for some behaviors may be aided by providing students with models of how not to perform certain behavior (i.e., negative examples), but the function of "don't do it" examples is to help the learner discriminate antecedent conditions that signal an inappropriate occasion for responding.

Each training trial required the student to attend to the following stimulus features of a table: (a) the presence or absence of people at a table, (b) whether people were eating at the table, (c) the amount and/or status of food on dishes, (d) the presence or absence of garbage at a table, and (e) the location of garbage and/or dirty dishes at a table. Training consisted of 30-minute sessions involving six table types that represented the range of conditions likely to be encountered in a cafeteria-style restaurant. A trainer modeled correct table bussing, verbally prompted correct responding, stopped the student when errors occurred, recreated the situation, and provided additional modeling and assistance. The six training examples consisted of four to-be-bussed tables and two not-to-be-bussed tables (see Table 1).

During generalization probe sessions, which were conducted in two restaurants not used for training, each student was presented with 15 tables selected to represent the range of table types the students could be expected to encounter if employed in a cafeteria-style restaurant. The 15 probe tables consisted of 10 to-be-bussed tables and 5 not-to-be-bussed tables. The results showed a functional relation between general case instruction that included not-to-be-bussed tables and "immediate and pronounced improvement in the percentage of probe tables responded to correctly" (p. 467).

Negative teaching examples are necessary when discriminations must be made between appropriate and inappropriate conditions for a particular response. Practitioners should ask this question: Is responding always appropriate in the generalization setting(s)? If the answer is no, then "don't do it now" teaching examples should be part of instruction.

Does the teaching setting or situation naturally or automatically include a sufficient number and range of negative examples? The teaching situation must be analyzed to answer this important question. Practitioners may need to contrive some negative teaching examples. Practitioners should not assume the natural environment will readily reveal sufficient negative examples. Conducting training in the natural environment is no guarantee that learners will be exposed to stimulus situations that they are likely to encounter in the generalization environment after training. For example, in the study on teaching table bussing described earlier, Horner and colleagues (1986) noted that, "on some days the trainer needed actively to set up one or more table types to ensure that a student had access to a table type that was not 'naturally' available" (p. 464).

Table 1  Six Training Examples Used to Teach Students with Disabilities How to Bus Tables in Cafeteria-Style Restaurants

Training Example | Presence of People and Possessions | People Eating or Not Eating | Dishes: Empty/Part/New Food | Garbage: Present or Not Present | Location of Garbage and Dishes | Correct Response
1 | 0 people + possessions | N/A | Partial | Present | Table, chairs | Don't bus
2 | 0 people | N/A | Partial | Present | Table, floor, chair | Bus
3 | 2 people | Eating | New food | Present | Table, chair, floor | Don't bus
4 | 0 people | N/A | Empty | Present | Table, floor | Bus
5 | 1 person | Not eating | Empty | Present | Chair, floor | Bus
6 | 2 people | Not eating | Empty | Present | Table | Bus

From "Teaching Generalized Table Bussing: The Importance of Negative Teaching Examples" by R. H. Horner, J. M. Eberhard, and M. R. Sheehan, 1986, Behavior Modification, 10, p. 465. Copyright 1986 by Sage Publications, Inc. Used by permission.

"Don't do it" teaching examples should be selected and sequenced according to the degree to which they differ from positive examples (i.e., SDs). The most effective negative teaching examples will share many of the relevant characteristics of the positive teaching examples (Horner, Dunlap, & Koegel, 1988). Such minimum-difference negative teaching examples help the learner to perform the target behavior with the precision required by the natural environment. Minimum-difference teaching examples help eliminate "generalization errors" due to overgeneralization and faulty stimulus control. For example, the "don't bus" tables used by Horner and colleagues (1986) shared many features with the "bus" tables (see Table 1).

Make the Instructional Setting Similar to the Generalization Setting

Fresno State coach Pat Hill expects Ohio Stadium to be a new experience for the Bulldogs, who will visit for the first time. For a practice last week in FSU's stadium, Hill hired a production company to blast the Ohio State fight song—at about 90 decibels—throughout the two-hour session. "We created some noise and atmosphere to give us a feel of a live game," Hill said.

—Columbus Dispatch (August 27, 2000)

A basic strategy for promoting generalization is to incorporate into the instructional setting stimuli that the learner is likely to encounter in the generalization setting. The greater the similarity between the instructional environment and the generalization environment, the more likely the target behavior will be emitted in the generalization setting. The principle of stimulus generalization states that a behavior is likely to be emitted in the presence of a stimulus very similar to the stimulus conditions in which the behavior was reinforced previously, but the behavior will likely not be emitted under stimulus conditions that differ significantly from the training stimulus.

Stimulus generalization is a relative phenomenon: The more a given stimulus configuration resembles the stimulus conditions present during instruction, the greater the probability that the trained response will be emitted, and vice versa. A generalization setting that differs significantly from the instructional setting may not provide sufficient stimulus control over the target behavior. Such a setting may also contain stimuli that impede the target behavior because their novelty confuses or startles the learner. Exposing the learner during instruction to stimuli that are commonly found in the generalization setting increases the likelihood that those stimuli will acquire some stimulus control over the target behavior and also prepares the learner for the presence of stimuli in the generalization setting that have the potential of impeding performance. Two tactics used by applied behavior analysts to implement this basic strategy are programming common stimuli and teaching loosely.

Program Common Stimuli

Programming common stimuli means including typical features of the generalization setting in the instructional setting. Although behavior analysts have attached a special term to this tactic, successful practitioners in many fields have long used this technique for promoting generalized behavior change. For example, coaches, music teachers, and theater directors hold scrimmages, mock auditions, and dress rehearsals to prepare their athletes, musicians, and actors to perform important skills in settings that include the sights, sounds, materials, people, and procedures that simulate as closely as possible those in the "real world."

Van den Pol and colleagues (1981) programmed common stimuli when they taught three young adults with disabilities how to order and eat in fast-food restaurants. The researchers used numerous items and photos from actual restaurants to make the classroom simulate the conditions found in actual restaurants. Plastic signs with pictures and names of various McDonald's sandwiches were posted on the classroom wall, a table was transformed into a mock "counter" for role-playing transactions, and the students practiced responding to 60 photographic slides taken in actual restaurants showing both positive and negative ("don't do it") examples of situations customers are likely to encounter.

Why go to all the trouble of simulating the generalization setting? Why not just conduct instruction in the generalization setting itself to ensure that the learner experiences all of the relevant aspects of the setting? First, conducting instruction in natural settings is not always possible or practical. Considerable resources and time may be necessary to transport students to community-based settings.

Second, community-based training may not expose students to the full range of examples they are likely to encounter later in the same setting. For example, students who receive in situ instruction for grocery shopping or street crossing during school hours may not experience the long lines at the checkout counters or heavy traffic patterns typical of evening hours.

Third, instruction in natural settings may be less effective and efficient than classroom instruction because the trainer cannot halt the natural flow of events to contrive an optimal number and sequence of training trials (e.g., Neef, Lensbower, Hockersmith, DePalma, & Gray, 1990).

Fourth, instruction in simulated settings can be safer, particularly with target behaviors that must be performed in potentially dangerous environments or that have severe consequences if performed incorrectly (e.g., Miltenberger et al., 2005), or when children or people with learning problems must perform complex procedures. If the procedures involve invading the body or errors during practice are potentially hazardous, simulation training should be used. For example, Neef, Parrish, Hannigan, Page, and Iwata (1990) had children with neurogenic bladder complications practice performing self-catheterization skills on dolls.

Programming common stimuli is a straightforward two-step process of (a) identifying salient stimuli that characterize the generalization setting(s) and (b) incorporating those stimuli into the instructional setting. A practitioner can identify possible stimuli in the generalization setting to make common by direct observation or by asking people familiar with the setting. Practitioners should conduct observations in the generalization setting(s) and write down prominent features of the environment that might be important to include during training. When direct observation is not feasible, practitioners can obtain secondhand knowledge of the setting by interviewing or giving checklists to people who have firsthand knowledge of the generalization setting—those who live, work in, or are otherwise familiar with the generalization setting(s) in question.

If a generalization setting includes important stimuli that cannot be recreated or simulated in the instructional setting, then at least some training trials must be conducted in the generalization setting. However, as pointed out previously, practitioners should not assume that community-based instruction will guarantee students' exposure to all of the important stimuli common to the generalization setting.

Teach Loosely

Applied behavior analysts control and standardize intervention procedures to maximize their direct effects, and so the effects of their interventions can be interpreted and replicated by others. Yet restricting teaching procedures to a "precisely repetitive handful of stimuli or formats may, in fact, correspondingly restrict generalization of the lessons being learned" (Stokes & Baer, 1977, p. 358). To the extent that generalized behavior change can be viewed as the opposite of strict stimulus control and discrimination, one technique for facilitating generalization is to vary as many of the noncritical dimensions of the antecedent stimuli as possible during instruction.

Teaching loosely, randomly varying noncritical aspects of the instructional setting within and across teaching sessions, has two rationales for promoting generalization. First, teaching loosely reduces the likelihood that a single stimulus or small group of noncritical stimuli will acquire exclusive control over the target behavior. A target behavior that inadvertently comes under the control of a stimulus present in the instructional setting but not always present in the generalization setting may not be emitted in the generalization setting. Here are two examples of this type of faulty stimulus control:

• Following teachers' directions: A student with a history of receiving reinforcement for complying with teachers' directions when they are given in a loud voice and accompanied by a stern facial expression may not follow directions that do not contain one or both of those noncritical variables. The discriminative stimulus for the student's compliance with teacher directions should be the content of the teacher's statements.


• Assembling bicycle sprocket sets: A new employee at the bicycle factory inadvertently learns to assemble rear sprocket sets by putting a red sprocket on top of a green sprocket and a green sprocket on top of a blue sprocket because the sprocket sets on a particular bicycle model in production on the day she was trained were colored in that fashion. However, proper assembly of a sprocket set has nothing to do with the colors of the individual sprockets; the relevant variable is the relative size of the sprockets (i.e., the biggest sprocket goes on the bottom, the next biggest on top of that one, and so on).

Systematically varying the presence and absence of noncritical stimuli during instruction greatly decreases the chance that a functionally irrelevant factor, such as a teacher's tone of voice or sprocket color in these two examples, will acquire control of the target behavior (Kirby & Bickel, 1988).

A second rationale for loose teaching is that including a wide variety of noncritical stimuli during instruction increases the probability that the generalization setting will include at least some of the stimuli that were present during instruction. In this sense, loose teaching acts as a kind of catchall effort at programming common stimuli and makes it less likely that a student's performance will be impeded or "thrown off" by the presence of a "strange" stimulus.

Loose teaching applied to the previous two examples might entail the following:

• Following teachers' directions: During instruction the teacher varies all of the factors mentioned earlier (e.g., tone of voice, facial expression), plus gives directions while standing, while sitting, from different places within the classroom, at different times of the day, while the student is alone and in groups, while looking away from the student, and so on. In each instance, reinforcement is contingent on the student's compliance with the content of the teacher's direction irrespective of the presence or absence of any of the noncritical features.

• Assembling bicycle sprocket sets: During training the new employee assembles sprocket sets containing sprockets of widely varying colors, after receiving the component sprockets in varied sequences, when the factory floor is busy, at different times during a work shift, with and without music playing, and so forth. Irrespective of the presence, absence, or values of any of these noncritical factors, reinforcement would be contingent on correct assembly of sprockets by relative size.

Seldom used as a stand-alone tactic, loose teaching is often a recognizable component of interventions when generalization to highly variable and diverse settings or situations is desired. For example, Horner and colleagues (1986) incorporated loose teaching into their training program for table busing by systematically but randomly varying the location of the tables, the number of people at the tables, whether food was completely or partially eaten, the amount and location of garbage, and so forth. Hughes and colleagues (1995) incorporated loose teaching by varying the peer teachers and varying the locations of training sessions. Loose teaching is often a recognizable feature of language training programs that use milieu, incidental, and naturalistic teaching methods (e.g., Charlop-Christy & Carpenter, 2000; McGee, Morrier, & Daly, 1999; Warner, 1992).

Few studies evaluating the effects of using loose teaching in isolation have been reported. One exception is an experiment by Campbell and Stremel-Campbell (1982), who evaluated the effectiveness of loose teaching as a tactic for facilitating the generalization of newly acquired language by two students with moderate mental retardation. The students were taught the correct use of the words is and are in "wh" questions (e.g., "What are you doing?"), yes/no reversal questions (e.g., "Is this mine?"), and statements (e.g., "These are mine."). Each student received two 15-minute language training sessions conducted within the context of other instructional activities that were part of each child's individualized education program, one during an academic task and the second during a self-help task. The student could initiate a language interaction based on the wide variety of naturally occurring stimuli, and the teacher could try to evoke a statement or question from the student by intentionally misplacing instructional materials or offering indirect prompts. Generalization probes of the students' language use during two daily 15-minute free-play periods revealed substantial generalization of the language structures acquired during the loose teaching sessions.

The learner's performance of the target behavior should be established under fairly restricted, simplified, and consistent conditions before much "looseness" is introduced. This is particularly important when teaching complex and difficult skills. Only noncritical (i.e., functionally irrelevant) stimuli should be "loosened." Practitioners should not inadvertently loosen stimuli that reliably function in the generalization setting as discriminative stimuli (SDs) or as "don't do it" examples (SΔs). Stimuli known to play important roles in signaling when and when not to respond should be systematically incorporated into instructional programs as teaching examples. A stimulus condition that may be functionally irrelevant for one skill may be a critical SD for another skill.

Generalization and Maintenance of Behavior Change

Taking the notion of varying noncritical aspects of the instructional setting and procedures to its logical limit, Baer (1999) offered the following advice for loose teaching:

• Use two or more teachers.

• Teach in two or more places.

• Teach from a variety of positions.

• Vary your tone of voice.

• Vary your choice of words.

• Show the stimuli from a variety of angles, using sometimes one hand and sometimes the other.

• Have other persons present sometimes and not other times.

• Dress quite differently on different days.

• Vary the reinforcers.

• Teach sometimes in bright light, sometimes in dim light.

• Teach sometimes in noisy settings, sometimes in quiet ones.

• In any setting, vary the decorations, vary the furniture, and vary their locations.

• Vary the times of day when you and everyone else teach.

• Vary the temperature in the teaching settings.

• Vary the smells in the teaching settings.

• Within the limits possible vary the content of what's being taught.

• Do all of this as often and as unpredictably as possible. (p. 24)

Of course, Baer (1999) was not suggesting that a teacher needs to vary all of these factors for every behavior taught. But building a reasonable degree of "looseness" into teaching is an important element of a teacher's overall effort to program for generalization rather than train and hope.

Maximize Contact with Reinforcement in the Generalization Setting

Even though a practitioner is successful in getting the learner to emit a newly acquired target behavior in a generalization setting with a naturally existing contingency of reinforcement, generalization and maintenance may be short-lived if the behavior makes insufficient contact with reinforcement. In such cases the practitioner's efforts to promote generalization revolve around ensuring that the target behavior contacts reinforcement in the generalization setting. Five of the 13 tactics for promoting generalized behavior change described in this chapter involve some form of arranging or contriving for the target behavior to be reinforced in the generalization setting.

Teach Behavior to Levels Required by Natural Contingencies

Baer (1999) suggested that a common mistake practitioners make when attempting to employ natural contingencies of reinforcement is failing to teach the behavior change well enough so that it contacts the contingency.

Sometimes behavior changes that seem to need generalization may only need better teaching. Try making the students fluent, and see if they still need further support for generalization. Fluency may consist of any or all of the following: high rate of performance, high accuracy of performance, fast latency, given the opportunity to respond, and strong response. (p. 17)

A new behavior may occur in the generalization setting but fail to make contact with the naturally existing contingencies of reinforcement. Common variables that diminish contact with reinforcement in the generalization setting include the accuracy of the behavior, the dimensional quality of the behavior (i.e., frequency, duration, latency, magnitude), and the form (topography) of the behavior. The practitioner may need to enhance the learner's performance in one or more of these variables to ensure that the new behavior will meet the naturally existing contingencies of reinforcement. For example, when given a worksheet to complete at his desk, a student's behavior that is consistent with the following dimensions is unlikely to contact reinforcement for completing the task, even if the student has the ability to complete each worksheet item accurately.

• Latency too long. A student who spends 5 minutes "daydreaming" before he begins reading the directions may not finish in time to obtain reinforcement.

• Rate too low. A student who needs 5 minutes to read the directions for an independent seatwork assignment that his peers read in less than 1 minute may not finish in time to obtain reinforcement.

• Duration too brief. A student who can work without direct supervision for only 5 minutes at a time will not be able to complete any task requiring more than 5 minutes of independent work.

The solution for this kind of generalization problem, if not always simple, is straightforward. The behavior change must be made more fluent: The learner must be taught to emit the target behavior at a rate commensurate with the naturally occurring contingency, with more accuracy, within a shorter latency, and/or at a greater magnitude. Generalization planning should include identification of the levels of performance necessary to access existing criteria for reinforcement.

Program Indiscriminable Contingencies

Applied behavior analysts purposely design and implement interventions so the learner receives consistent and immediate consequences for emitting the target behavior. Although consistent and immediate consequences are often necessary to help the learner acquire new behavior, those very contingencies can impede generalization and maintenance. The clear, predictable, and immediate consequences that are typically part of systematic instruction can actually work against generalized responding. This is most likely to occur when a newly acquired skill has not yet contacted naturally existing contingencies of reinforcement, and the learner can discriminate when the instructional contingencies are absent in the generalization settings. If the presence or absence of the controlling contingencies in the generalization setting is obvious or predictable to the learner ("Hey, the game's off. There's no need to respond here/now"), the learner may stop responding in the generalization setting, and the behavior change the practitioner worked so hard to develop may cease to occur before it can contact the naturally existing contingency of reinforcement.

An indiscriminable contingency is one in which the learner cannot discriminate whether the next response will produce reinforcement. As a tactic for promoting generalization and maintenance, programming indiscriminable contingencies involves contriving a contingency in which (a) reinforcement is contingent on some, but not all, occurrences of the target behavior in the generalization setting, and (b) the learner is unable to predict which responses will produce reinforcement.

The basic rationale for programming indiscriminable contingencies is to keep the learner responding often enough and long enough in the generalization setting for the target behavior to make sufficient contact with the naturally existing contingencies of reinforcement. From that point on, the need to program contrived contingencies to promote generalization will be moot. Applied behavior analysts use two related techniques to program indiscriminable contingencies: intermittent schedules of reinforcement and delayed rewards.

Intermittent Schedules of Reinforcement. A newly learned behavior often must occur repeatedly over a period of time in the generalization setting before it contacts a naturally existing contingency of reinforcement. During that time, an extinction condition exists for responses emitted in the generalization setting. The current or most recent schedule of reinforcement for a behavior in the instructional setting plays a significant role in how many responses will be emitted in the generalization setting prior to reinforcement. Behaviors that have been under continuous schedules of reinforcement (CRF) show very limited response maintenance under extinction. When reinforcement is no longer available, responding is likely to decrease rapidly to prereinforcement levels. On the other hand, behaviors with a history of intermittent schedules of reinforcement often continue to be emitted for relatively long periods of time after reinforcement is no longer available (e.g., Dunlap & Johnson, 1985; Hoch, McComas, Thompson, & Paone, 2002).

An experiment by Koegel and Rincover (1977, Experiment II) showed the effects of intermittent schedules of reinforcement on response maintenance in a generalization setting. The participants were six boys diagnosed with autism and severe to profound mental retardation, ages 7 to 12 years, who had participated in a previous study on generalization and had shown generalized responding in the extra-therapy setting used in that experiment (Rincover & Koegel, 1975). As in Experiment I by Koegel and Rincover (1977) described earlier in this chapter, one-on-one training trials were conducted with each child and the trainer seated at a table in a small room, and generalization trials were conducted by an unfamiliar adult standing outside on the lawn, surrounded by trees. The two types of imitative response classes consisted of (a) nonverbal imitation (e.g., raising an arm) in response to an imitative model and the verbal instruction "Do this" and (b) touching a body part in response to verbal instructions such as "Touch your nose." After acquiring an imitation response, each child was given additional trials on one of three randomly chosen schedules of reinforcement: CRF, FR 2, or FR 5. Only after these additional training trials were the children taken outside to assess response maintenance. Once outside, trials were conducted until the child's correct responding had decreased to 0%, or was maintained at 80% correct or above for 100 consecutive trials.

Behaviors that were most recently on a CRF schedule in the instructional setting underwent extinction quickly in the generalization setting (see Figure 7). Generalized responding occurred longer for the FR 2 trained behavior, and longer still for behavior that had been shifted to an FR 5 schedule in the instructional setting. The results showed clearly that the schedule of reinforcement in the instructional setting had a predictable effect on responding in the absence of reinforcement in the generalization setting: The thinner the schedule in the instructional setting, the longer the response maintenance in the generalization setting.

Figure 7 Percent of correct responses by three children in a generalization setting as a function of the schedule of reinforcement used during the final sessions in an instructional setting. From "Research on the Differences between Generalization and Maintenance in Extra-Therapy Responding" by R. L. Koegel and A. Rincover, 1977, Journal of Applied Behavior Analysis, 10, p. 8. Copyright 1977 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

The defining feature of all intermittent schedules of reinforcement is that only some responses are reinforced, which means, of course, that some responses go unreinforced. Thus, one possible explanation for the maintenance of responding during periods of extinction for behaviors developed under intermittent schedules is the relative difficulty of discriminating that reinforcement is no longer available. The unpredictability of an intermittent schedule may account for the maintenance of behavior after the schedule is terminated.

Practitioners should recognize that although all indiscriminable contingencies of reinforcement involve intermittent schedules, not all schedules of intermittent reinforcement are indiscriminable. For example, although the FR 2 and FR 5 schedules of reinforcement used by Koegel and Rincover (1977) were intermittent, many learners would soon be able to discriminate whether reinforcement would follow their next response. In contrast, a student whose behavior is being supported by a VR 5 schedule of reinforcement cannot determine whether his next response will be reinforced.
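The difference between a discriminable fixed schedule and an indiscriminable variable schedule can be illustrated with a short simulation. The sketch below is purely illustrative (the function names are invented, and the VR 5 schedule is approximated by a constant 1-in-5 reinforcement probability per response): under FR 5, counting responses fully predicts the next reinforcer; under the variable schedule, no count does.

```python
import random

def fr_schedule(ratio):
    """Fixed ratio: exactly every `ratio`-th response is reinforced."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == ratio:
            count = 0
            return True   # reinforcer delivered; learner can predict this
        return False
    return respond

def vr_schedule(mean_ratio):
    """Approximates a variable ratio: each response is reinforced with
    probability 1/mean_ratio, so reinforcement averages one per
    `mean_ratio` responses but the next one cannot be predicted."""
    def respond():
        return random.random() < 1.0 / mean_ratio
    return respond

random.seed(1)
fr5 = fr_schedule(5)
print([fr5() for _ in range(20)])  # every fifth entry is True -- fully predictable

vr5 = vr_schedule(5)
print([vr5() for _ in range(20)])  # reinforcers scattered unpredictably
```

The contrast makes the text's point concrete: a learner exposed to the first schedule can detect exactly when reinforcement is due (and when it stops), while the second schedule gives no response-by-response signal.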

Delayed Rewards. Stokes and Baer (1977) suggested that not being able to discriminate in what settings a behavior will be reinforced is similar to not being able to discriminate whether the next response will be reinforced. They cited an experiment by Schwarz and Hawkins (1970) in which each day after school a sixth-grade girl was shown videotapes of her behavior in that day's math class and received praise and token reinforcement for improvements in her posture, reducing the number of times she touched her face, and speaking with sufficient volume to be heard by others. Reinforcement after school was contingent on behaviors emitted during math class only, but comparable improvements were noted in spelling class as well. The generalization data were taken from videotapes that were made of the girl's behavior in spelling class but were never shown to her. Stokes and Baer hypothesized that because reinforcement was delayed (the behaviors that produced praise and tokens were emitted during math class but were not rewarded until after school), it may have been difficult for the student to discriminate when improved performance was required for reinforcement. They suggested that the generalization across settings of the target behaviors may have been a result of the indiscriminable nature of the response-to-reinforcement delay.

Delayed rewards and intermittent schedules of reinforcement are alike in two ways: (a) Reinforcement is not delivered each time the target behavior is emitted (only some responses are followed by reinforcement), and (b) there are no clear stimuli to signal the learner which current responses will produce reinforcement. A delayed reward contingency differs from intermittent reinforcement in that instead of delivering the consequence immediately following an occurrence of the target behavior, the reward is provided after a period of time has elapsed (i.e., a response-to-reward delay). Receiving the delayed reward is contingent on the learner having performed the target behavior in the generalization setting during an earlier time period. With an effective delayed reward contingency, the learner cannot discriminate when (or where, depending on the details of the contingency) the target behavior must be emitted in order to receive reinforcement. As a result, to have the best chance to receive the reward later, the learner must "be good all day" (Fowler & Baer, 1981).

Two similar studies by Freeland and Noell (1999, 2002) investigated the effects of delayed rewards on the maintenance of students' mathematics performance. Participants in the second study were two third-grade girls who had been referred by their teacher for help with mathematics. The target behavior for both students was writing answers to single-digit addition problems with sums to 18. The researchers used a multiple-treatment reversal design to compare the effects of five conditions on the number of correct digits written as answers to single-digit addition problems during daily 5-minute work periods (e.g., writing "11" as the answer to "5 + 6 = ?" counted as two digits correct).

• Baseline: Green worksheets; no programmed consequences; students were told they could attempt as many or as few problems as they wanted.

• Reinforcement: Blue worksheets with a goal number at the top indicating the number of correct digits needed to choose a reward in the "goody box"; each student's goal number was the median number of correct digits on the last three worksheets; all worksheets were graded after each session.

• Delay 2: White worksheets with goal number; after every two sessions, one of the two worksheets completed by each student was randomly selected for grading; reinforcement was contingent on meeting the highest median of three consecutive sessions up to that point in the study.

• Delay 4: White worksheets with goal number and same procedures as Delay 2 except that worksheets were not graded until four sessions had been completed, at which time one of each student's previous four worksheets was randomly selected and graded.

• Maintenance: White worksheets with goal number as before; no worksheets were graded and no feedback or rewards for performance were given.

The fact that different colored worksheets were used for each condition in this study made it easy for the students to predict the likelihood of reinforcement. A green worksheet meant no "goody box" (and no feedback at all) no matter how many correct digits were written. However, meeting one's performance criterion on a white worksheet sometimes produced reinforcement. This study provides powerful evidence of the importance of having contingencies in the instructional setting "look like" the contingencies in effect in the generalization setting(s) in two ways: (a) Both students showed large decreases in performance when baseline conditions were reinstated, and immediate drops during a second return to baseline; and (b) the students continued completing math problems at a high rate during the maintenance condition, even though no reinforcement was provided. (See Figure 8.)

When the delayed (indiscriminable) contingencies were implemented, both students demonstrated levels of correct responding at or above levels during the reinforcement phase. When the students were exposed to maintenance conditions, Amy maintained high levels of responding for 18 sessions with variable performance over the final six sessions, and Kristen showed a gradually increasing rate of performance over 24 sessions. The results demonstrated that behavior with an indiscriminable contingency can be maintained at the same rate as with a predictable schedule, and with greater resistance to extinction.

Delayed consequences have been used to promote the setting/situation generalization and response maintenance of a wide range of target behaviors, including academic and vocational tasks by individuals with autism (Dunlap, Koegel, Johnson, & O'Neill, 1987); young children's toy play, social initiations, and selection of healthy snacks (R. A. Baer, Blount, Dietrich, & Stokes, 1987; R. A. Baer, Williams, Osnes, & Stokes, 1984; Osnes, Guevremont, & Stokes, 1986); restaurant trainees' responding appropriately to coworkers' initiations (Grossi et al., 1994); and performance on reading and writing tasks (Brame, 2001; Heward, Heron, Gardner, & Prayzer, 1991).

The effective use of delayed consequences can reduce (or even eliminate in some instances) the learner's ability to discriminate when a contingency is or is not in effect. As a result, the learner needs to "be good" (i.e., emit the target behavior) all the time. If an effective contingency can be made indiscriminable across settings and target behaviors, the learner will also have to "be good" everywhere, with all of his or her relevant skills.

Following are four examples of classroom applications of indiscriminable contingencies involving delayed rewards. Each of these examples also features an interdependent group contingency by making rewards for the whole class contingent on the performance of randomly selected students.

Figure 8 Number of correct digits per minute by two third-grade students while answering math problems during baseline (BL), reinforcement contingent on performance on one randomly selected worksheet after each session (RF), reinforcement contingent on performance on one randomly selected worksheet after every two (D2) or four (D4) sessions, and maintenance conditions. From "Programming for Maintenance: An Investigation of Delayed Intermittent Reinforcement and Common Stimuli to Create Indiscriminable Contingencies" by J. T. Freeland and G. H. Noell, 2002, Journal of Behavioral Education, 11, p. 13. Copyright 2002 by Human Sciences Press. Reprinted by permission.

• Spinners and dice. A procedure such as the following can make academic seatwork periods more effective. Every few minutes (e.g., on a VI 5-minute schedule), the teacher (a) randomly selects a student's name, (b) walks to that student's desk and has the student spin a spinner or roll a pair of dice, (c) counts backward from the worksheet problem or item the student is currently working on by the number shown on the spinner or dice, and (d) gives a token to the student if that problem or item is correct. Students who immediately begin to work on the assignment and work quickly but carefully throughout the seatwork period are most likely to obtain reinforcers under this indiscriminable contingency.
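The four-step spot check above can be sketched as a small simulation. This is a hypothetical illustration only (the function, student names, and data structures are invented, not from the text): it picks a random student, rolls a pair of dice, counts backward from the student's current item by the roll, and awards a token only if that earlier item was answered correctly.

```python
import random

def seatwork_spot_check(worksheets, rng):
    """One spot check under the spinners-and-dice contingency.

    `worksheets` maps each student's name to a list of booleans,
    one per completed item (True = answered correctly), in order.
    Returns (student, checked_item_index, earned_token).
    """
    student = rng.choice(list(worksheets.keys()))  # (a) pick a student at random
    completed = worksheets[student]
    roll = rng.randint(1, 6) + rng.randint(1, 6)   # (b) roll a pair of dice
    index = len(completed) - roll                  # (c) count backward by the roll
    if index < 0:
        return student, None, False                # too few items completed to check
    return student, index, completed[index]        # (d) token if that item is correct

# Hypothetical class data: Jo has worked quickly and accurately,
# while Sam has completed only a few items.
rng = random.Random(7)
work = {"Jo": [True] * 15, "Sam": [True, False, True]}
print(seatwork_spot_check(work, rng))
```

Because neither the student checked, the moment of the check, nor the item checked is predictable, steady and accurate work throughout the period is the only strategy that reliably contacts reinforcement, which is exactly why the contingency is indiscriminable.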

• Story fact recall game. Many teachers devote 20 to 30 minutes per day to sustained silent reading (SSR), a period when students can read silently from books of their choice. A story fact recall game can encourage students to read with a purpose during SSR. At the end of the SSR period the teacher asks several randomly selected students a question about the book they are reading. For example, a student who is reading Chapter 3 of Elizabeth Winthrop's The Castle in the Attic might be asked, What did William give the Silver Knight to eat? (Answer: bacon and toast). A correct answer is praised by the teacher, applauded by the class, and earns a marble in a jar toward a reward for the whole class. Students do not know when they will be called on or what they might be asked (Brame, Bicard, Heward, & Greulich, 2007).

• Numbered heads together. Collaborative learning groups (small groups of students working together on a joint learning activity) can be effective, but teachers should use procedures that motivate all students to participate. A technique called numbered heads together can ensure that all students actively participate (Maheady, Mallete, Harper, & Saca, 1991). Students are seated in heterogeneous groups of three or four, and each student is given the number 1, 2, 3, or 4. The teacher asks the class a question, and each group discusses the problem and comes up with an answer. Next, the teacher randomly selects a number from 1 to 4 and then calls on one or more students with that number to answer. It is important that every person in the group knows the answer to the question. This strategy promotes cooperation within the group rather than competition. Because all students must know the answer, group members help each other understand not only the answer, but also the how and why behind it. Finally, this strategy encourages individual responsibility.

• Intermittent grading. Most students do not receive sufficient practice writing, and when students do write, the feedback they receive is often ineffective. One reason for this may be that providing detailed feedback on a daily composition by each student in the class requires more time and effort than even the most dedicated teacher can give. A procedure called intermittent grading offers one solution to this problem (Heward, Heron, Gardner, & Prayzer, 1991). Students write for 10 to 15 minutes each day, but instead of reading and evaluating every student's paper, the teacher provides detailed feedback on a randomly selected 20 to 25% of students' daily compositions. The students whose papers were graded earn points based on individualized performance criteria, and bonus points are given to the class contingent on the quality of the selected and graded papers (e.g., if the authors of four of the five papers that were graded met their individual criteria). Students' papers that were graded can be used as a source of instructional examples for the next lesson.
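The random selection at the heart of intermittent grading can be sketched in a few lines. This is an illustrative sketch only, with invented names and a hypothetical roster; the point is simply that students cannot predict whose papers will be graded on any given day.

```python
import random

def select_for_grading(papers, fraction=0.25, rng=None):
    """Randomly choose roughly `fraction` of the day's compositions
    to grade in detail. Random selection keeps the grading
    contingency indiscriminable from the students' point of view."""
    rng = rng or random.Random()
    k = max(1, round(len(papers) * fraction))  # at least one paper per day
    return rng.sample(papers, k)

roster = [f"student_{i:02d}" for i in range(1, 21)]  # a class of 20 writers
graded = select_for_grading(roster, fraction=0.25, rng=random.Random(3))
print(graded)  # 5 of the 20 papers drawn for detailed feedback today
```

Because every paper has the same chance of being drawn each day, writing carefully every day is the only strategy that reliably earns points, which is the indiscriminable-contingency logic the example describes.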

The success of a delayed reward tactic in promoting generalization and maintenance rests on (a) the indiscriminability of the contingency (i.e., the learner cannot tell exactly when emitting the target behavior in the generalization setting will produce a reward at a later time), and (b) the learner understanding the relation between his emitting the target behavior at an earlier time and receiving a reward later. A delayed rewards intervention may not be effective with some learners with severe cognitive disabilities.

Guidelines for Programming Indiscriminable Contingencies. Practitioners should consider these guidelines when implementing indiscriminable contingencies:

• Use continuous reinforcement during the initial stages of acquiring new behaviors or when strengthening little-used behaviors.

• Systematically thin the schedule of reinforcement based on the learner's performance. Remember that the thinner the schedule of reinforcement is, the more indiscriminable it is (e.g., an FR 5 schedule is more indiscriminable than an FR 2 schedule), and variable schedules of reinforcement (e.g., VR and VI schedules) are more indiscriminable than fixed schedules (e.g., FR and FI schedules).

• When using delayed rewards, begin by delivering the reinforcer immediately following the target behavior and gradually increase the response-to-reinforcement delay.

• Each time a delayed reward is delivered, explain to the learner that he is receiving the reward for specific behaviors he performed earlier. This helps to build and strengthen the learner's understanding of the rule describing the contingency.

When selecting reinforcers to use during instruction, practitioners should try to use, or eventually shift to, the same reinforcers that the learner will encounter in the generalization environment. The reinforcer itself may serve as a discriminative stimulus for the target behavior (e.g., Koegel & Rincover, 1977).

Set Behavior Traps

Some contingencies of reinforcement are especially powerful, producing substantial and long-lasting behavior changes. Baer and Wolf (1970) called such contingencies behavior traps. Using a mouse trap as an analogy, they described how a householder has only to exert a relatively small amount of behavioral control over the mouse (getting the mouse to smell the cheese) to produce a behavior change with considerable (in this case, complete) generalization and maintenance.

A householder without a trap can, of course, still kill a mouse. He can wait patiently outside the mouse's hole, grab the mouse faster than the mouse can avoid him, and then apply various forms of force to the unfortunate animal to accomplish the behavioral change desired. But this performance requires a great deal of competence: vast patience, super-coordination, extreme manual dexterity, and a well-suppressed squeamishness. By contrast, a householder with a trap need very few accomplishments: If he can merely apply the cheese and then leave the trap where the mouse is likely to smell that cheese, in effect he has guaranteed general(ized) change in the mouse's future behavior.

The essence of a trap, in behavioral terms, is that only a relatively simple response is necessary to enter the trap, yet once entered, the trap cannot be resisted in creating general behavior change. For the mouse, the entry response is merely to smell the cheese. Everything proceeds from there almost automatically. (p. 321, emphasis added)

Behavioral trapping is a fairly common phenomenon that everyone experiences from time to time. Behavior traps are particularly evident in the activities we "just cannot get (or do) enough of." The most effective behavior traps share four essential features: (a) They are "baited" with virtually irresistible reinforcers that "lure" the student to the trap; (b) only a low-effort response already in the student's repertoire is necessary to enter the trap; (c) interrelated contingencies of reinforcement inside the trap motivate the student to acquire, extend, and maintain targeted academic and/or social skills (Kohler & Greenwood, 1986); and (d) they can remain effective for a long time because students show few, if any, satiation effects.

Consider the case of the "reluctant bowler." A young man is persuaded to fill in as a substitute for a friend's bowling team. He has always regarded bowling as uncool. And bowling looks so easy on television that he does not see how it can be considered a real sport. Nevertheless, he agrees to go, just to help out this one time. During the evening he learns that bowling is not as easy as he had always assumed (he has a history of reinforcement for athletic challenges) and that several people he would like to get to know are avid bowlers (i.e., it is a mixed doubles league). Within a week he has purchased a custom-fitted bowling ball, a bag, and shoes; practiced twice on his own; and signed up for the next league season.

The reluctant bowler example illustrates the fundamental nature of behavior traps: easy to enter and difficult to exit. Some naturally existing behavior traps can lead to maladaptive behaviors, such as alcoholism, drug addiction, and juvenile delinquency. The everyday term vicious circle refers to the natural contingencies of reinforcement that operate in destructive behavior traps. Practitioners, however, can learn to create behavior traps that help students develop positive, constructive knowledge and skills. Alber and Heward (1996), who provided guidelines for creating successful traps, gave the following example of an elementary teacher creating a behavior trap that took advantage of a student's penchant for playing with baseball cards.

Like many fifth graders struggling with reading and math, Carlos experiences school as tedious and unrewarding. With few friends of his own, Carlos finds that even recess offers little reprieve. But he does find solace in his baseball cards, often studying, sorting, and playing with them in class. His teacher, Ms. Greene, long ago lost count of the number of times she had to stop an instructional activity to separate Carlos and his beloved baseball cards. Then one day, when she approached Carlos' desk to confiscate his cards in the middle of a lesson on alphabetization, Ms. Greene discovered that Carlos had already alphabetized all the left-handed pitchers in the National League! Ms. Greene realized she'd found the secret to sparking Carlos' academic development.

Carlos was both astonished and thrilled to learn that Ms. Greene not only let him keep his baseball cards at his desk, but also encouraged him to "play with them" during class. Before long, Ms. Greene had incorporated baseball cards into learning activities across the curriculum. In math, Carlos calculated batting averages; in geography, he located the hometown of every major leaguer born in his state; and in language arts, he wrote letters to his favorite players requesting an autographed photo. Carlos began to make significant gains academically, and an improvement in his attitude about school was also apparent.

But school became really fun for Carlos when some of his classmates began to take an interest in his knowledge of baseball cards and all the wonderful things you could do with them. Ms. Greene helped Carlos form a classroom Baseball Card Club, giving him and his new friends opportunities to develop and practice new social skills as they responded to their teacher's challenge to think of new ways to integrate the cards into the curriculum. (p. 285)

Ask People in the Generalization Setting to Reinforce the Behavior

The problem may be simply that the natural community [of reinforcement] is asleep and needs to be awakened and turned on.

—Donald M. Baer (1999, p. 16).

Sometimes a potentially effective contingency of reinforcement in the generalization setting is not operating in a form available to the learner no matter how often or well she performs the target behavior. The contingency is there, but dormant. One solution for this kind of problem is to inform key people in the generalization setting of the value and importance of their attention to the learner's efforts to acquire and use new skills and ask them to help.

For example, a special education teacher who has been helping a student learn how to participate in class discussions by providing repeated opportunities for practice and feedback in the resource room could inform the teachers in the general education classes that the student attends about the behavior change program and ask them to look for and reinforce any reasonable effort by the student to participate in their classrooms. A small amount of contingent attention and praise from these teachers may be all that is needed to achieve the desired generalization of the new skill.

This simple but often effective technique for promoting generalized behavior was evident in the study by Stokes and colleagues (1974) in which staff members responded to the children's waving response by saying, "Hello, (name)." Approximately 20 such generalization probes were conducted each day with each child.

Williams, Donley, and Keller (2000) gave mothers of two preschool children with autism explicit instructions on providing models, response prompts, and reinforcement to their children, who were learning to ask questions about hidden objects (e.g., "What's that?", "Can I see it?").

Contingent praise and attention from significant others can add to the effectiveness of other strategies already in place in the generalization environment. Broden, Hall, and Mitts (1971) showed this in the self-monitoring study. After self-recording had improved eighth-grader Liza's study behavior during history class, the researchers asked Liza's teacher to praise Liza for study behavior whenever he could during history class. Liza's level of study behavior during this Self-Recording Plus Praise condition increased to a mean of 88% and was maintained at levels nearly as high during a subsequent Praise Only condition (see Figure 3).

Teach the Learner to Recruit Reinforcement

Another way to "wake up" a potentially powerful but dormant natural contingency of reinforcement is to teach the learner to recruit reinforcement from significant others. For example, Seymour and Stokes (1976) taught delinquent girls to work more productively in the vocational training area of a residential institution. However, observation showed that staff at the institution gave the girls no praise or positive interaction regardless of the quality of their work. The much-needed natural community of reinforcement to ensure the generalization of the girls' improved work behaviors was not functioning. To get around this difficulty, the experimenters trained the girls to use a simple response that called the attention of staff members to their work. With this strategy, staff praise for good work increased. Thus, teaching the girls an additional response that could be used to recruit reinforcement enabled the target behavior to come into contact with natural reinforcers that would serve to extend and maintain the desired behavior change.

Students of various ages and abilities have learned to recruit teacher and peer attention for performing a wide range of tasks in classroom and community settings: preschoolers with developmental delays for completing pre-academic tasks and staying on task during transitions (Connell, Carta, & Baer, 1993; Stokes, Fowler, & Baer, 1978), as well as students with learning disabilities (Alber, Heward, & Hippler, 1999; Wolford, Alber, & Heward, 2001), students with behavioral disorders (Morgan, Young, & Goldstein, 1983), students with mental retardation performing academic tasks in regular classrooms (Craft, Alber, & Heward, 1998), and secondary students with mental retardation for improved work performance in vocational training settings (Mank & Horner, 1987).

Craft and colleagues (1998) assessed the effects of recruitment training on academic assignments for which students recruited teacher attention. Four elementary students were trained by their special education teacher (the first author) when, how, and how often to recruit teacher attention in the general education classroom. Training consisted of modeling, role-playing, error correction, and praise in the special education classroom. The students were taught to show their work to the teacher or ask for help two to three times per work page, and to use appropriate statements such as "How am I doing?" or "Does this look right?"

Data on the frequency of student recruiting and teacher praise statements were collected during a daily 20-minute homeroom period in a general education classroom. During this period, the general education students completed a variety of independent seatwork tasks (reading, language arts, math) assigned by the general education teacher, while the four special education students completed spelling worksheets assigned by the special education teacher, an arrangement that had been established prior to the experiment. If students needed help with their assignments during homeroom, they took their work to the teacher's desk and asked for help.

The effects of recruitment training on the children's frequency of recruiting and on the number of praise statements they received from the classroom teacher are shown in Figure 9. Recruiting across students increased from mean rates of 0.01 to 0.8 recruiting responses per 20-minute session during baseline to mean rates of 1.8 to 2.7 after training. Teacher praise statements received by the students increased from mean rates of 0.1 to 0.8 praise statements per session during baseline to mean rates of 1.0 to 1.7 after training. The ultimate meaning and outcome of the intervention was in the increased amount and improved accuracy of all four of the students' academic work (see Figure 9).

For a review of research on recruiting and suggestions for teaching children to recruit reinforcement from significant others, see Alber and Heward (2000). Box 2, "Look, Teacher, I'm All Finished!" provides suggestions for teaching students to recruit teacher attention.

Mediate Generalization

Another strategy for promoting generalized behavior change is to arrange for some thing or person to act as a medium that ensures the transfer of the target behavior from the instructional setting to the generalization setting.


Box 2: "Look, Teacher, I'm All Finished!"
Teaching Students to Recruit Teacher Attention

Classrooms are extremely busy places, and even the most conscientious teachers can easily overlook students' important academic and social behaviors. Research shows that teachers are more likely to pay attention to a disruptive student than to one who is working quietly and productively (Walker, 1997). It is hard for teachers to be aware of students who need help, especially low-achieving students who are less likely to ask for help (Newman & Golding, 1990).

Although teachers in general education classrooms are expected to adapt instruction to serve students with disabilities, this is not always the case. Most secondary teachers interviewed by Schumm and colleagues (1995) believed that students with disabilities should take responsibility for obtaining the help they need. Thus, knowing how to politely recruit teacher attention and assistance can help students with disabilities function more independently and actively influence the quality of instruction they receive.

Who Should Be Taught to Recruit?

Although most students would probably benefit from learning to recruit teacher praise and feedback, here are some ideal candidates for recruitment training:

Withdrawn Willamena. Willamena seldom asks a teacher anything. Because she is so quiet and well behaved, her teachers sometimes forget she's in the room.

In-a-Hurry Harry. Harry is usually half-done with a task before his teacher finishes explaining it. Racing through his work allows him to be the first to turn it in. But his work is often incomplete and filled with errors, so he doesn't hear much praise from his teacher. Harry would benefit from recruitment training that includes self-checking and self-correction.

Shouting Shelly. Shelly has just finished her work, and she wants her teacher to look at it—right now! But Shelly doesn't raise her hand. She gets her teacher's attention—and disrupts most of her classmates—by shouting across the room. Shelly should be taught appropriate ways to solicit teacher attention.

Pestering Pete. Pete always raises his hand, waits quietly for his teacher to come to his desk, and then politely asks, "Have I done this right?" But he repeats this routine a dozen or more times in a 20-minute period, and his teachers find it annoying. Positive teacher attention often turns into reprimands. Recruitment training for Pete will teach him to limit the number of times he cues his teachers for attention.

How to Get Started

1. Identify target behaviors. Students should recruit teacher attention for target behaviors that are valued and therefore likely to be reinforced, such as writing neatly and legibly, working accurately, completing assigned work, cleaning up at transitions, and making contributions when working in a cooperative group.

2. Teach self-assessment. Students should self-assess their work before recruiting teacher attention (e.g., Sue asks herself, "Is my work complete?"). After the student can reliably distinguish between complete and incomplete work samples, she can learn how to check the accuracy of her work with answer keys or checklists of the steps or components of the academic skill, or how to spot-check two or three items before asking the teacher to look at it.

3. Teach appropriate recruiting. Teach students when, how, and how often to recruit, and how to respond to the teacher after receiving attention.

• When? Students should signal for teacher attention after they have completed and self-checked a substantial part of their work. Students should also be taught when not to try to get their teacher's attention (e.g., when the teacher is working with another student, talking to another adult, or taking the lunch count).

• How? The traditional hand raise should be part of every student's recruiting repertoire. Other methods of gaining attention should be taught depending on teacher preferences and routines in the general education classroom (e.g., have students signal they need help by standing up a small flag on their desks; expect students to bring their work to the teacher's desk for help and feedback).

• How often? While helping Withdrawn Willamena learn to seek teacher attention, don't turn her into a Pestering Pete. How often a student should recruit varies across teachers and activities (e.g., independent seatwork, cooperative learning groups, whole-class instruction). Direct observation in the classroom is the best way to establish an optimal rate of recruiting; it is also a good idea to ask the regular classroom teacher when, how, and with what frequency she prefers students to ask for help.

• What to say? Students should be taught several statements that are likely to evoke positive feedback from the teacher (e.g., "Please look at my work." "Did I do a good job?" "How am I doing?"). Keep it simple, but teach the student to vary her verbal cues so she will not sound like a parrot.

• How to respond? Students should respond to their teacher's feedback by establishing eye contact, smiling, and saying, "Thank you." Polite appreciation is very reinforcing to teachers and will increase the likelihood of more positive attention the next time.

4. Model and role-play the complete sequence. Begin by providing students with a rationale for recruiting (e.g., the teacher will be happy you did a good job, you will get more work done, your grades might improve). Thinking aloud while modeling is a good way to show the recruiting sequence. While performing each step, say, "Okay, I've finished my work. Now I'm going to check it. Did I put my name on my paper? Yes. Did I do all the problems? Yes. Did I follow all the steps? Yes. Okay, my teacher doesn't look busy right now. I'll raise my hand and wait quietly until she comes to my desk." Have another student pretend to be the regular classroom teacher and come over to you when you have your hand up. Say, "Mr. Patterson, please look at my work." The helper says, "Oh, you did a very nice job." Then smile and say, "Thank you, Mr. Patterson." Role-play with praise and offer corrective feedback until the student correctly performs the entire sequence on several consecutive trials.

5. Prepare students for alternate responses. Of course, not every recruiting attempt will result in teacher praise; some recruiting responses may even be followed by criticism (e.g., "This is all wrong. Pay better attention the next time."). Use role-playing to prepare students for these possibilities and have them practice polite responses (e.g., "Thank you for helping me with this").

6. Promote generalization to the regular classroom. The success of any recruitment training effort depends on the student actually using his or her new skill in the regular classroom.

Adapted from "Recruit it or lose it! Training students to recruit contingent teacher attention" by S. R. Alber and W. L. Heward, 1997, Intervention in School and Clinic, 5, pp. 275–282. Used with permission.

Two tactics for implementing this strategy are contriving a mediating stimulus and teaching the learner to mediate her own generalization through self-management.

Contrive a Mediating Stimulus

One tactic for mediating generalization is to bring the target behavior under the control of a stimulus in the instructional setting that will function in the generalization setting to reliably prompt or aid the learner's performance of the target behavior. The stimulus selected for this important role may exist already in the generalization environment, or it may be a new stimulus added to the instructional program that subsequently goes with the learner to the generalization setting. Whether it is a naturally existing component of the generalization setting or an added element to the instructional setting, to effectively mediate generalization, a contrived mediating stimulus must be (a) made functional for the target behavior during instruction and (b) transported easily to the generalization setting (Baer, 1999). The mediating stimulus is functional for the learner if it reliably prompts or aids the learner in performing the target behavior; the mediating stimulus is transportable if it easily goes with the learner to all important generalization settings.

Naturally existing features of generalization settings that are used as contrived mediating stimuli may be physical objects or people. Van den Pol and colleagues (1981) used paper napkins, a common feature of any fast-food restaurant, as a contrived mediating stimulus. They taught students that a paper napkin was the only place to put food. In this way, the researchers eliminated the added challenge and difficulty of teaching the students to discriminate clean tables from dirty tables, to sit only at clean tables, or to wipe dirty tables, and then programming the generalization and maintenance of those behaviors. By contriving the special use of the napkin, only one response had to be trained, and napkins then served as a mediating stimulus for that behavior.

When choosing a stimulus to be made common to both teaching and social generalization setting(s), practitioners should consider using people. In addition to being a requisite feature of social settings, people are transportable and important sources of reinforcement for many behaviors. A study by Stokes and Baer (1976) is a good example of the potential evocative effect of the presence of a person in the generalization setting who had a functional role in the learner's acquisition of the target behavior in the instructional setting. Two preschool children with learning disabilities learned word-recognition skills while working as reciprocal peer tutors for one another. However, neither child showed reliable generalization of those new skills in nontraining settings until the peer with whom he had learned the skills was present in the generalization setting.

Figure 9 Number of recruiting responses (data points) and teacher praise statements (bars) per 20-minute seatwork session. Target recruiting rate was two to three responses per session. Asterisks indicate when each student was trained in the resource room. From "Teaching Elementary Students with Developmental Disabilities to Recruit Teacher Attention in a General Education Classroom: Effects on Teacher Praise and Academic Productivity" by M. A. Craft, S. R. Alber, and W. L. Heward, 1998, Journal of Applied Behavior Analysis, 31, p. 407. Copyright 1998 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Some contrived mediating stimuli serve as much more than response prompts; they are prosthetic devices that assist the learner in performing the target behavior. Such devices can be especially useful in promoting generalization and maintenance of complex behaviors and extended response chains by simplifying a complex situation. Three common forms are cue cards, photographic activity schedules, and self-operated prompting devices.

Sprague and Horner (1984) gave the students in their study cue cards to aid them in operating a vending machine without another person's assistance. The cue cards, which had food and drink logos on one side and pictures of quarters paired with prices on the other, not only were used during instruction and generalization probes, but also were kept by the students at the end of the program. A follow-up 18 months after the study was completed revealed that five of the six students still carried a cue card and were using vending machines independently.


MacDuff, Krantz, and McClannahan (1993) taught four boys with autism, ages 9 to 14, to use photographic activity schedules when performing domestic-living skills such as vacuuming and table setting and for leisure activities such as using manipulative toys. Prior to training with photographic activity schedules, the boys

were dependent on ongoing supervision and verbal prompts to complete self-help, housekeeping, and leisure activities. . . . In the absence of verbal prompts from supervising adults, it appeared that stimulus control transferred to photographs and materials that were available in the group home. When the study ended, all 4 boys were able to display complex home-living and recreational repertoires for an hour, during which time they frequently changed tasks and moved to different areas of their group home without adults' prompts. Photographic activity schedules . . . became functional discriminative stimuli that promoted sustained engagement after training ceased and fostered generalized responding to new activity sequences and novel leisure materials. (pp. 90, 97)

Numerous studies have shown that learners across a wide range of ages and cognitive levels, including students with severe intellectual disabilities, can learn to use personal audio playback devices to independently perform a variety of academic, vocational, and domestic tasks (e.g., Briggs et al., 1990; Davis, Brady, Williams, & Burta, 1992; Grossi, 1998; Mechling & Gast, 1997; Post, Storey, & Karabin, 2002; Trask-Tyler, Grossi, & Heward, 1994). The popularity of personal music devices such as "Walkman-style" tape players and iPods enables a person to listen to a series of self-delivered response prompts in a private, normalized manner that does not impose on or bother others.

Teach Self-Management Skills

The most potentially effective approach to mediating generalized behavior changes rests with the one element that is always present in every instructional and generalization setting—the learner herself. There are a variety of self-management tactics that people can use to modify their own behavior. The logic of using self-management to mediate generalized behavior changes goes like this: If the learner can be taught a behavior (not the original target behavior, but another behavior—a controlling response from a self-management perspective) that serves to prompt or reinforce the target behavior in all the relevant settings, at all appropriate times, and in all of its relevant forms, then the generalization of the target behavior is ensured. But as Baer and Fowler (1984) warned:

Giving a student self-control responses designed to mediate the generalization of some critical behavior changes does not ensure that those mediating responses will indeed be used. They are, after all, just responses: they, too, need generalization and maintenance just as do the behavior changes that they are meant to generalize and maintain. Setting up one behavior to mediate the generalization of another behavior may succeed—but it may also represent a problem in guaranteeing the generalization of two responses, where before we had only the problem of guaranteeing the generalization of one! (p. 149)

Train to Generalize

If generalization is considered as a response itself, then a reinforcement contingency may be placed on it, the same as with any other operant.

—Stokes and Baer (1977, p. 362)

Training "to generalize" was one of the eight proactive strategies for programming generalized behavior change in Stokes and Baer's (1977) conceptual scheme (which also included the nonstrategy, train and hope). The quotation marks around "to generalize" signified that the authors were hypothesizing about the possibilities of treating "to generalize" as an operant response and that they recognized "the preference of behaviorists to consider generalization to be an outcome of behavioral change, rather than as a behavior itself" (p. 363). Since the publication of Stokes and Baer's review, basic and applied research has demonstrated the pragmatic value of their hypothesis (e.g., Neuringer, 1993, 2004; Ross & Neuringer, 2002; Shahan & Chase, 2002). Two tactics that applied behavior analysts have used are reinforcing response variability and instructing the learner to generalize.

Reinforce Response Variability

Response variability can help a person solve problems. A person who can improvise by emitting a variety of responses is more likely to solve problems encountered when a standard response form fails to obtain reinforcement (e.g., Arnesen, 2000; Marckel, Neef, & Ferreri, 2006; Miller & Neuringer, 2000; Shahan & Chase, 2002). Response variability also may result in behavior that is valued because it is novel or creative (e.g., Goetz & Baer, 1973; Holman, Goetz, & Baer, 1977; Pryor, Haag, & O'Reilly, 1969). Response variability may expose a person to sources of reinforcement and contingencies not accessible by more constricted forms of responding. The additional learning that results from contacting those contingencies further expands the person's repertoire.

One direct way to program desired response generalization is to reinforce response variability when it occurs. The contingency between response variability and reinforcement can be formalized with a lag reinforcement schedule (Lee, McComas, & Jawar, 2002). On a lag reinforcement schedule, reinforcement is contingent on a response being different in some defined way from the previous response (a Lag 1 schedule) or a specified number of previous responses (Lag 2 or more). Cammilleri and Hanley (2005) used a lag reinforcement contingency to increase varied selections of classroom activities by two typically developing girls, who were selected to participate in the study because they spent the vast majority of their time engaged in activities not systematically designed to result in a particular set of skills within the curriculum.
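The decision rule of a lag schedule is simple enough to sketch in a few lines of code. The class below is only an illustrative helper (the name `LagSchedule` and its methods are invented for this sketch, not taken from any cited study): under Lag N, a response meets the reinforcement criterion only if it differs from each of the previous N responses.

```python
class LagSchedule:
    """Illustrative sketch of a lag reinforcement schedule.

    Under a Lag N schedule, a response meets the reinforcement
    criterion only if it differs from each of the previous N responses.
    """

    def __init__(self, lag):
        self.lag = lag      # N: how many prior responses to compare against
        self.history = []   # all responses emitted so far

    def record(self, response):
        """Log a response; return True if it meets the lag criterion."""
        meets_criterion = all(response != prev
                              for prev in self.history[-self.lag:])
        self.history.append(response)
        return meets_criterion
```

On a Lag 1 schedule, for example, the response sequence A, A, B, A would be reinforced on the first, third, and fourth responses; only the immediate repetition (the second A) fails the criterion.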

At the beginning of each 60-minute session, the children were told they could select any activity and could switch activities at any time. A timer sounded every 5 minutes to prompt activity choices. During baseline there were no programmed consequences for selecting any of the activities. Intervention consisted of a lag reinforcement schedule in which the first activity selection and each subsequent novel selection were followed by the teacher handing the student a green card that she could exchange later for 2 minutes of teacher attention (resulting in a Lag 12 contingency, which was reset if all 12 activities were selected within a session).
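One way to read that contingency is as novelty-within-session reinforcement with a reset: a selection earns a token the first time that activity occurs since the last reset, and the pool resets once all 12 activities have been sampled. The function below is a speculative sketch of that reading; the name `novel_selection_tokens` and the `n_activities` parameter are invented for illustration.

```python
def novel_selection_tokens(selections, n_activities=12):
    """Return a list of booleans, one per selection, marking which
    selections would earn a token under a novelty-with-reset contingency.

    A selection earns a token if that activity has not been chosen
    since the last reset; choosing all n_activities resets the pool.
    """
    chosen = set()
    tokens = []
    for activity in selections:
        is_novel = activity not in chosen
        tokens.append(is_novel)
        if is_novel:
            chosen.add(activity)
            if len(chosen) == n_activities:
                chosen = set()  # every activity sampled: contingency resets
    return tokens
```

Under this sketch, a sequence such as blocks, blocks, maze, blocks earns tokens only on the first and third selections, which mirrors how repeated selections of a single preferred activity stop producing reinforcement.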

Activity selections during baseline showed little variability, with both girls showing strong preferences for stackable blocks (Figure 10 shows results for one of the girls). When the lag contingency was introduced, both girls immediately selected and engaged in more diverse activities. The researchers noted that "an indirect but important outcome of this shift in time allocation was a marked increase in the number of academic units completed" (p. 115).

Figure 10 Number of novel activity selections (top panel), percentage of intervals of engagement in programmed and nonprogrammed activities (middle panel), and number of academic units completed (bottom panel). From "Use of a Lag Differential Reinforcement Contingency to Increase Varied Selections of Classroom Activities" by A. P. Cammilleri and G. P. Hanley, 2005, Journal of Applied Behavior Analysis, 38, p. 114. Copyright 2005 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Instruct the Learner to Generalize

The simplest and least expensive of all tactics for promoting generalized behavior change is to "tell the subject about the possibility of generalization and then ask for it" (Stokes & Baer, 1977, p. 363). For example, Ninness and colleagues (1991) explicitly told three middle school students with emotional disturbance to use self-management procedures they had learned in the classroom to self-assess and self-record their behavior while walking from the lunchroom to the classroom. Hughes and colleagues (1995) used a similar procedure to promote generalization: "At the close of each training session, peer teachers reminded participants to self-instruct when they wished to talk to someone" (p. 207). Likewise, at the conclusion of each training session conducted in the special education classroom on how to recruit assistance from peers during cooperative learning groups, Wolford and colleagues (2001) prompted middle school students with learning disabilities to recruit peer assistance at least two times, but not more than four times, during each cooperative learning group in the language arts classroom.

To the extent that generalizations occur and are themselves generalized, a person might then become skilled at generalizing newly acquired skills, or in the words of Stokes and Baer (1977), become a "generalized generalizer."

Modifying and Terminating Successful Interventions

With most successful behavior change programs it is impossible, impractical, or undesirable to continue the intervention indefinitely. Withdrawal of a successful intervention should be carried out in a systematic fashion, guided by the learner's performance of the target behavior in the most important generalization settings. Gradually shifting from the contrived conditions of the intervention to the typical, everyday environment will increase the likelihood that the learner will maintain the new behavior patterns. When deciding how soon and how swiftly to withdraw intervention components, practitioners should consider factors such as the complexity of the intervention, the ease or speed with which behavior changed, and the availability of naturally existing contingencies of reinforcement for the new behavior.

This shift from intervention conditions to the postintervention environment can be made by modifying one or more of the following components, each representing one part of the three-term contingency:

• Antecedents, prompts, or cue-related stimuli

• Task requirements and criteria

• Consequences or reinforcement variables

Although the order in which intervention components are withdrawn may make little or no difference, in most programs it is probably best to make all task-related requirements as similar as possible to those of the postintervention environment before withdrawing significant antecedent or consequence components of the intervention. In this way the learner will be emitting the target behavior at the same level of fluency that will be required after the complete intervention has been withdrawn.

A behavior change program carried out many years ago by a graduate student in one of our classes illustrates how the components of a program can be gradually and systematically withdrawn. An adult male with developmental disabilities took an inordinate amount of time to get dressed each morning (40 to 70 minutes during baseline), even though he possessed the skills needed to dress himself. Intervention began with a construction paper clock hung by his bed with the hands set to indicate the time by which he had to be fully dressed to receive reinforcement. Although the man could not tell time, he could discriminate whether the position of the hands on the real clock nearby matched those on his paper clock. Two task-related intervention elements were introduced to increase the likelihood of initial success. First, he was given fewer and easier clothes to put on each morning (e.g., no belt, slip-on loafers instead of shoes with laces). Second, based on his baseline performance, he was initially given 30 minutes to dress himself, even though the objective of the program was for him to be completely dressed within 10 minutes. An edible reinforcer paired with verbal praise was used first on a continuous schedule of reinforcement. Figure 11 shows how each aspect of the intervention (antecedent, behavior, and consequence) was modified and eventually withdrawn completely, so that by the program’s end the man was dressing himself completely within 10 minutes without the aid of extra clocks or charts or contrived reinforcement other than a naturally existing schedule of intermittent praise from staff members.
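The stepwise tightening of the time criterion in this program can be sketched as a simple changing-criterion rule. This is only an illustrative sketch: the 5-minute step size and the meet-the-criterion-then-tighten rule below are assumptions for the example, not the exact adjustment schedule used in the actual program (which modified components as shown in Figure 11).

```python
def next_time_allowance(current_minutes, met_criterion, step=5, terminal=10):
    """Tighten the dressing-time criterion toward the terminal goal (10 min)
    whenever the learner meets the current criterion; otherwise hold steady."""
    if met_criterion:
        return max(terminal, current_minutes - step)
    return current_minutes

# Example: starting at the 30-minute allowance based on baseline performance
allowance = 30
for dressed_in_time in [True, True, False, True, True]:
    allowance = next_time_allowance(allowance, dressed_in_time)
print(allowance)  # 10
```

Holding the criterion on sessions where the learner misses it (rather than loosening it) keeps the contingency moving in one direction toward the terminal objective.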

Figure 11 An example of modifying and withdrawing components of an independent morning dressing program for an adult with developmental disabilities to facilitate maintenance and generalization.

Phase                                    1    2    3    4    5    6    7    8
Antecedents (cues, prompts)              A    A    A    A    A    B    B    B
Behavior (task criteria/modifications)   A    A    B    B/C  B/D  B/D  D/E  D/E
Consequences (reinforcers, punishers)    A    B    B    C    C    C    C    D

Antecedents: A = mock clock showing when he must be dressed; B = no clock. Behavior: A = fewer clothes; B = full set of clothes; C = time gradually reduced; D = criterion time limit; E = variety of clothes. Consequences: A = edible reinforcer; B = token reinforcer; C = chart (self-record); D = praise on intermittent schedule.

Rusch and Kazdin (1981) described a method for systematically withdrawing intervention components while simultaneously assessing response maintenance that they called “partial-sequential withdrawal.” Martella, Leonard, Marchand-Martella, and Agran (1993) used a partial-sequential withdrawal of various components of a self-monitoring intervention they had implemented to help Brad, a 12-year-old student with mild mental retardation, reduce the number of negative statements (e.g., “I hate this @#!%ing calculator,” “Math is crappy”) he made during classroom activities. The self-management intervention consisted of Brad (a) self-recording negative statements on a form during two class periods, (b) charting the number on a graph, (c) receiving his choice from a menu of “small” reinforcers (items costing 25 cents or less), and (d) choosing a “large” reinforcer (more than 25 cents) when his self-recorded data were in agreement with the trainer’s and at or below a gradually reducing criterion level for four consecutive sessions. After the frequency of Brad’s negative statements was reduced, the researchers began a four-phase partial-sequential withdrawal of the intervention. In the first phase, charting and earning of “large” reinforcers were withdrawn; in the second phase, Brad was required to have zero negative statements in both periods to receive a daily “small” reinforcer; in the third phase, Brad used the same self-monitoring form for both class periods instead of one form for each period as before, and the small reinforcer was no longer provided; and in the fourth phase (the follow-up condition), all components of the intervention were withdrawn except for the self-monitoring form without a criterion number highlighted. Brad’s negative statements remained low throughout the gradual and partial withdrawal of the intervention (see Figure 12).
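The “large” reinforcer contingency in Brad’s program can be sketched as a check over recent sessions. Note the assumptions here: the sketch treats agreement as an exact match between Brad’s count and the trainer’s count, whereas the actual study may have used a tolerance band, and the session data are hypothetical.

```python
def earns_large_reinforcer(sessions, criterion, run_length=4):
    """True when the most recent `run_length` sessions each show (a) agreement
    between the self-recorded and trainer counts and (b) a count at or below
    the current (gradually reducing) criterion."""
    if len(sessions) < run_length:
        return False
    return all(
        self_count == trainer_count and self_count <= criterion
        for self_count, trainer_count in sessions[-run_length:]
    )

# Hypothetical sessions as (self_count, trainer_count) pairs, criterion of 2
print(earns_large_reinforcer([(2, 2), (1, 1), (0, 0), (1, 1)], criterion=2))  # True
print(earns_large_reinforcer([(2, 2), (1, 2), (0, 0), (1, 1)], criterion=2))  # False
```

Requiring both accuracy (agreement with the trainer) and a criterion-level count ties the larger reinforcer to honest self-recording as well as to improved behavior.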

A word of caution is in order regarding the termination of successful behavior change programs. Achieving socially significant improvements in behavior is a defining purpose of applied behavior analysis. Additionally, those improved behaviors should be maintained and should show generalization to other relevant settings and behaviors. In most instances, achieving optimal generalization of the behavior change will require most, if not all, of the intervention components to be withdrawn. However, practitioners, parents, and others responsible for helping children learn important behaviors are sometimes more concerned with whether and with how a potentially effective intervention will eventually be withdrawn than they are with whether it will produce the needed behavior change. Considering how a proposed intervention will lend itself to eventual withdrawal or blending into the natural environment is important and consistent with everything recommended throughout this book. And clearly, when the choice is between two or more interventions of potentially equal effectiveness, the intervention that is most similar to the natural environment and the easiest to withdraw and terminate should be given first priority. However, an important behavior change should not go unmade because complete withdrawal of the intervention required to achieve it may never be possible. Some level of intervention may always be required to maintain certain behaviors, in which case attempts to continue the necessary programming must be made.

The Sprague and Horner (1984) study on teaching generalized vending machine use provides another example of this point. The six students with moderate to severe mental retardation who participated in the program were given cue cards to aid them in operating a vending machine without another person’s assistance. The cue cards, which had food and drink logos on one side and pictures of quarters paired with prices on the other, not only were used during instruction and generalization probes, but also were kept by the students at the end of the program. Five of the six students still carried a cue card and were using vending machines independently 18 months after the study had ended.

Figure 12 Number of negative (solid data points) and positive (open data points) statements in two class periods by an adolescent boy with disabilities during baseline, self-monitoring, and partial-sequential withdrawal conditions. Horizontal lines during self-monitoring condition show changing criteria for teacher-delivered reinforcement. (Axes: number of statements per minute across sessions; conditions shown are Baseline, Self-Monitoring [A–E], Partial-Sequential Withdrawal [I–III], and Follow-Up Probes at 2, 4, and 8 weeks.)
From “Self-Monitoring Negative Statements” by R. Martella, I. J. Leonard, N. E. Marchand-Martella, and M. Agran, 1993, Journal of Behavioral Education, 3, p. 84. Copyright 1993 by Human Sciences Press. Reprinted by permission.

Guiding Principles for Promoting Generalized Outcomes

Regardless of the specific tactics selected and applied, we believe that practitioners’ efforts to promote generalized behavior change will be enhanced by adherence to five guiding principles:

• Minimize the need for generalization as much as possible.

• Conduct generalization probes before, during, and after instruction.

• Involve significant others whenever possible.

• Promote generalization with the least intrusive, least costly tactics possible.

• Contrive intervention tactics as needed to achieve important generalized outcomes.

Minimize the Need for Generalization

Practitioners should reduce the need for generalization to untaught skills, settings, and situations as much as possible. Doing so requires thoughtful, systematic assessment of what behavior changes are most important. Practitioners should prioritize the knowledge and skills that will most often be required of the learner and the settings and situations in which the learner will most often benefit from using those skills. In addition to the environment(s) in which the learner is currently functioning, practitioners should consider the environments in which the learner will function in the immediate future and later in life.

The most critical behavior changes should not be relegated to the not-for-certain technology of generalization. The most important skill-setting-stimulus combinations should always be taught directly and, when possible, taught first. For example, a training program to teach a young adult with disabilities to ride the public bus system should use the bus routes the person will take most often (e.g., to and from her home, school, job site, and community recreation center) as teaching examples. Instead of providing direct instruction on routes to destinations the learner may visit only occasionally after training, use those routes as generalization probes. Achieving a high level of response maintenance on the trained routes will still be a challenge.
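The teach-first, probe-later prioritization described above can be sketched as a simple partition of skill-setting combinations by expected frequency of use. The route names and weekly trip counts below are hypothetical, introduced only for illustration.

```python
def split_teach_and_probe(routes, num_to_teach):
    """Teach the most frequently needed routes directly; reserve the rest
    as generalization probes."""
    ranked = sorted(routes, key=lambda r: r[1], reverse=True)
    return ranked[:num_to_teach], ranked[num_to_teach:]

# Hypothetical routes as (destination, expected weekly trips)
routes = [("home-school", 10), ("home-job", 5), ("home-mall", 1), ("home-rec-center", 3)]
teach, probe = split_teach_and_probe(routes, num_to_teach=3)
print([r[0] for r in teach])  # ['home-school', 'home-job', 'home-rec-center']
print([r[0] for r in probe])  # ['home-mall']
```

Ranking by expected use keeps direct instruction focused on the combinations the learner will need most, leaving the low-frequency cases to be assessed as generalization probes.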

Probe for Generalization Before, During, and After Instruction

Generalization probes should be conducted prior to, during, and after instruction.

Probe Before Instruction

Generalization probes conducted before instruction begins may reveal that the learner already performs some or all of the needed behaviors in the generalization setting, thereby reducing the scope of the teaching task. It is a mistake to assume that because a learner does not perform a particular behavior in the instructional setting, she does not or cannot do it in the generalization setting(s). Preinstruction generalization probe data are the only objective basis for knowing that the learner’s performance of the target behavior after instruction is in fact a generalized outcome.

Probes before instruction also enable observation of the contingencies that operate in the generalization setting. Knowledge of those contingencies may contribute to more effective treatment or instruction.

Probe During Instruction

Generalization probes during instruction reveal if and when generalization has occurred and if and when instruction can be terminated or shifted in focus from acquisition to maintenance. For example, a teacher who finds that a student can solve untaught examples of a particular type of algebraic equation after receiving instruction on just a few examples, and who then shifts instruction to the next type of equation in the curriculum, will cover more of the algebra program effectively than will a teacher who continues to present additional examples.

Results of probes during instruction also show whether generalization is not occurring, thereby indicating that a change in instructional strategies is needed. For example, when Sprague and Horner (1984) observed that one student’s poor performance on the first generalization probe was caused by a ritualistic pattern of inserting coins, they incorporated repeated practice on the coin insertion step during a training session, and the student’s performance on the generalization task improved greatly.

Probing can often be made more efficient by contriving opportunities for the learner to use her new knowledge or skill. For example, instead of waiting for (and perhaps missing) naturally occurring opportunities for the learner to use her new conversational skills in the generalization environment, a practitioner could enlist the assistance of a “confederate” peer to approach the learner. Ninness, Fuerst, and Rutherford (1991) used contrived generalization probes in which students were provoked or distracted to test the extent to which they used a set of self-management skills.

Contrived generalization probes can also be used as primary measures of acquisition and generalization. For example, Miltenberger and colleagues (2005) contrived opportunities to measure and teach children’s use of gun safety skills by placing a gun in home and school locations where the children would find it. If a child did not execute the target safety skills when finding a gun (i.e., do not touch it, get away from the gun, and tell an adult), the trainer entered the room and conducted an in situ training session, asking the child what he should have done and rehearsing the entire sequence five times.

Probe After Instruction

Generalization probes after instruction has ended reveal the extent of response maintenance. How long probes should continue after instruction has ended depends on factors such as the severity of the target behavior, the importance of the behavior change to the person’s quality of life, and the strength and consistency of the response maintenance obtained by probes to date. The need for long-term assessment of maintenance is especially critical for severe behavior problems. In some cases, maintenance probes over many months and several years may be indicated (e.g., Derby et al., 1997; Foxx, Bittle, & Faw, 1989; Wagman, Miltenberger, & Woods, 1995).

If conducting systematic generalization and maintenance probes seems too difficult or too contrived, the practitioner should consider the relative importance of the target behavior to the student or client. If a behavior is important enough to target for intervention, then assessing the generalized outcomes of that intervention is worth whatever effort that assessment requires.

Involve Significant Others

All persons are potential teachers of all sorts of behavior changes. Just because you are designated as “teacher” or “behavior analyst” does not mean that you have an exclusive franchise on the ability to make deliberate behavior changes. In fact, there is no possibility of such a franchise. Everyone in contact contributes to everyone else’s behavior, both to its changes and its maintenance.

— Donald M. Baer (1999, p. 12)

Teaching for generalized outcomes is a big job, and practitioners should try to get as much help as they can. People are almost always around where and when important behaviors need to be prompted and reinforced; and when social behaviors are targeted, people are there by definition.

In virtually every behavior change program, people other than the participant and the behavior analyst are involved, and their cooperation is crucial to the success of the program. Foxx (1996) stated that in programming successful behavior change interventions, “10% is knowing what to do; 90% is getting people to do it. . . . Many programs are unsuccessful because these percentages have been reversed” (p. 230). Although Foxx was referring to the challenge and importance of getting staff to implement programs with the consistency and fidelity needed for success, the same holds for the involvement of significant others.

Baer (1999) suggested identifying others who will or may be involved in a behavior change program as active supporters or tolerators. An active supporter is someone naturally present in the generalization setting who helps promote the generalization and maintenance of the target behavior by doing specific things. Active supporters help facilitate the desired generalized outcomes by arranging opportunities for the learner to use or practice the new skill, giving cues and response prompts for the behavior, and providing reinforcement for performance of the target behavior.

Active supporters for a behavior change program to teach independent eating to a child with severe disabilities might include one or two key people in the school’s cafeteria, a volunteer or aide who works regularly with the child, the child’s parents, and an older sibling. These people are vital parts of the teaching team. If optimal generalization is to occur, the active supporters must see to it that the learner has many opportunities to use the new skill in the generalization environments they share, and the natural reinforcers they control in those environments (e.g., praise, touch, smiles, companionship) must be used as consequences for the target behavior. It is not necessary, and probably not desirable, to limit an active supporters list to school staff, family members, and peers; they may not be regularly available in all the environments and situations in which generalization is desired.

A tolerator is someone in the generalization setting who agrees not to behave in ways that would impede the generalization plan. Tolerators should be informed that the learner will be using a new skill in the generalization environment and asked to be patient. In an independent eating program a list of tolerators would likely include some members of the child’s family, school cafeteria staff, and peers. Beyond the home and school environments the behavior analyst should consider the possible role of the general public with whom the child may find herself sharing a table or dining area in a public restaurant. The learner’s initial sloppiness and slowness (she may always be sloppier and slower than most people) as she makes her first attempts beyond the familiar contingencies of home and school might result in various responses from strangers that could punish the new eating skills. Being stared at, laughed at, talked about, told to hurry up, or even offered assistance could reduce the possibility of generalization. Certainly, the behavior analyst can inform various school staff and family members of the ongoing eating program and request that they not interfere with the child’s attempts to eat independently. But the public at large is another issue. It is impossible to inform everyone of the program. However, by considering the types of intolerant behaviors that the learner may encounter in the generalization setting, the teaching program can be constructed to include practice under such intolerant conditions. Instructional trials might be contrived to reinforce the learner for ignoring rude remarks and continuing to eat independently.

Use the Least Intrusive, Least Costly Tactics Possible

Behavior analysts should use less intrusive and less costly tactics to promote generalization before using more intrusive and costly tactics. As noted previously, simply reminding students to use their new skills in the generalization setting is the easiest and least expensive of all methods that might promote generalization. Although practitioners should never assume that telling the learner to generalize will produce the desired outcomes, neither should they fail to include such a simple and cost-free tactic. Similarly, incorporating some of the most relevant features of the generalization setting into the instructional setting (i.e., programming common stimuli) often helps produce the needed generalization and is less costly than conducting instruction in the natural setting (e.g., Neef, Lensbower et al., 1990; van den Pol et al., 1981).

Not only is using less costly tactics good conservation of the limited resources available for teaching, but less intrusive interventions with fewer moving parts are also easier to withdraw. Systematic generalization probes will determine whether generalization has occurred and whether more elaborate and intrusive interventions and supports are needed.

Contrive Intervention Tactics as Needed to Achieve Important Generalized Outcomes

A practitioner should not be so concerned about intrusiveness that she fails to implement a potentially effective intervention or procedures that will achieve important outcomes for the learner. Therefore, if necessary, practitioners should disregard the previous guideline and contrive as many instruction and generalization tactics as necessary to enable the learner to generalize and maintain critical knowledge and skills.

Rather than lament the lack of generalization or blame the learner for his inability to show generalized behavior changes, the behavior analyst should work to arrange whatever socially valid contingencies may be needed to extend and maintain the target behavior.

Some Final Words of Wisdom from Don Baer

The most difficult and important challenge facing behavioral practitioners is helping learners achieve generalized change in socially significant behaviors. A behavior change—no matter how important initially—is of little value to the learner if it does not last over time, is not emitted in appropriate settings and situations, or occurs in restricted form when varied topographies are desired.

Research during the past 30 years has developed and advanced the knowledge base of what Stokes and Baer (1977) described as an “implicit technology of generalization” into an increasingly explicit and effective set of strategies and tactics for promoting generalized behavior change. Knowledge of these methods, combined with knowledge of the basic principles and behavior change tactics described throughout this book, provides behavior analysts with a powerful approach for helping people enjoy healthy, happy, and productive lives.

We end this chapter with a dually wise observation by Don Baer (1999), in which he pointed out a fundamental truth about the relation between a person’s experience (in this case, the nature of a lesson) and what is learned or not learned from that experience. Like Skinner before him, Baer wisely reminded us not to blame the learner for not behaving as we think he should.

Learning one aspect of anything never means that you know the rest of it. Doing something skillfully now never means that you will always do it well. Resisting one temptation consistently never means that you now have character, strength, and discipline. Thus, it is not the learner who is dull, learning disabled, or immature, because all learners are alike in this regard: no one learns a generalized lesson unless a generalized lesson is taught. (p. 1) [emphasis in original]

Summary

Generalized Behavior Change: Definitions and Key Concepts

1. Generalized behavior change has taken place if trained behavior occurs at other times or in other places without having to be retrained completely in those times or places, or if functionally related behaviors occur that were not taught directly.

2. Response maintenance refers to the extent to which a learner continues to perform a behavior after a portion or all of the intervention responsible for the behavior’s initial appearance in the learner’s repertoire has been terminated.

3. Setting/situation generalization refers to the extent to which a learner emits the target behavior in settings or situations that are different from the instructional setting.

4. The instructional setting is the environment where instruction occurs and encompasses all aspects of the environment, planned or unplanned, that may influence the learner’s acquisition and generalization of the target behavior.

5. A generalization setting is any place or stimulus situation that differs from the instructional setting in some meaningful way and in which performance of the target behavior is desired.

6. Response generalization refers to the extent to which a learner emits untrained responses that are functionally equivalent to the trained response.

7. Some interventions yield significant and widespread generalized effects across time, settings, and other behaviors; others produce circumscribed changes in behavior with limited endurance and spread.

8. Undesirable setting/situation generalization takes two common forms: overgeneralization, in which the behavior has come under control of a stimulus class that is too broad, and faulty stimulus control, in which the behavior comes under the control of an irrelevant antecedent stimulus.

9. Undesired response generalization occurs when any of a learner’s untrained but functionally equivalent responses produce undesirable outcomes.

10. Other types of generalized outcomes (e.g., stimulus equivalence, contingency adduction, and generalization across subjects) do not fit easily into the categories of response maintenance, setting/situation generalization, and response generalization.

11. The generalization map is a conceptual framework for combining and categorizing the various types of generalized behavior change (Drabman, Hammer, & Rosenbaum, 1979).

Planning for Generalized Behavior Change

12. The first step in promoting generalized behavior changes is to select target behaviors that will meet naturally existing contingencies of reinforcement.

13. A naturally existing contingency is any contingency of reinforcement (or punishment) that operates independent of the behavior analyst’s or practitioner’s efforts, including socially mediated contingencies contrived by other people and already in effect in the relevant setting.

14. A contrived contingency is any contingency of reinforcement (or punishment) designed and implemented by a behavior analyst to achieve the acquisition, maintenance, and/or generalization of a targeted behavior change.

15. Planning for generalization includes identifying all the desired behavior changes and all the environments in which the learner should emit the target behavior(s) after direct training has ceased.

16. Benefits of developing the planning lists include a better understanding of the scope of the teaching task and an opportunity to prioritize the most important behavior changes and settings for direct instruction.

Strategies and Tactics for Promoting Generalized Behavior Change

17. Researchers have developed and advanced what Stokes and Baer (1977) called an “implicit technology of generalization” into an increasingly explicit and effective set of methods for promoting generalized behavior change.

18. The strategy of teaching sufficient examples requires teaching a subset of all of the possible stimulus and response examples and then assessing the learner’s performance on untrained examples.

19. A generalization probe is any measurement of a learner’s performance of a target behavior in a setting and/or stimulus situation in which direct training has not been provided.

20. Teaching sufficient stimulus examples involves teaching the learner to respond correctly to more than one example of an antecedent stimulus and probing for generalization to untaught stimulus examples.

21. As a general rule, the more examples the practitioner uses during instruction, the more likely the learner will be to respond correctly to untrained examples or situations.

22. Having the learner practice a variety of response topographies helps ensure the acquisition of desired response forms and promotes response generalization. Often called multiple exemplar training, this tactic typically incorporates numerous stimulus examples and response variations.

23. General case analysis is a systematic method for selecting teaching examples that represent the full range of stimulus variations and response requirements in the generalization setting.

24. Negative, or “don’t do it,” teaching examples help learners identify stimulus situations in which the target behavior should not be performed.

25. Minimum difference negative teaching examples, which share many characteristics with positive teaching examples, help eliminate “generalization errors” due to overgeneralization and faulty stimulus control.

26. The greater the similarity between the instructional setting and the generalization setting, the more likely the target behavior will be emitted in the generalization setting.

27. Programming common stimuli means including in the instructional setting stimulus features typically found in the generalization setting. Practitioners can identify possible stimuli to make common by direct observation in the generalization setting and by asking people who are familiar with the generalization setting.

28. Teaching loosely—randomly varying noncritical aspects of the instructional setting within and across teaching sessions—(a) reduces the likelihood that a single or small group of noncritical stimuli will acquire exclusive control over the target behavior and (b) makes it less likely that the learner’s performance will be impeded or “thrown off” by the presence of a “strange” stimulus in the generalization setting.

29. A newly learned behavior may fail to contact an existing contingency of reinforcement because it has not been taught well enough. The solution for this kind of generalization problem is to teach the learner to emit the target behavior at the rate, accuracy, topography, latency, duration, and/or magnitude required by the naturally occurring contingencies of reinforcement.

30. The use of intermittent schedules of reinforcement and delayed rewards can create indiscriminable contingencies, which promote generalized responding by making it difficult for the learner to discriminate whether the next response will produce reinforcement.

31. Behavior traps are powerful contingencies of reinforcement with four defining features: (a) They are “baited” with virtually irresistible reinforcers; (b) only a low-effort response already in the student’s repertoire is needed to enter the trap; (c) interrelated contingencies of reinforcement inside the trap motivate the student to acquire, extend, and maintain targeted skills; and (d) they can remain effective for a long time.

32. One way to wake up an existing but inoperative contingency of reinforcement is to ask key people in the generalization setting to attend to and praise the learner’s performance of the target behavior.

33. Another tactic for waking up a natural contingency of reinforcement is to teach the learner how to recruit reinforcement in the generalization setting.

34. One tactic for mediating generalization is to bring the target behavior under the control of a contrived stimulus in the instructional setting that will reliably prompt or aid the learner’s performance of the target behavior in the generalization setting.

35. Teaching a learner self-management skills with which he can prompt and maintain targeted behavior changes in all relevant settings at all times is the most potentially effective approach to mediating generalized behavior changes.

36. The strategy of training to generalize is predicated on treating “to generalize” as an operant response class that, like any other operant, is selected and maintained by contingencies of reinforcement.

37. One tactic for promoting response generalization is to reinforce response variability. On a lag reinforcement schedule, reinforcement is contingent on a response being different in some defined way from the previous response (a Lag 1 schedule) or a specified number of previous responses (Lag 2 or more).

38. The simplest and least expensive tactic for promoting generalized behavior change is to tell the learner about the usefulness of generalization and then instruct him to do so.

Modifying and Terminating Successful Interventions

39. With most successful behavior change programs it is impossible, impractical, or undesirable to continue the intervention indefinitely.

40. The shift from formal intervention procedures to a normal everyday environment can be accomplished by gradually withdrawing elements comprising the three components of the training program: (a) antecedents, prompts, or cue-related stimuli; (b) task modifications and criteria; and (c) consequences or reinforcement variables.

41. An important behavior change should not go unmade because complete withdrawal of the intervention required to achieve it may never be possible. Some level of intervention may always be required to maintain certain behaviors, in which case attempts must be made to continue necessary programming.

Guiding Principles for Promoting Generalized Outcomes

42. Efforts to promote generalized behavior change will be enhanced by adhering to five guiding principles:

• Minimize the need for generalization as much as possible.

• Conduct generalization probes before, during, and after instruction.

• Involve significant others whenever possible.

• Promote generalized behavior change with the least intrusive, least costly tactics possible.

• Contrive intervention tactics as needed to achieve important generalized outcomes.


Ethical Considerations for Applied Behavior Analysts

Key Terms

confidentiality
conflict of interest

ethical codes of behavior
ethics

informed consent

Behavior Analyst Certification Board® BCBA® & BCABA® Behavior Analyst Task List©, Third Edition

Content Area 1: Ethical Considerations

1-1 Solicit or otherwise influence clients only through the use of truthful and accurate representations of intervention efficacy and one's professional competence in applied behavior analysis.

1-2 Practice within one's limits of professional competence in applied behavior analysis; obtain consultation, supervision, or training; or make referrals as necessary.

1-3 Maintain competence by engaging in ongoing professional development activities.

1-4 Obtain informed consent within applicable legal and ethical standards.

1-5 Assist the client with identifying lifestyle or systems change goals and targets for behavior change that are consistent with the following:

(a) The applied dimension of applied behavior analysis (Baer, Wolf, & Risley, 1968)

(b) Applicable laws

(c) The ethical and professional standards of the profession of applied behavioranalysis.

1-6 Initiate, continue, modify, or discontinue behavior analysis services only when the risk–benefit ratio of doing so is lower than the risk–benefit ratio for taking alternative actions.

1-7 Identify and reconcile contingencies that compromise the practitioner–client covenant, including relationships among the practitioner, the client, and other parties.

This chapter was written by José A. Martinez-Diaz, Thomas R. Freeman, Matthew Normand, and Timothy E. Heron.

From Chapter 29 of Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


Ethical dilemmas surface frequently in educational and clinical practice. Consider these situations:

• A person residing in a private, rural, for-profit community-based home for persons with developmental disabilities approaches the director and states that he wants to move to an apartment in a nearby town. Such a move would represent a loss of income to the agency, might generate additional transition costs (e.g., moving expenses, future on-site supervision), and has the potential to be dangerous to the resident given the area of town that the person could afford. How could the director respond ethically to the resident's inquiry about moving without being biased by a conflict of interest?

• Julian, a student with severe disabilities, engages in frequent and severe self-injurious behavior (SIB) (e.g., head banging, eye gouging). Many positive and positive-reductive approaches have been attempted to reduce his SIB, but none have been successful. The support coordinator recommends the Self-Injurious Behavior Inhibition System (SIBIS) as an option, but the parents object because they fear that electrical shock will hurt their son. Given that documented positive attempts have failed, is it an appropriate ethical course of action to recommend that SIBIS treatment be initiated?

• During the course of an annual individualized education program (IEP) meeting, Ms. Dougherty, a first-year teacher, perceives that a school district administrator is trying to "steer" the parents of a student with emotional disabilities into accepting a revised IEP without the provision of school-based applied behavior analysis services recommended by the majority of other team members. Ms. Dougherty hypothesizes that the administrator's position is based on the added costs the financially strapped school district would bear if it provided these services. As a first-year teacher, Ms. Dougherty is concerned that if she speaks up, she might lose the favor of her principal and maybe her job. If she remains silent, the student might not receive needed services. How might Ms. Dougherty serve as an advocate for the student, but not lose her position on the faculty?

Given each of these situations, how is the behavior analyst to respond ethically? This chapter addresses the issues surrounding and underlying these situations. We begin by discussing what ethics is and why it is important. Next, we present ethical behavior and behavior analysis standards for professional practice as a way for practitioners to navigate ethical dilemmas that arise in day-to-day practice. Finally, we discuss ethical issues within the context of client services (e.g., informed consent and conflict of interest).


1-8 Use the most effective assessment and behavior change procedures within applicable ethical standards, taking into consideration the guideline of minimal intrusiveness of the procedure to the client.

1-9 Protect confidentiality.

1-10 Truthfully and accurately represent one's contributions and those of others to the practice, discipline, and profession of applied behavior analysis.

1-11 Ensure that the dignity, health, and safety of one's client are fully protected at all times.

1-12 Give preference to assessment and intervention methods that have been scientifically validated, and use scientific methods to evaluate those that have not yet been scientifically validated.

Content Area 8: Selecting Intervention Outcomes and Strategies

8-2 Make recommendations to the client regarding target outcomes based on such factors as: client preferences, task analysis, current repertoires, supporting environments, constraints, social validity, assessment results, and best available scientific evidence.

8-4 Make recommendations to the client regarding intervention strategies based on such factors as: client preferences, task analysis, current repertoires, supporting environments, constraints, social validity, assessment results, and best available scientific evidence.

© 2006 The Behavior Analyst Certification Board, Inc.® (BACB®), all rights reserved. A current version of this document may be found at www.bacb.com. Requests to reprint, copy, or distribute this document and questions about this document must be submitted directly to the BACB.


What Is Ethics and Why Is It Important?

Definition of Ethics

Ethics refers to behaviors, practices, and decisions that address three basic and fundamental questions: What is the right thing to do? What is worth doing? What does it mean to be a good behavior analyst? (Reich, 1988; Smith, 1987, 1993). Guided by these questions, personal and professional practices are conducted for the principal purpose of helping others to improve their physical, social, psychological, familial, or personal condition. As Corey, Corey, and Callanan (1993) stated, "The basic purpose of practicing ethically is to further the welfare of the client" (p. 4).

What Is the Right Thing to Do?

Addressing the question of what is the right thing to do leads to an examination of several areas of influence: our personal histories of what is right and wrong; the context within which applied behavior analysis is practiced, including legal and illegal versus ethical and unethical issues; and codified ethical rules of conduct. Further, how to determine what is the right thing to do is derived from established principles, methods, and decisions that have been used successfully by other applied behavior analysts and professionals primarily to ensure the welfare of those entrusted to our care, the well-being of the profession, and ultimately the survival of the culture (Skinner, 1953, 1971).

At the outset, we would state that what is ethical (right) or unethical (wrong) behavior is ultimately related to cultural practices. Therefore, it is subject to differences across cultures and the passage of time. What might be acceptable in one culture may be unacceptable in other cultures. What might be acceptable at one time may be completely unacceptable 20 years later.

Personal Histories. When deciding how to proceed with an assessment or intervention, all applied behavior analysts are influenced by their personal histories of making decisions in similar situations. Presumably, the analyst's training and experiences will balance negative biases or predispositions that may carry over from his or her personal or cultural background. For example, assume that a behavior analyst has a sibling who engages in self-injurious behavior (SIB). Further, assume that the practitioner faces a decision on how to assist another family with a child who exhibits severe SIB. The practitioner may be influenced by his recollection of procedures his parents used (or did not use) to help his sibling. Did they use punishment? Include the sibling in family gatherings? Seek competent programs and services? However, given the analyst's training in a range of assessments and interventions, the potential influence of such personal experience is more likely to be overridden in favor of an unbiased exploration of clinically appropriate treatment alternatives.

Further, the practitioner's cultural or religious upbringing may influence decisions about the right course of action. Prior to formal training in behavior analysis, a practitioner reared in a family culture that supported "Spare the rod and spoil the child" may manage severe behavior differently than if that same practitioner was reared in a family culture that supported a "He'll grow out of it" philosophy. Subsequent to training, however, a more unbiased and informed set of principles is likely to surface.

Finally, a person's professional training and experiences with other cases involving severe behavior challenges will likely influence whether Method A (e.g., a differential reinforcement procedure) is favored over Method B (e.g., overcorrection). The behavior analyst must recognize that her personal history is likely to be a factor when making a decision. Still, the practitioner needs to be careful lest her personal background lead her to select inappropriate or ineffective interventions. To counteract this possibility and ensure that personal history and background will not influence decision making over professional knowledge and experience, the practitioner can seek help from supervisors or colleagues, review the research literature, consult case studies to determine past courses of successful action, or excuse herself from the case.

The Context of Practice. Applied behavior analysts work in schools, homes, community settings, job sites, and other natural environments. Rules within these environments cover a host of behaviors (e.g., attendance, use of sick time). Included in the rules are policy statements that are designed to help practitioners differentiate between legal issues and ethical issues. For example, some practices may be legal, but unethical. Breaking a professional confidence, accepting valued family "heirlooms" in lieu of payment for services, or engaging in consensual sexual relations with a client over age 18 serve as examples of behaviors that are legal but unethical. Other behaviors are both illegal and unethical. For instance, misrepresenting personal skills or promised services; stealing the belongings of a client while rendering services; abusing a client physically, emotionally, sexually, or socially; or engaging in consensual sexual interaction with persons under age 18 are all examples of behaviors that are both illegal and unethical (Greenspan & Negron, 1994). Behavior analysts who can discriminate between these legal and ethical distinctions are more likely to provide effective services, maintain a focused sensitivity toward clients, and not run afoul of the law or professional standards of conduct.

Ethical Codes of Behavior. All professional organizations have generated or adopted ethical codes of behavior. These codes provide guidelines for association members to consider when deciding a course of action or conducting their professional duties. Also, these guidelines provide the standard by which graduated sanctions can be imposed for deviating from the code (e.g., reprimand, censure, expulsion from the organization). The Association for Behavior Analysis has adopted the American Psychological Association's Code of Ethics. More discussion of this point is found later in the chapter.

What Is Worth Doing?

Questions related to what is worth doing directly address the goals and objectives of practice. What are we trying to accomplish? How are we trying to accomplish it? Clearly, social validity, cost–benefit ratio, and existing exigencies enter into decision making about what is worth doing.

Social Validity. Questions of social validity ask: Are the goals acceptable for the planned behavior change intervention? Are the procedures acceptable and are they aligned with best treatment practices (Peters & Heron, 1993)? Finally, do the results show meaningful, significant, and sustainable change (Wolf, 1978)? Most people would agree that teaching a child to read is a desirable goal. Using direct instruction, or another measurably effective instructional method, is procedurally sound, and outcomes that show improved reading are socially significant. In every sense, teaching a child or adult to read meets the ethical test for social validity. The new reading skill has a positive effect on the life of the individual. However, we cannot make the same claim for every skill in every situation. Does teaching street safety signs to a person who has mobility, sight, and hearing problems have true worth to the individual? How about teaching an Alzheimer's patient to recall the serial order of U.S. presidents? Why teach an adult with developmental disabilities to play with coloring books? Does a first-grade student with autism need to spend 20 minutes of one-on-one discrete trial instruction per day to learn to discriminate among pictures of women wearing fall and spring fashions? Does a child with quadriplegia really need to learn to use a standard pencil to write? In each of these cases, the present and future "worth" of the treatment with respect to goals, procedures, and outcomes is not clear. Given adequate programming and technological advances, teaching street safety signage to a person with mobility, sight, and hearing challenges, or the order of presidents to an Alzheimer's patient, or coloring skills to an adult with developmental disabilities, seasonal fashions to a child with autism, or pencil-gripping skills to a person with quadriplegia could be accomplished. However, from an ethical standpoint, the questions are: Should it be accomplished? And is it worth doing?

Cost–Benefit Ratio. Cost–benefit ratio decisions are contextual and involve a balance among planning, implementing, and evaluating a treatment or intervention (i.e., the cost side) and projecting future potential gain by the person (i.e., the benefit side). In other words, does the potential benefit to the individual justify the short- and long-term cost for providing the service? For instance, is it worth the time, cost, and emotional resources to transport a student with learning disabilities to an out-of-state private school with high-cost tuition when (a) comparable services can be provided free within the neighborhood public school system and (b) improved student learning and social behavior outcomes are uncertain at best? Is it worth the practitioner's efforts (i.e., financial cost plus time) to continue to maintain an 11th grade student with developmental disabilities in an academic, college-oriented program when switching to a functional curriculum program would likely enhance employability, self-sufficiency, and independence? Ethically, it is difficult to support a decision to spend school district dollars for outside private services that otherwise should be provided through public support. Likewise, it is difficult to defend a decision to subject a student to the continued reflexive establishing operations for negative reinforcement generated by an academic program, when a functional curriculum model would likely meet more appropriate short-term and long-term goals. To address the thorny issue of cost versus benefit, Sprague and Horner (1991) suggested that decisions should be made by committee, and that the perspectives of those with the highest stake in the outcome be given the greatest consideration. They also recommended that a hierarchy of opinions and inputs be sought to gain the widest possible viewpoint.

Existing Exigencies. Some behaviors challenge practitioners to find effective solutions quickly. A child who engages in self-injurious behavior, a child with feeding problems, or a student who engages in severe disruptive behaviors present such cases. Most practitioners would agree that these behaviors are worth changing so that the potential for harm to the person or others is reduced. Further, most practitioners would agree that to not act might lead to an even worse condition. So from an ethical standpoint, behaviors that are more serious warrant intervention consideration before behaviors that are less problematic. However, that such behaviors may necessitate a quick and expedited treatment is not an invitation to adopt a situational ethics perspective in which the promise of fast results in the short term is compromised by not fully considering long-term ramifications. In short, important questions relative to effectiveness, intrusiveness, possible deleterious side effects of potential treatments, and independence also must be considered, even if that means delaying an intervention temporarily (Sprague, 1994).

What Does It Mean to Be a Good Behavior Analyst?

To be a good behavior analyst requires more than following professional codes of conduct. Although adherence to the codes of the Association for Behavior Analysis and the Behavior Analyst Certification Board is necessary, it is not sufficient. Even keeping the client's welfare at the forefront of decision making is not sufficient. Following the Golden Rule (Do unto others as you would have them do unto you), a code valued by virtually every culture and religion in the world, is also not sufficient (Maxwell, 2003). A good practitioner is self-regulating. That is, the ethical practitioner seeks ways to calibrate decisions over time to ensure that values, contingencies, and rights and responsibilities are integrated and an informed combination of these is considered (Smith, 1993).

Why Is Ethics Important?

Behavior analytic practitioners abide by ethical principles to (a) produce "meaningful" behavior change of social significance for the persons entrusted to their care (Hawkins, 1984), (b) reduce or eliminate harm (e.g., poor treatments, self-injury), and (c) conform to the ethical standards of learned societies and professional organizations. Without an ethical compass, practitioners would be adrift when deciding whether a course of action is morally right or wrong, or whether they have slipped into the realm of situational ethics in which decisions to act (or not to act) are based on expediency, pressure, or misplaced priorities (Maxwell, 2003). For instance, if the teacher mentioned in our earlier scenario acquiesced to the administrator's pressure to not support applied behavior analysis services, when she might have taken a firmer stand if other teachers voiced their positions, situational ethics may be at work.

Further, ethical practices are important because they increase the likelihood that appropriate services will be rendered to individuals. As a result, the individual and the culture will improve. Over time, practices that produce such improvement survive and become codified into ethical rules of conduct. These codes are selected and change over time in response to new environmental issues, dilemmas, or questions.

Standards of Professional Practice for Applied Behavior Analysts

What Are Professional Standards?

Professional standards are written guidelines or rules of practice that provide direction for conducting the practices associated with an organization. Professional societies and certification or licensing boards develop, refine, and revise the standards that govern their profession to provide members with parameters for appropriate behavior in a dynamic and changing environment. In practice, organizations initially form task forces to develop standards that are reviewed and approved by their respective boards of directors and members. In addition to prescribing rules of conduct, most professional organizations exact sanctions on members who do not follow the rules. Major code violations may result in expulsion from the organization, or revocation of certification or licensure.

Five complementary and interrelated documents describe standards of professional conduct and ethical practice for applied behavior analysts.

• Ethical Principles of Psychologists and Code of Conduct (American Psychological Association, 2002)

• The Right to Effective Behavioral Treatment (Association for Behavior Analysis, 1989)

• The Right to Effective Education (Association for Behavior Analysis, 1990)

• Guidelines for Responsible Conduct for Behavior Analysts (Behavior Analyst Certification Board, 2001)

• The BCBA and BCABA Behavior Analyst Task List—Third Edition (Behavior Analyst Certification Board, 2005)


Ethical Principles of Psychologists and Code of Conduct—American Psychological Association

The American Psychological Association first published its code of ethics in 1953 as Ethical Standards of Psychologists (American Psychological Association, 1953). Reflecting the changing nature of the field, eight revisions of the code were published between 1959 and 2002. In 1988 the Association for Behavior Analysis first adopted the American Psychological Association's code of ethics (American Psychological Association, 2002) to guide professional practice. The five general principles and 10 areas of ethical standards on which the code is based are shown in Figure 1.

The Right to Effective Behavioral Treatment

The Association for Behavior Analysis has published two position papers describing clients' rights. In 1986 the association established a task force to examine the rights of people receiving behavioral treatment and how behavior analysts can ensure that clients are served appropriately. After two years of study, the task force outlined six basic client rights as the basis for directing the ethical and appropriate application of behavioral treatment (see Figure 2) (Van Houten et al., 1988).

Figure 1 Ethical principles of psychologists and code of conduct.

General Principles
Principle A: Beneficence and nonmaleficence
Principle B: Fidelity and responsibility
Principle C: Integrity
Principle D: Justice
Principle E: Respect for people's rights and dignity

Ethical Standards
1. Resolving ethical issues
2. Competence
3. Human relations
4. Privacy and confidentiality
5. Advertising and other public statements
6. Record keeping and fees
7. Education and training
8. Research and publication
9. Assessment
10. Therapy

Adapted from "Ethical Principles of Psychologists and Code of Conduct," by the American Psychological Association, 2002. Retrieved November 11, 2003, from www.apa.org/ethics/code2002.html. Copyright 2002 by the American Psychological Association. Adapted with permission from the author.

Figure 2 The right to effective behavioral treatment.

1. An individual has a right to a therapeutic environment.
2. An individual has a right to services whose overriding goal is personal welfare.
3. An individual has a right to treatment by a competent behavior analyst.
4. An individual has a right to programs that teach functional skills.
5. An individual has a right to behavioral assessment and ongoing evaluation.
6. An individual has a right to the most effective treatment procedures available.

Adapted from "The Right to Effective Behavioral Treatment" by the Association for Behavior Analysis, 1989. Retrieved November 11, 2006, from www.abainternational.org/ABA/statements/treatment.asp. Copyright 1989 by the Association for Behavior Analysis. Adapted with permission.

The Right to Effective Education

The Association for Behavior Analysis also adopted a position paper titled The Right to Effective Education (Association for Behavior Analysis, 1989). The full report of the task force (Barrett et al., 1991) was accepted by the ABA Executive Council. The abbreviated statement shown in Figure 3 was subsequently approved by majority vote of the general membership and now constitutes official ABA policy. This position statement essentially requires that assessment and educational interventions (a) rest on a foundation of solid research demonstrating effectiveness, (b) address functional relations between behavior and environmental events, and


Figure 3 The right to effective education.

1. The student's overall educational context should include:
   a. Social and physical school environments that encourage and maintain academic achievement and progress, and discourage behavior inconsistent with those goals;
   b. Schools that treat students with care and individual attention, comparable to that offered by a caring family;
   c. School programs that provide support and training for parents in parenting and teaching skills; and
   d. Consequences and attention at home that encourage and maintain success at school.
2. Curriculum and instructional objectives should:
   a. Be based on empirically validated hierarchies or sequences of instructional objectives and measurable performance criteria that are demonstrated to promote cumulative mastery and that are of long-term value in the culture;
   b. Specify mastery criteria that include both the accuracy and the speed dimensions of fluent performance;
   c. Include objectives that specify both long-term and short-term personal and vocational success, and that, once mastered, will be maintained by natural consequences in everyday living; and
   d. Include long-term retention and maintenance of skills and knowledge as explicitly measured instructional objectives.
3. Assessment and student placement should involve:
   a. Assessment and reporting methods that are sufficiently criterion-referenced to promote useful decision-making based on actual levels of skills and knowledge rather than on categorical labels such as "emotionally disturbed" or "learning disabled," and
   b. Placement based on correspondence between measured entering skills and skills required as prerequisites for a given level in a hierarchically sequenced curriculum.
4. Instructional methods should:
   a. Allow students to master instructional objectives at their own pace and to respond as rapidly and as frequently as they are able during at least some self-paced instructional session each day;
   b. Provide sufficient practice opportunities to enable students to master skills and knowledge at each step in the curriculum;
   c. Provide consequences designed to correct errors and/or to increase frequency of responding and that are adjusted to individual performance until they enable students to achieve desired outcomes;
   d. Be sensitive to and adjust in response to measures of individual learning and performance, including use of individualized instruction when group instruction fails to produce desired outcomes;
   e. Regularly employ the most advanced equipment to promote skill mastery via programs incorporating validated features described in this document; and
   f. Be delivered by teachers who receive performance-based training, administrative and supervisory support, and evaluation in the use of measurably effective, scientifically validated instructional procedures, programs, and materials.
5. Measurement and summative evaluation should entail:
   a. Decision-making via objective curriculum-based measures of performance, and
   b. Reports of objectively measured individual achievement and progress rather than subjective ratings, norm-referenced comparisons, or letter grading.
6. Responsibility for success should stipulate that:
   a. Financial and operational consequences for school personnel depend on objective measures of student achievement;
   b. Teachers, administrators, and the general educational program assume responsibility for student success, and change programs until students achieve their highest performance levels; and
   c. Students and parents should be allowed and encouraged to change schools or school programs until their educational needs are met.

Adapted from "Students' Right to Effective Education," by the Association for Behavior Analysis, 1990. Retrieved November 11, 2006, from www.abainternational.org/ABA/statements/treatment.asp. Copyright 1990 by the Association for Behavior Analysis. The full report appears in The Behavior Analyst, 1991, Volume 14(1), pages 79–82. Adapted with permission.


(c) are monitored and evaluated on a systematic and ongoing basis (Van Houten, 1994). Interventions are considered only when they are likely to be effective, based on empirical evidence and assessment results.

Guidelines for Responsible Conduct for Behavior Analysts

The Behavior Analyst Certification Board's Guidelines for Responsible Conduct for Behavior Analysts (2004) describes specific expectations for professional practice and ethical conduct under 10 major areas (see Figure 4). The BACB's guidelines are based on, and consistent with, a variety of other ethical guidelines, including The Belmont Report (produced by The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1979) and ethical codes developed and adopted by nine different professional organizations in behavior analysis and related fields (e.g., American Psychological Association, 2002; Florida Association for Behavior Analysis, 1988; National Association of School Psychologists, 2000; National Association of Social Workers, 1996).

BCBA and BCABA Behavior Analyst Task List

The BCBA and BCABA Behavior Analyst Task List describes the knowledge, skills, and attributes expected of a certified behavior analyst. The task list describes 111 tasks (some have multiple subtasks) across 10 content areas. Content Area 1, Ethical Considerations, consists of the 12 tasks listed at the beginning of this chapter.1 We will address these tasks in the remainder of this chapter.

Three of the tasks relate to professional competence (i.e., what behavior analysts need to do to become and remain professionally capable); the remaining nine tasks in Content Area 1 focus on ethics associated with the provision of client services.

Ensuring Professional Competence

Professional competence in applied behavior analysis is achieved through academic training that involves formal coursework, supervised practica, and mentored professional experience. Many excellent behavior analysts have been trained through university-based master's and doctoral-level programs housed in psychology, education, social work, and other human service departments.2

1The entire Task List is reprinted on the inside front and back covers of this book.

3Information on the certification requirements and process is available at the BACB Web site: www.BACB.com.

ABA accreditation and certification bodies have specified minimum curriculum and supervised experience requirements for behavior analysts. Both the Association for Behavior Analysis (1993, 1997) and the Behavior Analyst Certification Board have set minimum standards for what constitutes training in behavior analysis. The Association for Behavior Analysis accredits university training programs; the BACB certifies individual practitioners. Practitioners must not only meet the certification criteria, but also pass a certification examination. The BACB has conducted an extensive occupational analysis and developed the Behavior Analyst Task List (Behavior Analyst Certification Board, 2005), which specifies the minimum content that all behavior analysts should master (BACB, 2005; Shook, Johnston, & Mellichamp, 2004; cf. Martinez-Diaz, 2003).3

Obtaining Certification and Licensure

Potential consumers must be able to identify practicing behavior analysts who have demonstrated at least a minimum standard of training and competence (Moore & Shook, 2001; Shook & Favell, 1996; Shook & Neisworth, 2005). In the past, most behavior analysts in private practice were licensed in psychology, education, or clinical social work. The public had no way to determine whether the licensed professional had specific training in applied behavior analysis (Martinez-Diaz, 2003). In 1999 the BACB began credentialing behavior analysts in the United States and other countries with a certificate. The BACB certification program was based on Florida’s long-standing, innovative certification program (Shook, 1993; Starin, Hemingway, & Hartsfield, 1993).

Practicing Within One’s Areas of Competence

Behavior analysts should practice within their areas of professional training, experience, and competence. For example, professionals with extensive experience with adults with developmental disabilities should restrict their services to this field. They should not suddenly begin working with young children diagnosed with autism spectrum disorders. Likewise, analysts with extensive experience working with adolescents and young adults

Ethical Considerations for Applied Behavior Analysts

2For a list of colleges and universities with graduate programs in behavior analysis, see the ABA Directory of Graduate Training in Behavior Analysis.


Figure 4 Behavior Analyst Certification Board® guidelines for responsible conduct for behavior analysts.

1.0 Responsible Conduct of a Behavior Analyst
1.01 Reliance on Scientific Knowledge
1.02 Competence and Professional Development
1.03 Competence
1.04 Professional Development
1.05 Integrity
1.06 Professional and Scientific Relationships
1.07 Dual Relationships
1.08 Exploitative Relationships

2.0 The Behavior Analyst’s Responsibility to Clients
2.01 Definition of Client
2.02 Responsibility
2.03 Consultation
2.04 Third-Party Requests for Services
2.05 Rights and Prerogatives of Clients
2.06 Maintaining Confidentiality
2.07 Maintaining Records
2.08 Disclosures
2.09 Treatment Efficacy
2.10 Documenting Professional and Scientific Work
2.11 Records and Data
2.12 Fees and Financial Arrangements
2.13 Accuracy in Reports to Those Who Pay for Services
2.14 Referrals and Fees
2.15 Interrupting or Terminating Services

3.0 Assessing Behavior
3.01 Environmental Conditions that Preclude Implementation
3.02 Environmental Conditions that Hamper Implementation
3.03 Functional Assessment
3.04 Accepting Clients
3.05 Consent—Client Records
3.06 Describing Program Objectives
3.07 Behavioral Assessment Approval
3.08 Describing Conditions for Program Success
3.09 Explaining Assessment Results

4.0 The Behavior Analyst and the Individual Behavior Change Program
4.01 Approving Interventions
4.02 Reinforcement/Punishment
4.03 Avoiding Harmful Reinforcers
4.04 Ongoing Data Collection
4.05 Program Modifications
4.06 Program Modifications Consent
4.07 Least Restrictive Procedures
4.08 Termination Criteria
4.09 Terminating Clients

5.0 The Behavior Analyst as a Teacher and/or Supervisor
5.01 Designing Competent Training Programs
5.02 Limitations on Training
5.03 Providing Course Objectives
5.04 Describing Course Requirements
5.05 Describing Evaluation Requirements
5.06 Providing Feedback to Students/Supervisees
5.07 Providing Behavior Analysis Principles in Teaching
5.08 Requirements of Supervisees
5.09 Training and Supervision
5.10 Feedback to Supervisees
5.11 Reinforcing Supervisee Behavior


Figure 4 (continued)

6.0 The Behavior Analyst and the Workplace
6.01 Job Commitments
6.02 Assessing Employee Interactions
6.03 Preparing for Consultation
6.04 Employees’ Interventions
6.05 Employee Health and Well-Being
6.06 Conflicts with Organizations

7.0 The Behavior Analyst and Research
7.01 Scholarship and Research
7.02 Using Confidential Information for Didactic or Instructive Purposes
7.03 Conforming with Laws and Regulations
7.04 Informed Consent
7.05 Deception in Research
7.06 Informing of Future Use
7.07 Minimizing Interference
7.08 Commitments to Research Participants
7.09 Ensuring Participant Anonymity
7.10 Informing of Withdrawal
7.11 Debriefing
7.12 Answering Research Questions
7.13 Written Consent
7.14 Extra Credit
7.15 Acknowledging Contributions
7.16 Principal Authorship and Other Publication Credits
7.17 Paying Participants
7.18 Withholding Payment
7.19 Grant Reviews
7.20 Animal Research

8.0 The Behavior Analyst’s Ethical Responsibility to the Field of Behavior Analysis
8.01 Affirming Principles
8.02 Disseminating Behavior Analysis
8.03 Being Familiar with these Guidelines
8.04 Discouraging Misrepresentation by Non-Certified Individuals

9.0 The Behavior Analyst’s Responsibility to Colleagues
9.01 Ethical Violations by Colleagues
9.02 Accuracy of Data
9.03 Authorship and Findings
9.04 Publishing Data
9.05 Withholding Data

10.0 The Behavior Analyst’s Ethical Responsibility to Society
10.01 Promotion in Society
10.02 Scientific Inquiry
10.03 Public Statements
10.04 Statements by Others
10.05 Avoiding False or Deceptive Statements
10.06 Media Presentations
10.07 Testimonials
10.08 In-Person Solicitation

From the Behavior Analyst Certification Board’s “Guidelines for Responsible Conduct for Behavior Analysts,” 2004. Copyright © 2001–2004 by BACB®. All rights reserved. Used by permission.

should not begin working with preschool clients, who would be outside of their expertise. A person whose entire career has been set in preschool or home-based settings with children should not start offering services in the area of organizational behavior management.


Even within one’s competence area, practitioners who come upon a situation that exceeds their training or experience should make a referral to another behavior analyst or consultant. Where gaps in professional training exist, increased competence can be achieved by attending


workshops, seminars, classes, and other continuing education activities. When possible, analysts should work with mentors, supervisors, or colleagues who can provide enhanced training and professional development.

Maintaining and Expanding Professional Competence

Behavior analysts have an ethical responsibility to stay informed of advancements in the field. For example, conceptual advances and technical innovations in the 1990s in areas such as antecedent interventions, functional analysis, and motivating operations have had profound implications for clinical and educational practice. Behavior analysts can maintain and expand their professional competence by earning continuing education credits, attending and participating in professional conferences, reading professional literature, and presenting cases to peer review and oversight committees.

Continuing Education Units

Behavior analysts can expand their professional competence and keep abreast of new developments by attending training events offering continuing education unit (CEU) credits. The BACB requires a minimum number of CEUs per three-year renewal period to maintain certification. CEU credits are available by attending workshops at national and local conferences, such as those of the Association for Behavior Analysis and its local affiliates, or events sponsored by universities and other agencies approved as CEU providers by the BACB. Continuing education credits demonstrate that the behavior analyst has added relevant awareness, knowledge, and/or skills to his or her repertoire.

Attending and Presenting at Conferences

Attending and participating in local, state, or national conferences enhance every behavior analyst’s skills. The axiom, “You never learn something so well as when you have to teach it,” continues to have value. Hence, participating in a conference helps practitioners refine their skills.

Professional Reading

Self-study is a fundamental way to maintain currency in an ever-changing field. In addition to routinely reading the Journal of Applied Behavior Analysis (JABA) and The Behavior Analyst, all behavior analysts should study other behavioral publications specific to their areas of expertise and interest.

Oversight and Peer Review Opportunities

When behavior analysts approach a difficult problem, they apply skills and techniques within their repertoire to address the problem. For example, in a case of a child with a history of severe and frequent face-slapping, an intervention that involved the child wearing a helmet or protective device might be faded as differential reinforcement for appropriate object-holding becomes more effective. Fading the helmet presents no ethical problems as long as the technique remains conceptually systematic, is based in the basic principles of behavior, is tied to the published research literature in some way, and is effective in the present case. However, even the most technically proficient and professionally careful practitioner is subject to contingencies that can lead to treatment drift or error. This is where oversight and peer review come into play.

Many states have laws requiring that oversight be provided under specific circumstances, which are defined either by the type and severity of behaviors being addressed or by the restrictiveness of the procedure(s) being proposed. The presence or absence of laws in a particular jurisdiction should not determine our dedication to a process of peer review and oversight, however; these safeguards protect not only the consumers of behavior analytic services, but also behavior analysts themselves.

When findings are presented before a group of colleagues outside of the behavior analytic community, the procedures must be clearly shown to comply with clinical and professional standards. Moreover, such presentations set the occasion for behavior analysts to clarify behavioral outcomes, present graphic displays that are easy to interpret, and explain in clear and rational language why various educational or clinical choices were made.

Making and Substantiating Professional Claims

Sometimes overzealous practicing behavior analysts are so certain of the superiority of operant and respondent principles that they make preemptive claims that are not realistic. For example, asserting, “I am certain I can help your son” borders on an unethical claim. A more ethical and appropriate statement may be, “I have had success working with other children who have profiles similar to your son’s.” Behavior analysts who are well acquainted with the professional literature on treatment effectiveness for the target behaviors, the functions that behaviors serve (e.g., attention, escape), and specific client populations are less likely to make unsubstantiated and far-reaching claims.

A second aspect of this professional standard relates to presenting oneself as having certifications, licenses, educational experiences, or training that one does not have.


Figure 5 Example of an informed consent form.

Informed Consent Form
A.B.A. TECHNOLOGIES, INC.

CLIENT: ____________________________________________ DOB: _______________

STATEMENT OF AUTHORITY TO CONSENT: I certify that I have the authority to legally consent to assessment, release of information, and all legal issues involving the above-named client. Upon request, I will provide A.B.A. Technologies, Inc., with proper legal documentation to support this claim. I further hereby agree that if my status as legal guardian should change, I will immediately inform A.B.A. Technologies, Inc., of this change in status and will further immediately inform A.B.A. Technologies, Inc., of the name, address, and phone number of the person or persons who have assumed guardianship of the above-named client.

TREATMENT CONSENT: I consent for behavioral treatment to be provided for the above-named client by A.B.A. Technologies, Inc., and its staff. I understand that the procedures used will consist of manipulating antecedents and consequences to produce improvements in behavior. At the beginning of treatment, behavior may get worse in the environment where the treatment is provided (e.g., “extinction burst”) or in other settings (e.g., “behavioral contrast”). As part of the behavioral treatment, physical prompting and manual guidance may be used. The actual treatment protocols that will be used have been explained to me.

I understand that I may revoke this consent at any time. However, I cannot revoke consent for action that has already been taken. A copy of this consent shall be as valid as the original.

PARENT/GUARDIAN: _________________________________ DATE: ______________

WITNESS: ___________________________________________ DATE: _______________

________________________________________________________________________

From ABA Technologies, Inc., by José A. Martinez-Diaz, Ph.D., BCBA, 129 W. Hibiscus Blvd., Melbourne, FL. Used with permission.

Falsely claiming to have applied behavior analysis experiences or valid credentials is always unethical and may be illegal in many states.

Ethical Issues in Client Services

As stated earlier, although applied behavior analysis shares many ethical concerns with other disciplines, some ethical questions are specific to the choices posed when behavior analysis services are considered. For example, the decision to use an aversive procedure poses complex ethical questions that must be addressed prior to implementation (Herr, O’Sullivan, & Dinerstein, 1999; Iwata, 1988; Repp & Singh, 1990).

Although our discussion is limited to the most common areas of ethical concern related to the BACB standards, we strongly advise that students and practitioners read the BACB’s guidelines and task-list objectives to maintain their ethical perspective and skills across a wide range of standards.

Informed Consent

Informed consent means that the potential recipient of services or participant in a research study gives his or her explicit permission before any assessment or treatment is provided. Informed consent requires more than obtaining permission. Permission must come after full disclosure and information is provided to the participant. Figure 5 shows an example of an informed consent form in which such information is provided.

Three tests must be met before informed consent can be considered valid: (a) The person must demonstrate the capacity to decide, (b) the person’s decision must be voluntary, and (c) the person must have adequate knowledge of all salient aspects of the treatment.

Capacity to Decide

To be considered capable of making an informed decision, a participant must have (a) an adequate mental process or faculty by which he or she acquires knowledge, (b) the ability to select and express his or her


Figure 6 Factors surrogates must consider when making informed consentdecisions for persons who are incapacitated.

FOR A PERSON WHOSE WISHES MAY BE KNOWN OR INFERRED
1. The person’s current diagnosis and prognosis.
2. The person’s expressed preference regarding the treatment at issue.
3. The person’s relevant religious or personal beliefs.
4. The person’s behavior and attitude toward medical treatment.
5. The person’s attitude toward a similar treatment for another individual.
6. The person’s expressed concerns about the effects of his or her illness and treatment on family and friends.

FOR A PERSON WHOSE WISHES ARE UNKNOWN AND PROBABLY UNKNOWABLE
1. The effects of the treatment on the physical, emotional, and mental functions of the person.
2. The physical pain that the person would suffer from the treatment or from the withholding or withdrawal of the treatment.
3. The humiliation, loss of dignity, and dependency the person is suffering as a result of the condition or as a result of the treatment.
4. The effect the treatment would have on the life expectancy of the person.
5. The person’s potential for recovery, with the treatment and without the treatment.
6. The risks, side effects, and benefits of treatment.

Adapted from “Informed Consent for Health Care” by A. D. N. Hurley & J. L. O’Sullivan, 1999, in R. D. Dinerstein, S. S. Herr, & J. L. O’Sullivan (Eds.), A Guide to Consent, Washington, DC: American Association on Mental Retardation, pp. 50–51. Used by permission.

choices, and (c) the ability to engage in a rational process of decision making. Concepts such as “mental process or faculty” are mentalistic to behavior analysts, and there are no agreed-upon evaluative tools for testing preintervention capacity. Capacity is questioned only if the person “has impaired or limited ability to reason, remember, make choices, see the consequences of actions, and plan for the future” (O’Sullivan, 1999, p. 13). A person is considered mentally incapacitated if a disability affects his or her ability to understand the consequences of his or her actions (Turnbull & Turnbull, 1998).

According to Hurley and O’Sullivan (1999), “Capacity to give informed consent is a fluid concept and varies with each individual and proposed procedure” (p. 39). A person may have the capacity to consent to participate in a positive reinforcement program that poses little or no risk, but may not have the capacity to understand the complexities that might be involved in a more complex treatment such as overcorrection combined with response cost. Hence, practitioners should not assume that a client capable of informed consent at one level can provide informed consent if the complexity of the proposed treatment changes.

Capacity must be viewed within a legal context as well as a behavioral one. Courts have held that capacity requires that the person “rationally understand the nature of the procedure, its risk, and other relevant information” (Kaimowitz v. Michigan Department of Mental Health, 1973, as cited in Neef, Iwata, & Page, 1986, p. 237). The determination of capacity for persons with developmental disabilities poses specific challenges, and the behavior analyst would benefit from consulting a legal expert whenever a question of capacity arises for such persons.

When a person is deemed incapacitated, informed consent may be obtained either through a surrogate or a guardian.

Surrogate Consent. Surrogate consent is a legal process by which another individual—the surrogate—is authorized to make decisions for a person deemed incompetent, based on knowledge of what the incapacitated person would have wanted. Family members or close friends most often serve as surrogates.

In most states, a surrogate’s authority is limited. A surrogate may not authorize treatment when the client is actively refusing treatment (e.g., when an adult with developmental disabilities refuses to sit in a dentist’s chair, a surrogate cannot consent to sedation); or for controversial medical procedures such as sterilization, abortion, or certain treatments for mental disorders (e.g., electroconvulsive therapy or psychotropic medication) (Hurley & O’Sullivan, 1999). The surrogate is required to consider specific information when making decisions. For a person who is incapacitated, the landmark case of Superintendent of Belchertown State School v. Saikewicz (1977) crystallized factors that surrogates must consider when making informed consent decisions for persons who cannot make their own decisions (Hurley & O’Sullivan, 1999). Figure 6 lists necessary information


for the two primary areas of concern for surrogates: (a) decision making for a person who is incapacitated and whose wishes may be known or inferred, and (b) decision making for a person whose wishes are unknown and probably unknowable (e.g., a person with profound disabilities).

Guardian Consent. Guardian consent is obtained through a guardian, a person whom a court appoints as a legal custodian of an individual. Guardianship is a complex legal issue that varies from state to state. Hence, only two main points will be made. First, guardianship may be sought when treatment is deemed necessary but a surrogate is inappropriate because the client without capacity refuses treatment. The greater the degree of guardianship, the less legal control a person has over his or her own life. Because helping the person become more independent is a goal of behavior analysis, a decision to seek guardianship at any level should be considered only as a final option to be exercised in the most limited way that still resolves the issue.

Second, guardianship may be limited in any way that the court deems appropriate. In most states, a full guardian is responsible for essentially every important decision in a person’s life. The court, in protecting the rights of the individual, may determine that limited or temporary guardianship is more appropriate. The guardianship may apply only to financial or medical concerns, and may remain in effect only so long as a very specific concern is at issue (such as a need for surgery). In all guardianship cases, the court is the ultimate decision-making body and may take any action, including revoking guardianship or determining who should serve as the guardian (O’Sullivan, 1999).

Voluntary Decision

Consent is considered voluntary when it is given in the absence of coercion, duress, or any undue influence, and when it is issued with the understanding that it can be withdrawn at any time. As Yell (1998) stated, “Revocation of consent has the same effects as an initial refusal to consent” (p. 274).

Family members, doctors, support staff, or others may exert strong influence over a person’s willingness to grant or deny consent (Hayes, Adams, & Rydeen, 1994). For example, individuals with developmental disabilities may be asked to make major decisions during interdisciplinary team meetings. The way questions are posed to the prospective client may suggest that consent is not entirely voluntary (e.g., “You want us to help you with this intervention, right?”).


To ensure that a person’s consent is voluntary, a practitioner may want to discuss topics related to assessment and treatment privately and with an independent advocate present. Further, having discussions without severe time constraints reduces the pressure on the person to respond. Finally, anyone being asked to give consent should be given time to think, discuss, and review all options with a trusted confidant to ensure that consent is voluntary.

Knowledge of the Treatment

A person considering services or research participation must be provided information in clear, nontechnical language regarding (a) all important aspects of the planned treatment, (b) all potential risks and benefits of the planned procedure, (c) all potential alternative treatments, and (d) the right to refuse continued treatment at any time. The recipient must be able to answer questions correctly about the information he or she has been given, and should be able to describe the procedures in his or her own words. If, for example, a time-out procedure is part of the intervention package, the client should be able to describe all aspects of how time-out will operate and be able to say more than, “I’ll get in trouble if I hit another person.” Figure 7 outlines additional information that the potential client should have to ensure voluntary, informed consent (Cipani, Robinson, & Toro, 2003).

Treatment Without Consent

Most states have policies authorizing a course of action when a potential procedure is needed and informed consent by the individual is not possible. Typically, consent may be granted in the case of a life-threatening emergency, or when there is an imminent risk of serious harm. In educational instances in which the school district determines that services are needed or desired (e.g., special education programming), but the parents refuse, the district has progressive recourse through administrative review, mediation, and ultimately the court system (Turnbull & Turnbull, 1998). State statutes differ in instances in which consent is denied; thus, practitioners should consult current local or state statutes. Further, practitioners should recognize that rules and regulations associated with state laws may be subject to change and therefore require regular review.

Confidentiality

Professional relationships require confidentiality, meaning that any information regarding an individual receiving or having received services may not be discussed with, or otherwise made available to, any third party, unless that individual has provided explicit authorization


Figure 7 Information for clients to ensure informed consent.

Documentation Required:
All forms indicating an understanding of information and agreement to various policies listed below must be signed and dated.

Issues Presented/Discussed:
1. Confidentiality and its limits—how information will be used, with whom it will be shared
2. Provider qualifications
3. Risks/benefits of treatment
4. Nature of procedures and alternatives
5. Logistics of service
• Monetary issues—Fee structure, billing, methods of payment, insurance issues, other fees
• Communication—ways to contact between appointments by phone, pager, etc.
• Cancellations and missed appointments
• Termination of services
6. Responsibilities of client and behavior analyst

Adapted from “Ethical and Risk Management Issues in the Practice of ABA” by E. Cipani, S. Robinson, and H. Toro (2003). Paper presented at the annual conference of the Florida Association for Behavior Analysis, St. Petersburg. Adapted with permission of the authors.

for release of that information. Although confidentiality is a professional ethical standard for behavior analysts, it is also a legal requirement in some states (Koocher & Keith-Spiegel, 1998). Figure 8 shows a standard release of information (ROI) form. Note that the ROI form clearly specifies what can be shared and released, and the expiration date for such releases.

Limits to Confidentiality

When service to a client begins, the limits on confidentiality must be fully explained to the client. For instance, confidentiality does not extend to abusive situations or to circumstances in which impending harm to the individual or others is known. All professionals must report suspected abuse of children in all states, and suspected abuse of elders in most states. Figure 9 shows a form typical for informing the client of abuse-reporting requirements.

As stated previously, in circumstances in which the practicing behavior analyst becomes aware of the possibility of imminent, severe harm to the individual or another person, confidentiality no longer applies. In such cases, it is ethical to inform supervisors, administrators, or other caregivers of the pending injury so that appropriate preventative action can be taken.

Breaches of Confidentiality

Breaches of confidentiality generally occur for two main reasons: (a) The breach is intentional to protect someone from harm, or (b) the breach is unintentional and is the result of carelessness, neglect, or a misunderstanding of the nature of confidentiality. An intentional

breach of confidentiality is warranted when credible information becomes available that imminent harm or danger is possible. For instance, if a reliable student approached a teacher after learning that another student brought a gun to school, confidentiality can be breached to protect potential victims. Such a breach is intended to serve the greater good (i.e., protect others from imminent harm). Unintentional breaches can occur when a teacher unwittingly provides confidential information to a parent about a child’s performance, but fails to confirm that the parent who is requesting the information is the de facto legal guardian for the child. To avoid this second type of breach, practitioners must remain vigilant about information sharing throughout all phases of service delivery.

A good rule to follow with respect to confidentiality is: If uncertain whether confidentiality applies in a given situation, assume that it does. Some actions to protect confidentiality include locking file cabinets, requiring passwords to access computer files, avoiding the transmission of nonencrypted information across wireless systems, and confirming an individual’s status as a surrogate or legal guardian before providing information concerning the client.

Protecting the Client’s Dignity,Health, and Safety

Dignity, health, and safety issues often center on the contingencies and physical structures present in the environments in which people live and work. The behavior analyst should be acutely aware of these issues. Favell and McGimsey (1993) provided a list of acceptable characteristics of


Figure 9 Example of abuse-reporting protocol form.

A.B.A. Technologies, Inc.
Confidentiality Act/Abuse-Reporting Protocol

Client: ___________________________

I understand that all information related to the above-named client’s assessment and treatment must be handled with strict confidentiality. No information related to the client, either verbal or written, will be released to other agencies or individuals without the express written consent of the client’s legal guardian. By law, the rules of confidentiality do not hold under the following conditions:

1. If abuse or neglect of a minor, disabled, or elderly person is reported or suspected, the professional involved is required to report it to the Department of Children and Families for investigation.

2. If, during the course of services, the professional involved receives information that someone’s life is in danger, that professional has a duty to warn the potential victim.

3. If our records, our subcontractor records, or staff testimony are subpoenaed by court order, we are required to produce requested information or appear in court to answer questions regarding the client.

This consent expires 1 year after signature date below.

Parent/Legal Guardian Date

From ABA Technologies, Inc., by José A. Martinez-Diaz, Ph.D., BCBA, 129 W. Hibiscus Blvd., Melbourne, FL. Used with permission.

Figure 8 Release of information form.

A.B.A. TECHNOLOGIES, INC.
RELEASE OF INFORMATION AND ASSESSMENT CONSENT FORM

CLIENT: _______________________________________________ DOB: ______________

PARENT/GUARDIAN NAME: __________________________________________________

I consent for the above-named client to participate in an assessment through A.B.A. Technologies, Inc. I consent to have the assessment with the above-named individual conducted at the following locations (circle relevant ones):

Home School Other: ___________

I understand and consent to have the individuals responsible for care in the above-named locations involved in the assessment of the above-named client. In order to coordinate the assessment with these individuals, I authorize the release of the following confidential records to the individuals responsible for care in the above-named locations:

Evaluations/Assessments: ____________________________________________________

___________________________________________________________________________

IEP or Other Records: ________________________________________________________

Other: _____________________________________________________________________

I understand that these records may contain psychiatric and/or drug and alcohol information. I understand that these records may also contain references to blood-borne pathogens (e.g., HIV, AIDS). I understand that I may revoke this consent at any time; however, I cannot revoke consent for action that has already been taken. A copy of this release shall be as valid as the original. THIS CONSENT AUTOMATICALLY EXPIRES 30 DAYS AFTER TERMINATION OF SERVICES.

PARENT/GUARDIAN: _______________________________ DATE: _____________

From ABA Technologies, Inc., by José A. Martinez-Diaz, Ph.D., BCBA, 129 W. Hibiscus Blvd., Melbourne, FL. Used with permission.


Ethical Considerations for Applied Behavior Analysts

Figure 10 Defining an acceptable treatment environment.

1. The environment is engaging: Reinforcement is readily available; problem behaviors are reduced; reinforcement contingencies are effective; exploratory play and practice flourish; the environment is by definition humane.

2. Functional skills are taught and maintained: Environments are judged not by paperwork or records, but by observed evidence of training and progress; incidental teaching allows the natural environment to support skill acquisition and maintenance.

3. Behavior problems are ameliorated: A purely functional approach regarding both behaviors and procedures leads to effective interventions; arbitrary labels based on topographies are replaced by individualized definitions based on function.

4. The environment is the least restrictive alternative: Again, this is functionally defined, based on freedom of movement and activity engagement parameters; community settings may actually be more restrictive than institutional ones, depending on their behavioral effects.

5. The environment is stable: Changes in schedules, programs, peers, caregivers, etc., are minimized; skills are maintained; consistency and predictability prevail.

6. The environment is safe: Physical safety is paramount; supervision and monitoring are adequate; peer review ensures that proper program procedures are based on function.

7. The client chooses to live there: Efforts are made to determine client choice; alternative environments may be sampled.

Adapted from "Defining an Acceptable Treatment Environment" by J. E. Favell and J. E. McGimsey, 1993. In R. Van Houten & S. Axelrod (Eds.), Behavior Analysis and Treatment (pp. 25–45). New York: Plenum Press. Copyright 1993 by Plenum Press. Used with permission.

treatment environments to ensure dignity, health, and safety (see Figure 10).

Dignity can be examined by addressing the following questions: Do I honor the person's choices? Do I provide adequate space for privacy? Do I look beyond the person's disability and treat the person with respect? Behavior analysts can help to ensure dignity for their clients by defining their own roles. By using operant principles of behavior, they teach skills that will enable learners to establish increasingly effective control over the contingencies in their own natural environments. Everyone has the right to say yes, no, or sometimes nothing at all (cf. Bannerman, Sheldon, Sherman, & Harchik, 1990).

Choice is a central principle in the delivery of ethical behavioral services. In behavioral terms, the act of making a choice requires that both behavioral and stimulus alternatives be possible and available (Hayes et al., 1994). Practitioners must provide the client with behavioral alternatives, and the person must be capable of performing the actions required by the alternatives. To leave a room, a person must be physically capable of opening the door, and the door must not be blocked or locked. The term stimulus alternatives refers to the simultaneous presence of more than one stimulus item from which to choose (e.g., choosing to eat an apple instead of an orange or pear). The point is that, to have a fair choice, a client must have alternatives, must be able to perform each alternative, and must be able to experience the natural consequences of the chosen alternative.

Helping the Client Select Outcomes and Behavior Change Targets

The term outcomes refers to the lifestyle changes a client has identified as the ultimate goals of behavior analysis services. In part, it refers to a quality of life issue for the individual (Felce & Emerson, 2000; Risley, 1996). Obtaining a job, establishing a loving relationship with a significant other, pursuing personal goals, engaging in community-based activities, and living independently are examples of outcomes. Ultimately, behaviors selected for change must benefit the individual, not the practitioner or caregiver. Peterson, Neef, Van Norman, and Ferreri (2005) expressed this point succinctly:

One reason often cited for providing opportunities for choices is that it is consistent with social values such as self-determination and empowerment. . . . We also maintain that we can best "empower" students by arranging conditions to promote choices that benefit the student, and by helping students to recognize the factors that can influence their choices. (p. 126)

In the early days of the field, behavior analysts were criticized rightfully for targeting behaviors that mostly benefited staff (e.g., Winett & Winkler, 1972). For example, achieving a sheltered workshop of docile, compliant adults is not an appropriate goal if functional work-related or social skills are not addressed adequately. Compliance, although commonly identified in behavior programs, is not an appropriate goal by itself. Teaching compliant behavior becomes ethically justifiable when it facilitates the development of other functional or social skills that will, in turn, help the person achieve higher levels of independence.

To help people select appropriate outcomes and target behaviors within an ethical framework, practitioners need to have a thorough understanding of variables that affect reinforcer assessment, the function of stimuli identified as reinforcers in an assessment, and how these variables interact when a choice is made. As Peterson and colleagues (2005) stated:

It may be that the act of choosing is effective only to the extent that it permits access to preferred reinforcers. We posit that assessment of the variables that affect preference contributes to efforts to promote choices that are beneficial. (p. 132)

Maintaining Records

In addition to data on behavior change, behavior analysts must keep records of their interactions with clients. Assessment data, descriptions of interventions, and progress notes are part of a client record that must be considered confidential. The following points on maintaining records comply with the APA Ethical Principles of Psychologists and Code of Conduct (2002):

• A release must be obtained from the client or his or her guardian (unless ordered otherwise by a judge) before records may be shared with anyone.

• Records must be kept in a secure area.

• Well-maintained records facilitate the provision of future services, meet agency or institutional requirements, ensure accurate billing, allow for future research, and may comply with legal requirements.

• Records disposal should be complete (shredding is probably best).

• Electronic transmission of confidential records across any unsecured medium (e.g., fax lines in public areas, e-mail) is prohibited under the Health Insurance Portability and Accountability Act (HIPAA) regulations (1996).

Advocating for the Client

Providing Necessary and Needed Services

Prior to initiating services, a behavior analyst has the responsibility to validate that a referral warrants further action. This poses the first ethical challenge to the practitioner: deciding whether to accept or reject the case. The decision to provide treatment may be divided into two sequential decisions: (1) Is the presenting problem amenable to behavioral intervention? and (2) is the proposed intervention likely to be successful?

Is the Problem Amenable to Behavioral Treatment?

To determine whether behavioral intervention is necessary and appropriate, the behavior analyst should seek answers to the following questions:

1. Has the problem emerged suddenly?

a. Might the problem have a medical cause?

b. Has a medical evaluation been done?

2. Is the problem with the client or with someone else? (For example, a student who has done well through fourth grade suddenly exhibits "behavior problems" in fifth grade, although she remains well behaved at home. Perhaps a simple change of teachers will solve the problem.)

3. Have other interventions been tried?

4. Does the problem actually exist? (For example, a parent has serious concerns about a 3-year-old child who "refuses to eat" everything provided at every meal.)

5. Can the problem be solved simply or informally? (For example, the same 3-year-old "elopes" through a back door left wide open all day.)

6. Might the problem be better addressed by another discipline? (For example, a child with cerebral palsy may need adaptive equipment instead of behavioral treatment.)

7. Is the behavioral problem considered an emergency?

Is the Proposed Intervention Likely to Be Successful?

Questions to ask when considering whether the intervention is likely to be successful include the following:

1. Is the client willing to participate?

2. Are the caregivers surrounding the client willing or able to participate?

3. Has the behavior been successfully treated in the research literature?

4. Is public support likely?

5. Does the behavior analyst have the appropriate experience to deal with the problem?


6. Will those most likely to be involved in implementing the program have adequate control of the critical environmental contingencies?

If the answer to all of these questions is yes, then the behavior analyst can act. If the answer to any of these questions is no, the behavior analyst should seriously consider declining to initiate treatment.

Embracing the Scientific Method

With respect to how the scientific method relates to ethics, practitioners use measurably effective, research-based methods and test emerging practices to assess their effectiveness before implementing them. As space engineer James Oberg once stated, "In science keeping an open mind is a virtue, but not so open that your brains fall out" (cited in Sagan, 1996, p. 187). Embracing the scientific method is important as claims of the effectiveness of treatment regimens and intervention methods become more widespread and accepted by the public without critical examination (Shermer, 1997). In Sagan's (1996) view, extraordinary claims require extraordinary evidence.

Heward and Silvestri (2005) elaborated further on the essential ingredients of extraordinary evidence:

What constitutes extraordinary evidence? In the strictest sense, and the sense that should be employed when evaluating claims of educational effectiveness, evidence is the outcome of the application of the scientific method to test the effectiveness of a claim, a theory, or a practice. Evidence becomes extraordinary when it is extraordinarily well tested. . . . In addition, extraordinary evidence requires replication. A single study, anecdote, or theoretical article, no matter how impressive the findings or how complicated the writing, is not a basis for practice. (p. 209)

Applied behavior analysts should base their practices on two primary sources: the scientific literature and direct and frequent measurements of behaviors. Literature-based data inform initial decisions about intervention. However, practicing ethically also means that consultation with other professionals who have tackled similar problems is often necessary.4

Peer-reviewed scientific reports published in reputable outlets are a strong source of objective information concerning effective intervention strategies. When confronted with data that differ from well-established, peer-reviewed literature, practitioners are obliged to investigate the situation further. By examining critical variables or combinations of interactions, the practitioner would be likely to determine differences that would benefit the individual.

4Remember that in applied behavior analysis parlance, similarity means likeness in function (i.e., controlling variables) as well as form (i.e., topography).

There are almost as many ideas about instructional strategies and behavioral treatments as there are teachers and people treating problem behaviors. Unfortunately, many ideas have not been empirically validated. Take, for example, the popular notion in education that repeated practice of a skill is harmful to the student because it destroys his or her motivation to learn and ultimately results in unhappiness with school (e.g., Heron, Tincani, Peterson, & Miller, 2005; Kohn, 1997). Despite the popular appeal of this philosophy, it is simply unsubstantiated. Decades of carefully controlled and replicated research have demonstrated that repeated practice is a necessary ingredient to skill mastery (Binder, 1996; Ericsson & Charness, 1994).

The area of autism treatment offers another example of untested claims permeating the mainstream culture. In the 1990s a technique called facilitated communication was hailed as an amazing breakthrough for people with autism. Reports surfaced claiming that children who previously had no communication skills were now able to communicate, sometimes in very sophisticated language, with the aid of a facilitator. The facilitator guided the child's hand over a keyboard as he or she typed messages. The results of numerous experiments showed that it was the facilitators who typed the messages, not the children (e.g., Green & Shane, 1994; Szempruch & Jacobson, 1993; Wheeler, Jacobson, Paglieri, & Schwartz, 1993).

Carefully controlled scientific studies debunked what had become a popular perception among desperate parents and teachers. For behavior analysts, the cornerstone of ethical practice is providing effective services based on solid, replicated research evidence.

Evidence-Based Best Practice and Least Restrictive Alternatives

An essential component of ethically driven behavior analysis is that interventions and related practices be evidence based, and that the most powerful, but least intrusive, methods be used first. Further, intervention plans must be designed, implemented, and evaluated systematically. If the individual fails to show progress, data systems should be reviewed and, if necessary, interventions modified. If progress is made, the intervention should be phased out and assessed for generalization and maintenance. In all phases of an intervention, data and direct observations should drive treatment decisions.


Conflict of Interest

A conflict of interest occurs when a principal party, alone or in connection with family, friends, or associates, has a vested interest in the outcome of the interaction. The most common form of conflict arises in the form of dual-role relationships. Conflicts arise when a person acting as a therapist enters into another type of relationship with the client, a family member, or a close associate of the client, or promises to enter into such a relationship in the future. These relationships may be financial, personal, professional (providing another service, for example), or otherwise beneficial to the therapist in some way.

Direct and frequent observations during assessment and intervention phases bring the behavior analyst into close contact with the client (and often family members, other professionals, and caregivers) in various natural settings. Personal relationships can be formed that cross professional boundaries. Family members may offer unsolicited gifts or make invitations to parties or other events. In every interaction, the behavior analyst must monitor her own behavior with vigilance and guard against crossing any personal or professional boundaries. This is especially true when treatment is provided within a private home. In service contexts, personal relationships of any kind can quickly develop ethical tangles, and are to be avoided.

Other professional conflicts of interest might arise as well. A teacher, for example, should not simultaneously be a student's employer outside of school because performance in one area can affect behavior in the other. A supervisee should be off limits as a potential romantic partner. A member of a peer review committee should not participate in reviews of his own work or the work of his supervisees. The general rule to follow with respect to conflict of interest is to avoid it. A practitioner in doubt should consult with a supervisor or trusted and experienced confidant.

Conclusion

Practicing ethically can be challenging and uncertain because anticipating the full ramifications of a decision or a given course of action is not always possible. Furthermore, it is hard work because practicing ethically requires vigilance, self-monitoring, and the application of dynamic guidelines and principles with virtually every case.

Returning to our scenarios at the beginning of the chapter, how might the director, the support coordinator, and the teacher proceed? Prescriptive, pat solutions are not possible. Yet these practitioners need not be at a loss as to what to do. In our view, ethical challenges, regardless of their sources, are best addressed by revisiting the three questions posed early in the chapter: What is the right thing to do? What is worth doing? What does it mean to be a good practitioner? Behavior analysts who use these three questions as the focal points for their decision making will have a foundation for responding to such dilemmas. Further, addressing these questions honestly, openly, and without prejudice will help practitioners avoid many potential ethical pitfalls and keep the client, student, or care recipient at the center of their efforts.

Finally, by practicing ethically, and by maintaining adherence to the scientific method and the principles, procedures, and dimensions of applied behavior analysis, practitioners will have a ready source of valid, accurate, reliable, and believable data to inform decision making. As a result, they will be more likely to achieve the original and continuing promise of applied behavior analysis: the application of the experimental analysis of behavior to problems of social importance.

Summary

What Is Ethics and Why Is It Important?

1. Ethics describes behaviors, practices, and decisions that address three basic and fundamental questions: What is the right thing to do? What is worth doing? What does it mean to be a good behavior analyst?

2. Ethics is important because it helps practitioners decide whether a course of action is morally right or wrong, and it guides decisions to act irrespective of the demands of expediency, pressure, or priorities.

3. Ethical practices are derived from other behavior analysts and professionals to ensure the well-being of (a) clients, (b) the profession itself, and (c) the culture as a whole. Over time, practices become codified into ethical rules of conduct that may change in response to a changing world.

4. Personal histories, including cultural and religious experiences, influence how practitioners decide a course of action in any given situation.

5. Practicing behavior analysts must be aware of the particular settings in which they work and the specific rules and ethical standards applicable to those settings. Behavior might be illegal but ethical, legal but unethical, or illegal and unethical.



6. Professional organizations adopt formalized statements of conduct, codes of professional behavior, or ethical standards of behavior to guide their members in making decisions. Also, standards exact sanctions when deviations from the code occur.

7. The client’s welfare is at the forefront of ethical decision making.

8. Behavior analysts must take steps to ensure that their professional conduct is self-regulating.

9. Professional standards, guidelines, or rules of practice are written statements that provide direction for conducting the practices associated with an organization.

10. Behavior analysts are guided by five documents with respect to ethical behavior: Ethical Principles of Psychologists and Code of Conduct (APA, 2002), The Right to Effective Behavioral Treatment (ABA, 1989), The Right to Effective Education (ABA, 1990), Guidelines for Responsible Conduct for Behavior Analysts (BACB, 2001), and the Behavior Analyst Task List (BACB, 2005).

Ensuring Professional Competence

11. Professional competence in applied behavior analysis is achieved through formal academic training that involves coursework, supervised practica, and mentored professional experience.

12. Behavior analysts must be truthful and accurate in all professional and personal interactions.

Ethical Issues in Client Services

13. Three tests must be met before informed consent can be considered valid: capacity to decide, voluntary decision, and knowledge of the treatment.

14. Confidentiality refers to the professional standard requiring that the behavior analyst not discuss or otherwise release information regarding someone in his or her care. Information may be released only with formal permission of the individual or the individual’s guardian.

15. Practicing behavior analysts have a responsibility to protect a client’s dignity, health, and safety. Various rights must be observed and protected, including the right to make choices, the right to privacy, the right to a therapeutic treatment environment, and the right to refuse treatment.

16. The individual receiving services must be given the opportunity to assist in selecting and approving treatment goals. Outcomes must be selected that are primarily aimed at benefiting the individual receiving services.

17. Records of service must be maintained, preserved, andconsidered confidential.

18. In deciding to provide services, the behavior analyst must determine that services are needed, that medical causes have been ruled out, that the treatment environment will support service delivery, and that a reasonable expectation of success exists.

19. All sources of conflict of interest, and in particular dual relationships, are to be avoided.

Advocating for the Client

20. The choice to provide treatment may be divided into two sets of decision rules: (1) determining that the problem is amenable to behavioral intervention, and (2) evaluating the likely success of an intervention.

21. A conflict of interest occurs when a principal party, alone or in connection with family, friends, or associates, has a vested interest in the outcome of the interaction.


Bibliography

Achenbach, T. M., & Edelbrock, C. S. (1991). Manual for the Child Behavior Checklist. Burlington: University of Vermont, Department of Psychiatry.

Adams, C., & Kelley, M. (1992). Managing sibling aggression: Overcorrection as an alternative to timeout. Behavior Modification, 23, 707–717.

Adelinis, J. D., Piazza, C. C., & Goh, H. L. (2001). Treatment of multiply controlled destructive behavior with food reinforcement. Journal of Applied Behavior Analysis, 34, 97–100.

Adkins, V. K., & Mathews, R. M. (1997). Prompted voiding to reduce incontinence in community-dwelling older adults. Journal of Applied Behavior Analysis, 30, 153–156.

Adronis, P. T. (1983). Symbolic aggression by pigeons: Contingency coadduction. Unpublished doctoral dissertation, University of Chicago, Department of Psychiatry and Behavior Analysis, Chicago.

Agran, M. (Ed.). (1997). Self-directed learning: Teaching self-determination skills. Pacific Grove, CA: Brooks/Cole.

Ahearn, W. H. (2003). Using simultaneous presentation to increase vegetable consumption in a mildly selective child with autism. Journal of Applied Behavior Analysis, 36, 361–365.

Ahearn, W. H., Clark, K. M., Gardenier, N. C., Chung, B. I., & Dube, W. V. (2003). Persistence of stereotypic behavior: Examining the effects of external reinforcers. Journal of Applied Behavior Analysis, 36, 439–448.

Ahearn, W. H., Kerwin, M. E., Eicher, P. S., Shantz, J., & Swearingin, W. (1996). An alternating treatments comparison of two intensive interventions for food refusal. Journal of Applied Behavior Analysis, 29, 321–332.

Alber, S. R., & Heward, W. L. (1996). “GOTCHA!” Twenty-five behavior traps guaranteed to extend your students’ academic and social skills. Intervention in School and Clinic, 31(5), 285–289.

Alber, S. R., & Heward, W. L. (2000). Teaching students to recruit positive attention: A review and recommendations. Journal of Behavioral Education, 10, 177–204.

Alber, S. R., Heward, W. L., & Hippler, B. J. (1999). Training middle school students with learning disabilities to recruit positive teacher attention. Exceptional Children, 65, 253–270.

Alber, S. R., Nelson, J. S., & Brennan, K. B. (2002). A comparative analysis of two homework study methods on elementary and secondary students’ acquisition and maintenance of social studies content. Education and Treatment of Children, 25, 172–196.

Alberto, P. A., & Troutman, A. C. (2006). Applied behavior analysis for teachers (7th ed.). Upper Saddle River, NJ: Merrill/Prentice Hall.

Alberto, P. A., Heflin, L. J., & Andrews, D. (2002). Use of the timeout ribbon procedure during community-based instruction. Behavior Modification, 26(2), 297–311.

Albin, R. W., & Horner, R. H. (1988). Generalization with precision. In R. H. Horner, G. Dunlap, & R. L. Koegel (Eds.), Generalization and maintenance: Life-style changes in applied settings (pp. 99–120). Baltimore: Brookes.

Alessi, G. (1992). Models of proximate and ultimate causation in psychology. American Psychologist, 48, 1359–1370.

Alexander, D. F. (1985). The effect of study skill training on learning disabled students’ retelling of expository material. Journal of Applied Behavior Analysis, 18, 263–267.

Allen, K. E., Hart, B. M., Buell, J. S., Harris, F. R., & Wolf, M. M. (1964). Effects of social reinforcement on isolate behavior of a nursery school child. Child Development, 35, 511–518.

Allen, K. D., & Evans, J. H. (2001). Exposure-based treatment to control excessive blood glucose monitoring. Journal of Applied Behavior Analysis, 34, 497–500.

Allen, L. D., Gottselig, M., & Boylan, S. (1982). A practical mechanism for using free time as a reinforcer in the classroom. Education and Treatment of Children, 5(4), 347–353.

Allison, J. (1993). Response deprivation, reinforcement, and economics. Journal of the Experimental Analysis of Behavior, 60, 129–140.

Altschuld, J. W., & Witkin, B. R. (2000). From needs assessment to action: Transforming needs into solution strategies. Thousand Oaks, CA: Sage.

Altus, D. E., Welsh, T. M., & Miller, L. K. (1991). A technology for program maintenance: Programming key researcher behaviors in a student housing cooperative. Journal of Applied Behavior Analysis, 24, 667–675.

American Psychological Association. (1953). Ethical standards of psychologists. Washington, DC: Author.

American Psychological Association. (2001). Publication Manual of the American Psychological Association (5th ed.). Washington, DC: Author.

American Psychological Association. (2002). Ethical principles of psychologists and code of conduct. Washington, DC: Author. Retrieved November 11, 2003, from www.apa.org/ethics/code2002.html.


From Applied Behavior Analysis, Second Edition. John O. Cooper, Timothy E. Heron, William L. Heward. Copyright © 2007 by Pearson Education, Inc. All rights reserved.


American Psychological Association. (2004). Ethical principles of psychologists and code of conduct. Retrieved October 21, 2004, from www.apa.org/ethics.

Andersen, B. L., & Redd, W. H. (1980). Programming generalization through stimulus fading with children participating in a remedial reading program. Education and Treatment of Children, 3, 297–314.

Anderson, C. M., & Long, E. S. (2002). Use of a structured descriptive assessment methodology to identify variables affecting problem behavior. Journal of Applied Behavior Analysis, 35, 137–154.

Andresen, J. T. (1991). Skinner and Chomsky 30 years later OR: The return of the repressed. The Behavior Analyst, 14, 49–60.

Ardoin, S. P., Martens, B. K., & Wolfe, L. A. (1999). Using high-probability instruction sequences with fading to increase student compliance during transitions. Journal of Applied Behavior Analysis, 32, 339–351.

Armendariz, F., & Umbreit, J. (1999). Using active responding to reduce disruptive behavior in a general education classroom. Journal of Positive Behavior Interventions, 1, 152–158.

Arndorfer, R. E., Miltenberger, R. G., Woster, S. H., Rortvedt, A. K., & Gaffaney, T. (1994). Home-based descriptive and experimental analysis of problem behaviors in children. Topics in Early Childhood Special Education, 14, 64–87.

Arndorfer, R., & Miltenberger, R. (1993). Functional assessment and treatment of challenging behavior: A review with implications for early childhood. Topics in Early Childhood Special Education, 13, 82–105.

Arnesen, E. M. (2000). Reinforcement of object manipulation increases discovery. Unpublished bachelor’s thesis, Reed College, Portland, OR.

Arntzen, E., Halstadtrø, A., & Halstadtrø, M. (2003). Training play behavior in a 5-year-old boy with developmental disabilities. Journal of Applied Behavior Analysis, 36, 367–370.

Ashbaugh, R., & Peck, S. M. (1998). Treatment of sleep problems in a toddler: A replication of the faded bedtime with response cost protocol. Journal of Applied Behavior Analysis, 31, 127–129.

Association for Behavior Analysis. (1989). The right to effective education. Kalamazoo, MI: Author. Retrieved November 11, 2006, from www.abainternational.org/ABA/statements/treatment.asp.

Association for Behavior Analysis. (1990). Students’ right to effective education. Kalamazoo, MI: Author. Retrieved November 11, 2006, from www.abainternational.org/ABA/statements/treatment.asp.

Association for Behavior Analysis. (1993, 1997). Guidelines for the accreditation of programs in behavior analysis. Kalamazoo, MI: Author. Retrieved December 2, 2003, from www.abainternational.org/sub/behaviorfield/education/accreditation/index.asp.

Association for Persons with Severe Handicaps. (1987, May). Resolution on the cessation of intrusive interventions. TASH Newsletter, 5, 3.

Atwater, J. B., & Morris, E. K. (1988). Teachers’ instructions and children’s compliance in preschool classrooms: A descriptive analysis. Journal of Applied Behavior Analysis, 21, 157–167.

Axelrod, S. A. (1990). Myths that (mis)guide our profession. In A. C. Repp & N. N. Singh (Eds.), Perspectives on the use of nonaversive and aversive interventions for persons with developmental disabilities (pp. 59–72). Sycamore, IL: Sycamore.

Axelrod, S., Hall, R. V., Weis, L., & Rohrer, S. (1971). Use of self-imposed contingencies to reduce the frequency of smoking behavior. Paper presented at the Fifth Annual Meeting of the Association for the Advancement of Behavior Therapy, Washington, DC.

Ayllon, T., & Azrin, N. H. (1968). The token economy: A motivational system for therapy and rehabilitation. New York: Appleton-Century-Crofts.

Ayllon, T., & Michael, J. (1959). The psychiatric nurse as a behavioral engineer. Journal of the Experimental Analysis of Behavior, 2, 323–334.

Azrin, N. H. (1960). Sequential effects of punishment. Science, 131, 605–606.

Azrin, N. H., & Besalel, V. A. (1999). How to use positive practice, self-correction, and overcorrection (2nd ed.). Austin, TX: Pro-Ed.

Azrin, N. H., & Foxx, R. M. (1971). A rapid method of toilet training the institutionalized retarded. Journal of Applied Behavior Analysis, 4, 89–99.

Azrin, N. H., & Holz, W. C. (1966). Pun-ishment. In W. K. Honig (Ed.),Operant behavior: Areas of researchand application (pp. 380–447). NewYork: Appleton-Century-Crofts.

Azrin, N. H., & Nunn, R. G. (1973). Habit-reversal for habits and tics. BehaviorResearch and Therapy, 11, 619–628.

Azrin, N. H., & Powers, M. A. (1975). Elim-inating classroom disturbances of emo-tionally disturbed children by positivepractice procedures. Behavior Therapy,6, 525–534.

Azrin, N. H., & Wesolowski, M. D. (1974).Theft reversal: An overcorrection pro-cedure for eliminating stealing by re-tarded persons. Journal of AppliedBehavior Analysis, 7, 577–581.

Azrin, N. H., Holz, W. C., & Hake, D. C.(1963). Fixed-ratio punishment by in-tense noise. Journal of the Experimen-tal Analysis of Behavior, 6, 141–148.

Azrin, N. H., Hutchinson, R. R., & Hake, D.C. (1963). Pain-induced fighting in thesquirrel monkey. Journal of the Exper-imental Analysis of Behavior, 6, 620.

Azrin, N. H., Kaplan, S. J., & Foxx, R. M.(1973). Autism reversal: Eliminatingstereotyped self-stimulation of retardedindividuals. American Journal of Men-tal Deficiency, 78, 241–248.

Azrin, N. H., Nunn, R. G., & Frantz, S. E.(1980). Habit reversal vs. negative prac-tice treatment of nail biting. BehaviorResearch and Therapy, 18, 281–285.

Azrin, N. H., Rubin, H., O’Brien, F., Ayllon,T., & Roll, D. (1968). Behavioral engi-neering: Postural control by a portableoperant apparatus. Journal of AppliedBehavior Analysis, 1, 99–108.

Babyak, A. E., Luze, G. J., & Kamps, D. M.(2000). The good student game: Be-havior management for diverse class-rooms. Intervention in School andClinic, 35 (4), 216–223.

Bacon-Prue, A., Blount, R., Pickering, D., &Drabman, R. (1980). An evaluation ofthree litter control procedures: Trash re-ceptacles, paid workers, and the markeditem techniques. Journal of Applied Be-havior Analysis, 13, 165–170.

Baer, D. M. (1960). Escape and avoidanceresponse of preschool children to twoschedules of reinforcement withdrawal.Journal of the Experimental Analysisof Behavior, 3, 155–159.


Bibliography

Baer, D. M. (1961). Effect of withdrawal of positive reinforcement on an extinguishing response in young children. Child Development, 32, 67–74.

Baer, D. M. (1962). Laboratory control of thumbsucking by withdrawal and re-presentation of reinforcement. Journal of the Experimental Analysis of Behavior, 5, 525–528.

Baer, D. M. (1970). An age-irrelevant concept of development. Merrill-Palmer Quarterly, 16, 238–245.

Baer, D. M. (1971). Let's take another look at punishment. Psychology Today, 5, 5–32.

Baer, D. M. (1975). In the beginning, there was the response. In E. Ramp & G. Semb (Eds.), Behavior analysis: Areas of research and application (pp. 16–30). Upper Saddle River, NJ: Prentice Hall.

Baer, D. M. (1977a). Reviewer's comment: Just because it's reliable doesn't mean that you can use it. Journal of Applied Behavior Analysis, 10, 117–119.

Baer, D. M. (1977b). "Perhaps it would be better not to know everything." Journal of Applied Behavior Analysis, 10, 167–172.

Baer, D. M. (1981). A hung jury and a Scottish verdict: "Not proven." Analysis and Intervention in Developmental Disabilities, 1, 91–97.

Baer, D. M. (1982). Applied behavior analysis. In G. T. Wilson & C. M. Franks (Eds.), Contemporary behavior therapy: Conceptual and empirical foundations (pp. 277–309). New York: Guilford Press.

Baer, D. M. (1985). [Symposium discussant]. In C. E. Naumann (Chair), Developing response classes: Why reinvent the wheel? Symposium conducted at the Annual Conference of the Association for Behavior Analysis, Columbus, OH.

Baer, D. M. (1987). Weak contingencies, strong contingencies, and too many behaviors to change. Journal of Applied Behavior Analysis, 20, 335–337.

Baer, D. M. (1991). Tacting "to a fault." Journal of Applied Behavior Analysis, 24, 429–431.

Baer, D. M. (1999). How to plan for generalization (2nd ed.). Austin, TX: Pro-Ed.

Baer, D. M. (2005). Letters to a lawyer. In W. L. Heward, T. E. Heron, N. A. Neef, S. M. Peterson, D. M. Sainato, G. Cartledge, R. Gardner, III, L. D. Peterson, S. B. Hersh, & J. C. Dardig (Eds.), Focus on behavior analysis in education: Achievements, challenges, and opportunities (pp. 3–30). Upper Saddle River, NJ: Merrill/Prentice Hall.

Baer, D. M., & Bushell, D., Jr. (1981). The future of behavior analysis in the schools? Consider its recent past, and then ask a different question. School Psychology Review, 10(2), 259–270.

Baer, D. M., & Fowler, S. A. (1984). How should we measure the potential of self-control procedures for generalized educational outcomes? In W. L. Heward, T. E. Heron, D. S. Hill, & J. Trap-Porter (Eds.), Focus on behavior analysis in education (pp. 145–161). Columbus, OH: Charles E. Merrill.

Baer, D. M., & Richards, H. C. (1980). An interdependent group-oriented contingency system for improving academic performance. School Psychology Review, 9, 190–193.

Baer, D. M., & Schwartz, I. S. (1991). If reliance on epidemiology were to become epidemic, we would need to assess its social validity. Journal of Applied Behavior Analysis, 24, 231–234.

Baer, D. M., & Sherman, J. A. (1964). Reinforcement control of generalized imitation in young children. Journal of Experimental Child Psychology, 1, 37–49.

Baer, D. M., & Wolf, M. M. (1970a). Recent examples of behavior modification in preschool settings. In C. Neuringer & J. L. Michael (Eds.), Behavior modification in clinical psychology (pp. 10–55). Upper Saddle River, NJ: Prentice Hall.

Baer, D. M., & Wolf, M. M. (1970b). The entry into natural communities of reinforcement. In R. Ulrich, T. Stachnik, & J. Mabry (Eds.), Control of human behavior (Vol. 2, pp. 319–324). Glenview, IL: Scott, Foresman.

Baer, D. M., Peterson, R. F., & Sherman, J. A. (1967). The development of imitation by reinforcing behavioral similarity to a model. Journal of the Experimental Analysis of Behavior, 10, 405–416.

Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91–97.

Baer, D. M., Wolf, M. M., & Risley, T. R. (1987). Some still-current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 20, 313–327.

Baer, R. A. (1987). Effects of caffeine on classroom behavior, sustained attention, and a memory task in preschool children. Journal of Applied Behavior Analysis, 20, 225–234.

Baer, R. A., Blount, R. L., Detrich, R., & Stokes, T. F. (1987). Using intermittent reinforcement to program maintenance of verbal/nonverbal correspondence. Journal of Applied Behavior Analysis, 20, 179–184.

Baer, R. A., Tishelman, A. C., Degler, J. D., Osnes, P. G., & Stokes, T. F. (1992). Effects of self- vs. experimenter-selection of rewards on classroom behavior in young children. Education and Treatment of Children, 15, 1–14.

Baer, R. A., Williams, J. A., Osnes, P. G., & Stokes, T. F. (1984). Delayed reinforcement as an indiscriminable contingency in verbal/nonverbal correspondence training. Journal of Applied Behavior Analysis, 17, 429–440.

Bailey, D. B. (1984). Effects of lines of progress and semilogarithmic charts on ratings of charted data. Journal of Applied Behavior Analysis, 17, 359–365.

Bailey, D. B., Jr., & Wolery, M. (1992). Teaching infants and preschoolers with disabilities (2nd ed.). Upper Saddle River, NJ: Merrill/Prentice Hall.

Bailey, J. S. (2000). A futurist perspective for applied behavior analysis. In J. Austin & J. E. Carr (Eds.), Handbook of applied behavior analysis (pp. 473–488). Reno, NV: Context Press.

Bailey, J., & Meyerson, L. (1969). Vibration as a reinforcer with a profoundly retarded child. Journal of Applied Behavior Analysis, 2, 135–137.

Bailey, J. S., & Pyles, D. A. M. (1989). Behavioral diagnostics. In E. Cipani (Ed.), The treatment of severe behavior disorders: Behavior analysis approach (pp. 85–107). Washington, DC: American Association on Mental Retardation.

Bailey, S. L., & Lessen, E. I. (1984). An analysis of target behaviors in education: Applied but how useful? In W. L. Heward, T. E. Heron, D. S. Hill, & J. Trap-Porter (Eds.), Focus on behavior analysis in education (pp. 162–176). Columbus, OH: Charles E. Merrill.

Ballard, K. D., & Glynn, T. (1975). Behavioral self-management in story writing with elementary school children. Journal of Applied Behavior Analysis, 8, 387–398.


Bandura, A. (1969). Principles of behavior modification. New York: Holt, Rinehart & Winston.

Bandura, A. (1971). Vicarious and self-reinforcement processes. In R. Glaser (Ed.), The nature of reinforcement. New York: Academic Press.

Bannerman, D. J., Sheldon, J. B., Sherman, J. A., & Harchik, A. E. (1990). Balancing the right to habilitation with the right to personal liberties: The rights of people with developmental disabilities to eat too many doughnuts and take a nap. Journal of Applied Behavior Analysis, 23, 79–89.

Barbetta, P. M., Heron, T. E., & Heward, W. L. (1993). Effects of active student response during error correction on the acquisition, maintenance, and generalization of sight words by students with developmental disabilities. Journal of Applied Behavior Analysis, 26, 111–119.

Barker, M. R., Bailey, J. S., & Lee, N. (2004). The impact of verbal prompts on child safety-belt use in shopping carts. Journal of Applied Behavior Analysis, 37, 527–530.

Barkley, R., Copeland, A., & Sivage, C. (1980). A self-control classroom for hyperactive children. Journal of Autism and Developmental Disorders, 10, 75–89.

Barlow, D. H., & Hayes, S. C. (1979). Alternating treatments design: One strategy for comparing the effects of two treatments in a single subject. Journal of Applied Behavior Analysis, 12, 199–210.

Baron, A., & Galizio, M. (2005). Positive and negative reinforcement: Should the distinction be preserved? The Behavior Analyst, 28, 85–98.

Baron, A., & Galizio, M. (2006). The distinction between positive and negative reinforcement: Use with care. The Behavior Analyst, 29, 141–151.

Barrett, B. H., Beck, R., Binder, C., Cook, D. A., Engelmann, S., Greer, R. D., Kyrklund, S. J., Johnson, K. R., Maloney, M., McCorkle, N., Vargas, J. S., & Watkins, C. L. (1991). The right to effective education. The Behavior Analyst, 14(1), 79–82.

Barrish, H. H., Saunders, M., & Wolf, M. M. (1969). Good behavior game: Effects of individual contingencies for group consequences on disruptive behavior in a classroom. Journal of Applied Behavior Analysis, 2, 119–124.

Barry, A. K. (1998). English grammar: Language as human behavior. Upper Saddle River, NJ: Prentice Hall.

Barton, E. S., Guess, D., Garcia, E., & Baer, D. M. (1970). Improvement of retardates' mealtime behaviors by timeout procedures using multiple baseline techniques. Journal of Applied Behavior Analysis, 3, 77–84.

Barton, L. E., Brulle, A. R., & Repp, A. C. (1986). Maintenance of therapeutic change by momentary DRO. Journal of Applied Behavior Analysis, 19, 277–282.

Barton-Arwood, S. M., Wehby, J. H., Gunter, P. L., & Lane, K. L. (2003). Functional behavior assessment rating scales: Intrarater reliability with students with emotional or behavioral disorders. Behavioral Disorders, 28, 386–400.

Baum, W. M. (1994). Understanding behaviorism: Science, behavior, and culture. New York: HarperCollins.

Baum, W. M. (2005). Understanding behaviorism: Science, behavior, and culture (2nd ed.). Malden, MA: Blackwell Publishing.

Bay-Hinitz, A. K., Peterson, R. F., & Quilitch, H. R. (1994). Cooperative games: A way to modify aggressive and cooperative behaviors in young children. Journal of Applied Behavior Analysis, 27, 435–446.

Becker, W. C., & Engelmann, S. E. (1978). Systems for basic instruction: Theory and applications. In A. Catania & T. Brigham (Eds.), Handbook of applied behavior analysis: Social and instructional processes. New York: Irvington.

Becker, W. C., Engelmann, S., & Thomas, D. R. (1975). Teaching 2: Cognitive learning and instruction. Chicago: Science Research Associates.

Behavior Analyst Certification Board. (2001). Guidelines for responsible conduct for behavior analysts. Tallahassee, FL: Author. Retrieved November 11, 2003, from http://bacb.com/consum_frame.html.

Behavior Analyst Certification Board. (2005). Behavior analyst task list, third edition. Tallahassee, FL: Author. Retrieved November 11, 2003, from http://bacb.com/consum_frame.html.

Belfiore, P. J., Skinner, C. H., & Ferkis, M. A. (1995). Effects of response and trial repetition on sight-word training for students with learning disabilities. Journal of Applied Behavior Analysis, 28, 347–348.

Bell, K. E., Young, K. R., Salzberg, C. L., & West, R. P. (1991). High school driver education using peer tutors, direct instruction, and precision teaching. Journal of Applied Behavior Analysis, 24, 45–51.

Bellack, A. S., & Hersen, M. (1977). Behavior modification: An introductory textbook. New York: Oxford University Press.

Bellack, A. S., & Schwartz, J. S. (1976). Assessment for self-control programs. In M. Hersen & A. S. Bellack (Eds.), Behavioral assessment: A practical handbook (pp. 111–142). New York: Pergamon Press.

Bellamy, G. T., Horner, R. H., & Inman, D. P. (1979). Vocational habilitation of severely retarded adults. Austin, TX: Pro-Ed.

Bender, W. N., & Mathes, M. Y. (1995). Students with ADHD in the inclusive classroom: A hierarchical approach to strategy selection. Intervention in School & Clinic, 30(4), 226–234.

Bennett, K., & Cavanaugh, R. A. (1998). Effects of immediate self-correction, delayed self-correction, and no correction on the acquisition and maintenance of multiplication facts by a fourth-grade student with learning disabilities. Journal of Applied Behavior Analysis, 31, 303–306.

Bicard, D. F., & Neef, N. A. (2002). Effects of strategic versus tactical instructions on adaptation to changing contingencies in children with ADHD. Journal of Applied Behavior Analysis, 35, 375–389.

Bijou, S. W. (1955). A systematic approach to an experimental analysis of young children. Child Development, 26, 161–168.

Bijou, S. W. (1957). Patterns of reinforcement and resistance to extinction in young children. Child Development, 28, 47–54.

Bijou, S. W. (1958). Operant extinction after fixed-interval schedules with young children. Journal of the Experimental Analysis of Behavior, 1, 25–29.

Bijou, S. W., & Baer, D. M. (1961). Child development: Vol. 1. A systematic and empirical theory. New York: Appleton-Century-Crofts.

Bijou, S. W., & Baer, D. M. (1965). Child development: Vol. 2. Universal stage of infancy. New York: Appleton-Century-Crofts.

Bijou, S. W., Birnbrauer, J. S., Kidder, J. D., & Tague, C. (1966). Programmed instruction as an approach to teaching of reading, writing, and arithmetic to retarded children. The Psychological Record, 16, 505–522.

Bijou, S. W., Peterson, R. F., & Ault, M. H. (1968). A method to integrate descriptive and experimental field studies at the level of data and empirical concepts. Journal of Applied Behavior Analysis, 1, 175–191.

Billings, D. C., & Wasik, B. H. (1985). Self-instructional training with preschoolers: An attempt to replicate. Journal of Applied Behavior Analysis, 18, 61–67.

Billingsley, F., White, D. R., & Munson, R. (1980). Procedural reliability: A rationale and an example. Behavioral Assessment, 2, 247–256.

Binder, C. (1996). Behavioral fluency: Evolution of a new paradigm. The Behavior Analyst, 19, 163–197.

Binder, L. M., Dixon, M. R., & Ghezzi, P. M. (2000). A procedure to teach self-control to children with attention deficit hyperactivity disorder. Journal of Applied Behavior Analysis, 33, 233–237.

Birnbrauer, J. S. (1979). Applied behavior analysis, service, and the acquisition of knowledge. The Behavior Analyst, 2, 15–21.

Birnbrauer, J. S. (1981). External validity and experimental investigation of individual behavior. Analysis and Intervention in Developmental Disabilities, 1, 117–132.

Birnbrauer, J. S., Wolf, M. M., Kidder, J. D., & Tague, C. E. (1965). Classroom behavior of retarded pupils with token reinforcement. Journal of Experimental Child Psychology, 2, 219–235.

Bishop, B. R., & Stumphauzer, J. S. (1973). Behavior therapy of thumb sucking in children: A punishment (time out) and generalization effect—what's a mother to do? Psychological Reports, 33, 939–944.

Bjork, D. W. (1997). B. F. Skinner: A life. Washington, DC: American Psychological Association.

Blew, P. A., Schwartz, I. S., & Luce, S. C. (1985). Teaching functional community skills to autistic children using nonhandicapped peer tutors. Journal of Applied Behavior Analysis, 18, 337–342.

Blick, D. W., & Test, D. W. (1987). Effects of self-recording on high school students' on-task behavior. Learning Disability Quarterly, 10, 203–213.

Bloom, L. (1970). Language development: Form and function in emerging grammars. Cambridge, MA: MIT Press.

Bloom, M., Fischer, J., & Orme, J. G. (2003). Evaluating practice: Guidelines for the accountable professional (4th ed.). Boston: Allyn & Bacon.

Bolin, E. P., & Goldberg, G. M. (1979). Behavioral psychology and the Bible: General and specific considerations. Journal of Psychology and Theology, 7, 167–175.

Bolstad, O., & Johnson, S. (1972). Self-regulation in the modification of disruptive classroom behavior. Journal of Applied Behavior Analysis, 5, 443–454.

Bondy, A., & Frost, L. (2002). The Picture Exchange Communication System. Newark, DE: Pyramid Educational Products.

Boring, E. G. (1941). Statistical frequencies as dynamic equilibria. Psychological Review, 48, 279–301.

Bornstein, P. H., & Quevillon, R. P. (1976). The effects of a self-instructional package on overactive preschool boys. Journal of Applied Behavior Analysis, 9, 179–188.

Bosch, S., & Fuqua, W. R. (2001). Behavioral cusps: A model for selecting target behaviors. Journal of Applied Behavior Analysis, 34, 123–125.

Bourret, J., Vollmer, T. R., & Rapp, J. T. (2004). Evaluation of a vocal mand assessment and vocal mand procedures. Journal of Applied Behavior Analysis, 37, 129–144.

Bowers, F. E., Woods, D. W., Carlyon, W. D., & Friman, P. C. (2000). Using positive peer reporting to improve the social interactions and acceptance of socially isolated adolescents in residential care: A systematic replication. Journal of Applied Behavior Analysis, 33, 239–242.

Bowman, L. G., Piazza, C. C., Fisher, W., Hagopian, L. P., & Kogan, J. S. (1997). Assessment of preference for varied versus constant reinforcement. Journal of Applied Behavior Analysis, 30, 451–458.

Boyajian, A. E., DuPaul, G. J., Wartel Handler, M., Eckert, T. L., & McGoey, K. E. (2001). The use of classroom-based brief functional analyses with preschoolers at risk for attention deficit hyperactivity disorder. School Psychology Review, 30, 278–293.

Boyce, T. E., & Geller, E. S. (2001). A technology to measure multiple driving behaviors without self-report or participant reactivity. Journal of Applied Behavior Analysis, 34, 39–55.

Boyle, J. R., & Hughes, C. A. (1994). Effects of self-monitoring and subsequent fading of external prompts on the on-task behavior and task productivity of elementary students with moderate mental retardation. Journal of Behavioral Education, 4, 439–457.

Braam, S. J., & Poling, A. (1982). Development of intraverbal behavior in mentally retarded individuals through transfer of stimulus control procedures: Classification of verbal responses. Applied Research in Mental Retardation, 4, 279–302.

Braine, M. D. S. (1963). The ontogeny of English phrase structure: The first phrase. Language, 39, 1–13.

Brame, P. B. (2001). Making sustained silent reading (SSR) more effective: Effects of a story fact recall game on students' off-task behavior during SSR and retention of story facts. Unpublished doctoral dissertation, The Ohio State University, Columbus, OH.

Brame, P., Bicard, S. C., Heward, W. L., & Greulich, H. (2007). Using an indiscriminable group contingency to "wake up" sustained silent reading: Effects on off-task behavior and recall of story facts. Manuscript submitted for publication review.

Brantley, D. C., & Webster, R. E. (1993). Use of an independent group contingency management system in a regular classroom setting. Psychology in the Schools, 30, 60–66.

Brantner, J. P., & Doherty, M. A. (1983). A review of timeout: A conceptual and methodological analysis. In S. Axelrod & J. Apsche (Eds.), The effects of punishment on human behavior (pp. 87–132). New York: Academic Press.

Brethower, D. C., & Reynolds, G. S. (1962). A facilitative effect of punishment on unpunished behavior. Journal of the Experimental Analysis of Behavior, 5, 191–199.

Briggs, A., Alberto, P., Sharpton, W., Berlin, K., McKinley, C., & Ritts, C. (1990). Generalized use of a self-operated audio prompt system. Education and Training in Mental Retardation, 25, 381–389.

Brigham, T. A. (1980). Self-control revisited: Or why doesn't anyone read Skinner anymore? The Behavior Analyst, 3, 25–33.

Brigham, T. A. (1983). Self-management: A radical behavioral perspective. In P. Karoly & F. H. Kanfer (Eds.), Self-management and behavior change: From theory to practice (pp. 32–59). New York: Pergamon Press.

Brobst, B., & Ward, P. (2002). Effects of public posting, goal setting, and oral feedback on the skills of female soccer players. Journal of Applied Behavior Analysis, 35, 247–257.

Broden, M., Hall, R. V., & Mitts, B. (1971). The effect of self-recording on the classroom behavior of two eighth-grade students. Journal of Applied Behavior Analysis, 4, 191–199.

Brothers, K. J., Krantz, P. J., & McClannahan, L. E. (1994). Office paper recycling: A function of container proximity. Journal of Applied Behavior Analysis, 27, 153–160.

Browder, D. M. (2001). Curriculum and assessment for students with moderate and severe disabilities. New York: Guilford Press.

Brown, K. A., Wacker, D. P., Derby, K. M., Peck, S. M., Richman, D. M., Sasso, G. M., Knutson, C. L., & Harding, J. W. (2000). Evaluating the effects of functional communication training in the presence and absence of establishing operations. Journal of Applied Behavior Analysis, 33, 53–71.

Brown, R. (1973). A first language: The early stages. Cambridge, MA: Harvard University Press.

Brown, R., Cazden, C., & Bellugi, U. (1969). The child's grammar from I to III. In J. P. Hill (Ed.), The 1967 symposium on child psychology (pp. 28–73). Minneapolis: University of Minnesota Press.

Brown, S. A., Dunne, J. D., & Cooper, J. O. (1996). Immediate retelling's effect on student retention. Education and Treatment of Children, 19, 387–407.

Browning, R. M. (1967). A same-subject design for simultaneous comparison of three reinforcement contingencies. Behavior Research and Therapy, 5, 237–243.

Budd, K. S., & Baer, D. M. (1976). Behavior modification and the law: Implications of recent judicial decisions. Journal of Psychiatry and Law, 4, 171–244.

Burgio, L. D., Whitman, T. L., & Johnson, M. R. (1980). A self-instructional package for increasing attending behavior in educable mentally retarded children. Journal of Applied Behavior Analysis, 13, 443–459.

Bushell, D., Jr., & Baer, D. M. (1994). Measurably superior instruction means close, continual contact with the relevant outcome data. Revolutionary! In R. Gardner, III, D. M. Sainato, J. O. Cooper, T. E. Heron, W. L. Heward, J. Eshleman, & T. A. Grossi (Eds.), Behavior analysis in education: Focus on measurably superior instruction (pp. 3–10). Pacific Grove, CA: Brooks/Cole.

Byrd, M. R., Richards, D. F., Hove, G., & Friman, P. C. (2002). Treatment of early onset hair pulling as a simple habit. Behavior Modification, 26(3), 400–411.

Byrne, T., LeSage, M. G., & Poling, A. (1997). Effects of chlorpromazine on rats' acquisition of lever-press responding with immediate and delayed reinforcement. Pharmacology Biochemistry and Behavior, 58, 31–35.

Caldwell, N. K., Wolery, M., Werts, M. G., & Caldwell, Y. (1996). Embedding instructive feedback into teacher-student interactions during independent seatwork. Journal of Behavioral Education, 6, 459–480.

Cameron, J. (2005). The detrimental effects of reward hypothesis: Persistence of a view in the face of disconfirming evidence. In W. L. Heward, T. E. Heron, N. A. Neef, S. M. Peterson, D. M. Sainato, G. Cartledge, R. Gardner, III, L. D. Peterson, S. B. Hersh, & J. C. Dardig (Eds.), Focus on behavior analysis in education: Achievements, challenges, and opportunities (pp. 304–315). Upper Saddle River, NJ: Merrill/Prentice Hall.

Cammilleri, A. P., & Hanley, G. P. (2005). Use of a lag differential reinforcement contingency to increase varied selections of classroom activities. Journal of Applied Behavior Analysis, 38, 111–115.

Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.

Campbell, R. C., & Stremel-Campbell, K. (1982). Programming "loose training" as a strategy to facilitate language generalization. Journal of Applied Behavior Analysis, 15, 295–301.

Carr, E. G., & Durand, V. M. (1985). Reducing behavior problems through functional communication training. Journal of Applied Behavior Analysis, 18, 111–126.

Carr, E. G., & Kologinsky, E. (1983). Acquisition of sign language by autistic children II: Spontaneity and generalization effects. Journal of Applied Behavior Analysis, 16, 297–314.

Carr, E. G., & Lovaas, O. I. (1983). Contingent electric shock as a treatment for severe behavior problems. In S. Axelrod & J. Apsche (Eds.), The effects of punishment on human behavior (pp. 221–245). New York: Academic Press.

Carr, J. E., & Burkholder, E. O. (1998). Creating single-subject design graphs with Microsoft Excel. Journal of Applied Behavior Analysis, 31(2), 245–251.

Carr, J. E., Kellum, K. K., & Chong, I. M. (2001). The reductive effects of noncontingent reinforcement: Fixed-time versus variable-time schedules. Journal of Applied Behavior Analysis, 34, 505–509.

Carr, J. E., Nicolson, A. C., & Higbee, T. S. (2000). Evaluation of a brief multiple-stimulus preference assessment in a naturalistic context. Journal of Applied Behavior Analysis, 33, 353–357.

Carroll, R. J., & Hesse, B. E. (1987). The effects of alternating mand and tact training on the acquisition of tacts. The Analysis of Verbal Behavior, 5, 55–65.

Carter, J. F. (1993). Self-management: Education's ultimate goal. Teaching Exceptional Children, 25(3), 28–32.

Carter, M., & Grunsell, J. (2001). The behavior chain interruption strategy: A review of research and discussion of future directions. Journal of the Association for Persons with Severe Handicaps, 26(1), 37–49.

Carton, J. S., & Schweitzer, J. B. (1996). Use of a token economy to increase compliance during hemodialysis. Journal of Applied Behavior Analysis, 29, 111–113.


Catania, A. C. (1972). Chomsky's formal analysis of natural languages: A behavioral translation. Behaviorism, 1, 1–15.

Catania, A. C. (1975). The myth of self-reinforcement. Behaviorism, 3, 192–199.

Catania, A. C. (1976). Self-reinforcement revisited. Behaviorism, 4, 157–162.

Catania, A. C. (1992). B. F. Skinner, organism. American Psychologist, 47, 1521–1530.

Catania, A. C. (1998). Learning (4th ed.). Upper Saddle River, NJ: Prentice Hall.

Catania, A. C., & Harnad, S. (Eds.). (1988). The selection of behavior: The operant behaviorism of B. F. Skinner: Comments and controversies. New York: Cambridge University Press.

Catania, A. C., & Hineline, P. N. (Eds.). (1996). Variations and selections: An anthology of reviews from the Journal of the Experimental Analysis of Behavior. Bloomington, IN: Society for the Experimental Analysis of Behavior.

Cautela, J. R. (1971). Covert conditioning. In A. Jacobs & L. B. Sachs (Eds.), The psychology of private events: Perspective on covert response systems (pp. 109–130). New York: Academic Press.

Cavalier, A., Ferretti, R., & Hodges, A. (1997). Self-management within a classroom token economy for students with learning disabilities. Research in Developmental Disabilities, 18(3), 167–178.

Cavanaugh, R. A., Heward, W. L., & Donelson, F. (1996). Effects of response cards during lesson closure on the academic performance of secondary students in an earth science course. Journal of Applied Behavior Analysis, 29, 403–406.

Chadsey-Rusch, J., Drasgow, E., Reinoehl, B., Halle, J., & Collet-Klingenberg, L. (1993). Using general-case instruction to teach spontaneous and generalized requests for assistance to learners with severe disabilities. Journal of the Association for Persons with Severe Handicaps, 18, 177–187.

Charlop, M. H., Burgio, L. D., Iwata, B. A., & Ivancic, M. T. (1988). Stimulus variation as a means of enhancing punishment effects. Journal of Applied Behavior Analysis, 21, 89–95.

Charlop-Christy, M. H., & Carpenter, M. H. (2000). Modified incidental teaching sessions: A procedure for parents to increase spontaneous speech in their children with autism. Journal of Positive Behavior Interventions, 2, 98–112.

Charlop-Christy, M. H., & Haymes, L. K. (1998). Using objects of obsession as token reinforcers for children with autism. Journal of Autism and Developmental Disorders, 28(3), 189–198.

Chase, P. N. (2006). Teaching the distinction between positive and negative reinforcement. The Behavior Analyst, 29, 113–115.

Chase, P. N., & Danforth, J. S. (1991). The role of rules in concept learning. In L. J. Hayes & P. N. Chase (Eds.), Dialogues on verbal behavior (pp. 205–225). Reno, NV: Context Press.

Chase, P., Johnson, K., & Sulzer-Azaroff, B. (1985). Verbal relations within instruction: Are there subclasses of the intraverbal? Journal of the Experimental Analysis of Behavior, 43, 301–314.

Chiang, S. J., Iwata, B. A., & Dorsey, M. F. (1979). Elimination of disruptive bus riding behavior via token reinforcement on a "distance-based" schedule. Education and Treatment of Children, 2, 101–109.

Chiesa, M. (1994). Radical behaviorism: The philosophy and the science. Boston: Authors Cooperative.

Chomsky, N. (1957). Syntactic structures. The Hague: Mouton and Company.

Chomsky, N. (1959). Review of B. F. Skinner's Verbal behavior. Language, 35, 26–58.

Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.

Christian, L., & Poling, A. (1997). Using self-management procedures to improve the productivity of adults with developmental disabilities in a competitive employment setting. Journal of Applied Behavior Analysis, 30, 169–172.

Christle, C. A., & Schuster, J. W. (2003). The effects of using response cards on student participation, academic achievement, and on-task behavior during whole-class, math instruction. Journal of Behavioral Education, 12, 147–165.

Ciccone, F. J., Graff, R. B., & Ahearn, W. H. (2006). Stimulus preference assessments and the utility of a moderate category. Behavioral Interventions, 21, 59–63.

Cipani, E. C., & Spooner, F. (1994). Curricular and instructional approaches for persons with severe disabilities. Boston: Allyn & Bacon.

Cipani, E., Brendlinger, J., McDowell, L., & Usher, S. (1991). Continuous vs. intermittent punishment: A case study. Journal of Developmental and Physical Disabilities, 3, 147–156.

Cipani, E., Robinson, S., & Toro, H. (2003). Ethical and risk management issues in the practice of ABA. Paper presented at the annual conference of the Florida Association for Behavior Analysis, St. Petersburg.

Clark, H. B., Rowbury, T., Baer, A., & Baer, D. M. (1973). Time out as a punishing stimulus in continuous and intermittent schedules. Journal of Applied Behavior Analysis, 6, 443–455.

Codding, R. S., Feinberg, A. B., Dunn, E. K., & Pace, G. M. (2005). Effects of immediate performance feedback on implementation of behavior support plans. Journal of Applied Behavior Analysis, 38, 205–219.

Cohen, J. A. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.

Cohen-Almeida, D., Graff, R. B., & Ahearn, W. H. (2000). A comparison of verbal and tangible stimulus preference assessments. Journal of Applied Behavior Analysis, 33, 329–334.

Cole, G. A., Montgomery, R. W., Wilson, K. M., & Milan, M. A. (2000). Parametric analysis of overcorrection duration effects: Is longer really better than shorter? Behavior Modification, 24, 359–378.

Coleman-Martin, M. B., & Wolff Heller, K. (2004). Using a modified constant prompt-delay procedure to teach spelling to students with physical disabilities. Journal of Applied Behavior Analysis, 37, 469–480.

Conaghan, B. P., Singh, N. N., Moe, T. L., Landrum, T. J., & Ellis, C. R. (1992). Acquisition and generalization of manual signs by hearing-impaired adults with mental retardation. Journal of Behavioral Education, 2, 175–203.

Connell, M. C., Carta, J. J., & Baer, D. M. (1993). Programming generalization of in-class transition skills: Teaching preschoolers with developmental delays to self-assess and recruit contingent teacher praise. Journal of Applied Behavior Analysis, 26, 345–352.

Conners, J., Iwata, B. A., Kahng, S. W., Hanley, G. P., Worsdell, A. S., & Thompson, R. H. (2000). Differential responding in the presence and absence of discriminative stimuli during multielement functional analyses. Journal of Applied Behavior Analysis, 33, 299–308.

Conroy, M. A., Fox, J. J., Bucklin, A., & Good, W. (1996). An analysis of the reliability and stability of the Motivation Assessment Scale in assessing the challenging behaviors of persons with developmental disabilities. Education and Training in Mental Retardation and Developmental Disabilities, 31, 243–250.

Cooke, N. L. (1984). Misrepresentations of the behavioral model in preservice teacher education textbooks. In W. L. Heward, T. E. Heron, D. S. Hill, & J. Trap-Porter (Eds.), Focus on behavior analysis in education (pp. 197–217). Columbus, OH: Charles E. Merrill.

Cooper, J. O. (1981). Measuring behavior (2nd ed.). Columbus, OH: Charles E. Merrill.

Cooper, J. O. (2005). Applied research: The separation of applied behavior analysis and precision teaching. In W. L. Heward, T. E. Heron, N. A. Neef, S. M. Peterson, D. M. Sainato, G. Cartledge, R. Gardner III, L. D. Peterson, S. B. Hersh, & J. C. Dardig (Eds.), Focus on behavior analysis in education: Achievements, challenges, and opportunities (pp. 295–303). Upper Saddle River, NJ: Prentice Hall/Merrill.

Cooper, J. O., Kubina, R., & Malanga, P. (1998). Six procedures for showing standard celeration charts. Journal of Precision Teaching & Celeration, 15(2), 58–76.

Cooper, K. J., & Browder, D. M. (1997). The use of a personal trainer to enhance participation of older adults with severe disabilities in a community water exercise class. Journal of Behavioral Education, 7, 421–434.

Cooper, L. J., Wacker, D. P., McComas, J. J., Brown, K., Peck, S. M., Richman, D., Drew, J., Frischmeyer, P., & Millard, T. (1995). Use of component analysis to identify active variables in treatment packages for children with feeding disorders. Journal of Applied Behavior Analysis, 28, 139–153.

Cooper, L. J., Wacker, D. P., Thursby, D., Plagmann, L. A., Harding, J., Millard, T., & Derby, M. (1992). Analysis of the effects of task preferences, task demands, and adult attention on child behavior in outpatient and classroom settings. Journal of Applied Behavior Analysis, 25, 823–840.

Copeland, R. E., Brown, R. E., & Hall, R. V. (1974). The effects of principal-implemented techniques on the behavior of pupils. Journal of Applied Behavior Analysis, 7, 77–86.

Corey, G., Corey, M. S., & Callanan, P. (1993). Issues and ethics in the helping professions (4th ed.). Pacific Grove, CA: Brooks/Cole.

Costenbader, V., & Reading-Brown, M. (1995). Isolation timeout used with students with emotional disturbance. Exceptional Children, 61(4), 353–364.

Cowdery, G., Iwata, B. A., & Pace, G. M. (1990). Effects and side effects of DRO as treatment for self-injurious behavior. Journal of Applied Behavior Analysis, 23, 497–506.

Cox, B. S., Cox, A. B., & Cox, D. J. (2000). Motivating signage prompts safety belt use among drivers exiting senior communities. Journal of Applied Behavior Analysis, 33, 635–638.

Craft, M. A., Alber, S. R., & Heward, W. L. (1998). Teaching elementary students with developmental disabilities to recruit teacher attention in a general education classroom: Effects on teacher praise and academic productivity. Journal of Applied Behavior Analysis, 31, 399–415.

Crawford, J., Brockel, B., Schauss, S., & Miltenberger, R. G. (1992). A comparison of methods for the functional assessment of stereotypic behavior. Journal of the Association for Persons with Severe Handicaps, 17, 77–86.

Critchfield, T. S. (1993). Behavioral pharmacology and verbal behavior: Diazepam effects on verbal self-reports. The Analysis of Verbal Behavior, 11, 43–54.

Critchfield, T. S. (1999). An unexpected effect of recording frequency in reactive self-monitoring. Journal of Applied Behavior Analysis, 32, 389–391.

Critchfield, T. S., & Kollins, S. H. (2001). Temporal discounting: Basic research and the analysis of socially important behavior. Journal of Applied Behavior Analysis, 34, 101–122.

Critchfield, T. S., & Lattal, K. A. (1993). Acquisition of a spatially defined operant with delayed reinforcement. Journal of the Experimental Analysis of Behavior, 59, 373–387.

Critchfield, T. S., & Vargas, E. A. (1991). Self-recording, instructions, and public self-graphing: Effects on swimming in the absence of coach verbal interaction. Behavior Modification, 15, 95–112.

Critchfield, T. S., Tucker, J. A., & Vuchinich, R. E. (1998). Self-report methods. In K. A. Lattal & M. Perone (Eds.), Handbook of research methods in human operant behavior (pp. 435–470). New York: Plenum.

Cromwell, O. (1650, August 3). Letter to the general assembly of the Church of Scotland. Available online: http://en.wikiquote.org/wiki/Oliver_Cromwell

Crosbie, J. (1999). Statistical inference in behavior analysis: Useful friend. The Behavior Analyst, 22, 105–108.

Cushing, L. S., & Kennedy, C. H. (1997). Academic effects of providing peer support in general education classrooms on students without disabilities. Journal of Applied Behavior Analysis, 30, 139–151.

Cuvo, A. J. (1979). Multiple-baseline design in instructional research: Pitfalls of measurement and procedural advantages. American Journal of Mental Deficiency, 84, 219–228.

Cuvo, A. J. (2000). Development and function of consequence classes in operant behavior. The Behavior Analyst, 23, 57–68.

Cuvo, A. J. (2003). On stimulus generalization and stimulus classes. Journal of Behavioral Education, 12, 77–83.

Cuvo, A. J., Lerch, L. J., Leurquin, D. A., Gaffaney, T. J., & Poppen, R. L. (1998). Response allocation to concurrent fixed-ratio reinforcement schedules with work requirements by adults with mental retardation and typical preschool children. Journal of Applied Behavior Analysis, 31, 43–63.

Dalton, T., Martella, R., & Marchand-Martella, N. E. (1999). The effects of a self-management program in reducing off-task behavior. Journal of Behavioral Education, 9, 157–176.

Daly, P. M., & Ranalli, P. (2003). Using countoons to teach self-monitoring skills. Teaching Exceptional Children, 35(5), 30–35.

Dams, P-C. (2002). A little night music. In R. W. Malott & H. Harrison, I’ll stop procrastinating when I get around to it: Plus other cool ways to succeed in school and life using behavior analysis to get your act together (pp. 7-3–7-4). Kalamazoo, MI: Department of Psychology, Western Michigan University.

Dardig, J. C., & Heward, W. L. (1976). Sign here: A contracting book for children and their families. Kalamazoo, MI: Behaviordelia.

Dardig, J. C., & Heward, W. L. (1981a). A systematic procedure for prioritizing IEP goals. The Directive Teacher, 3, 6–8.

Dardig, J. C., & Heward, W. L. (1981b). Sign here: A contracting book for children and their parents (2nd ed.). Bridgewater, NJ: Fournies.

Darwin, C. (1872/1958). The origin of species (6th ed.). New York: Mentor. (Original work published 1872)

Davis, C. A., & Reichle, J. (1996). Variant and invariant high-probability requests: Increasing appropriate behaviors in children with emotional-behavioral disorders. Journal of Applied Behavior Analysis, 29, 471–482.

Davis, C. A., Brady, M. P., Williams, R. E., & Burta, M. (1992). The effects of self-operated auditory prompting tapes on the performance fluency of persons with severe mental retardation. Education and Training in Mental Retardation, 27, 39–50.

Davis, L. L., & O’Neill, R. E. (2004). Use of response cards with a group of students with learning disabilities including those for whom English is a second language. Journal of Applied Behavior Analysis, 37, 219–222.

Davis, P. K., & Chittum, R. (1994). A group-oriented contingency to increase leisure activities of adults with traumatic brain injury. Journal of Applied Behavior Analysis, 27, 553–554.

Davison, M. (1999). Statistical inference in behavior analysis: Having my cake and eating it too. The Behavior Analyst, 22, 99–103.

Dawson, J. E., Piazza, C. C., Sevin, B. M., Gulotta, C. S., Lerman, D., & Kelley, M. L. (2003). Use of the high-probability instructional sequence and escape extinction in a child with food refusal. Journal of Applied Behavior Analysis, 36, 105–108.

De Luca, R. B., & Holborn, S. W. (1990). Effects of fixed-interval and fixed-ratio schedules of token reinforcement on exercise with obese and nonobese boys. Psychological Record, 40, 67–82.

De Luca, R. B., & Holborn, S. W. (1992).Effects of a variable-ratio reinforce-

ment schedule with changing criteriaon exercise in obese and nonobeseboys. Journal of Applied BehaviorAnalysis, 25, 671–679.

De Martini-Scully, D., Bray, M. A., & Kehle, T. J. (2000). A packaged intervention to reduce disruptive behaviors in general education students. Psychology in the Schools, 37(2), 149–156.

de Zubicaray, G., & Clair, A. (1998). An evaluation of differential reinforcement of other behavior, differential reinforcement of incompatible behaviors, and restitution for the management of aggressive behaviors. Behavioral Interventions, 13, 157–168.

Deaver, C. M., Miltenberger, R. G., & Stricker, J. M. (2001). Functional analysis and treatment of hair twirling in a young child. Journal of Applied Behavior Analysis, 34, 535–538.

DeCatanzaro, D., & Baldwin, G. (1978). Effective treatment of self-injurious behavior through a forced arm exercise. Journal of Applied Behavior Analysis, 11, 433–439.

DeHaas-Warner, S. (1992). The utility of self-monitoring for preschool on-task behavior. Topics in Early Childhood Special Education, 12, 478–495.

Deitz, D. E. D., & Repp, A. C. (1983). Reducing behavior through reinforcement. Exceptional Education Quarterly, 3, 34–46.

Deitz, S. M. (1977). An analysis of programming DRL schedules in educational settings. Behavior Research and Therapy, 15, 103–111.

Deitz, S. M. (1982). Defining applied behavior analysis: An historical analogy. The Behavior Analyst, 5, 53–64.

Deitz, S. M., & Repp, A. C. (1973). Decreasing classroom misbehavior through the use of DRL schedules of reinforcement. Journal of Applied Behavior Analysis, 6, 457–463.

Deitz, S. M., Slack, D. J., Schwarzmueller, E. B., Wilander, A. P., Weatherly, T. J., & Hilliard, G. (1978). Reducing inappropriate behavior in special classrooms by reinforcing average interresponse times: Interval DRL. Behavior Therapy, 9, 37–46.

DeLeon, I. G., & Iwata, B. A. (1996). Evaluation of a multiple-stimulus presentation format for assessing reinforcer preferences. Journal of Applied Behavior Analysis, 29, 519–533.

DeLeon, I. G., Anders, B. M., Rodriguez-Catter, V., & Neidert, P. L. (2000). The effects of noncontingent access to single- versus multiple-stimulus sets on self-injurious behavior. Journal of Applied Behavior Analysis, 33, 623–626.

DeLeon, I. G., Fisher, W. W., Rodriguez-Catter, V., Maglieri, K., Herman, K., & Marhefka, J. M. (2001). Examination of relative reinforcement effects of stimuli identified through pretreatment and daily brief preference assessments. Journal of Applied Behavior Analysis, 34, 463–473.

DeLeon, I. G., Iwata, B. A., Conners, J., & Wallace, M. D. (1999). Examination of ambiguous stimulus preferences with duration-based measures. Journal of Applied Behavior Analysis, 32, 111–114.

DeLeon, I. G., Iwata, B. A., Goh, H., & Worsdell, A. S. (1997). Emergence of reinforcer preference as a function of schedule requirements and stimulus similarity. Journal of Applied Behavior Analysis, 30, 439–449.

DeLissovoy, V. (1963). Head banging in early childhood: A suggested cause. Journal of Genetic Psychology, 102, 109–114.

Delprato, D. J. (2002). Countercontrol in behavior analysis. The Behavior Analyst, 25, 191–200.

Delprato, D. J., & Midgley, B. D. (1992). Some fundamentals of B. F. Skinner’s behaviorism. American Psychologist, 47, 1507–1520.

DeLuca, R. V., & Holborn, S. W. (1992). Effects of a variable-ratio reinforcement schedule with changing criteria on exercise in obese and nonobese boys. Journal of Applied Behavior Analysis, 25, 671–679.

DeMyer, M. K., & Ferster, C. B. (1962). Teaching new social behavior to schizophrenic children. Journal of the American Academy of Child Psychiatry, 1, 443–461.

Derby, K. M., Wacker, D. P., Berg, W., DeRaad, A., Ulrich, S., Asmus, J., Harding, J., Prouty, A., Laffey, P., & Stoner, E. A. (1997). The long-term effects of functional communication training in home settings. Journal of Applied Behavior Analysis, 30, 507–531.

Derby, K. M., Wacker, D. P., Sasso, G., Steege, M., Northup, J., Cigrand, K., & Asmus, J. (1992). Brief functional assessment techniques to evaluate aberrant behavior in an outpatient setting: A summary of 79 cases. Journal of Applied Behavior Analysis, 25, 713–721.

DeVries, J. E., Burnette, M. M., & Redmon, W. K. (1991). AIDS prevention: Improving nurses’ compliance with glove wearing through performance feedback. Journal of Applied Behavior Analysis, 24, 705–711.

Dewey, J. (1939). Experience and education. New York: Macmillan.

Dickerson, E. A., & Creedon, C. F. (1981). Self-selection of standards by children: The relative effectiveness of pupil-selected and teacher-selected standards of performance. Journal of Applied Behavior Analysis, 14, 425–433.

Didden, R., Prinsen, H., & Sigafoos, J. (2000). The blocking effect of pictorial prompts on sight-word reading. Journal of Applied Behavior Analysis, 33, 317–320.

Dinsmoor, J. A. (1952). A discrimination based on punishment. Quarterly Journal of Experimental Psychology, 4, 27–45.

Dinsmoor, J. A. (1995a). Stimulus control: Part I. The Behavior Analyst, 18, 51–68.

Dinsmoor, J. A. (1995b). Stimulus control: Part II. The Behavior Analyst, 18, 253–269.

Dinsmoor, J. A. (2003). Experimental. The Behavior Analyst, 26, 151–153.

Dixon, M. R., & Cummins, A. (2001). Self-control in children with autism: Response allocation during delays to reinforcement. Journal of Applied Behavior Analysis, 34, 491–495.

Dixon, M. R., & Falcomata, T. S. (2004). Preference for progressive delays and concurrent physical therapy exercise in an adult with acquired brain injury. Journal of Applied Behavior Analysis, 37, 101–105.

Dixon, M. R., & Holcomb, S. (2000). Teaching self-control to small groups of dually diagnosed adults. Journal of Applied Behavior Analysis, 33, 611–614.

Dixon, M. R., Benedict, H., & Larson, T. (2001). Functional analysis and treatment of inappropriate verbal behavior. Journal of Applied Behavior Analysis, 34, 361–363.

Dixon, M. R., Hayes, L. J., Binder, L. M., Manthey, S., Sigman, C., & Zdanowski, D. M. (1998). Using a self-control training procedure to increase appropriate behavior. Journal of Applied Behavior Analysis, 31, 203–210.

Dixon, M. R., Rehfeldt, R. A., & Randich, L. (2003). Enhancing tolerance to delayed reinforcers: The role of intervening activities. Journal of Applied Behavior Analysis, 36, 263–266.

Doke, L. A., & Risley, T. R. (1972). The organization of day care environments: Required vs. optional activities. Journal of Applied Behavior Analysis, 5, 453–454.

Donahoe, J. W., & Palmer, D. C. (1994). Learning and complex behavior. Boston: Allyn and Bacon.

Dorigo, M., & Colombetti, M. (1998). Robot shaping: An experiment in behavior engineering. Cambridge, MA: MIT Press.

Dorow, L. G., & Boyle, M. E. (1998). Instructor feedback for college writing assignments in introductory classes. Journal of Behavioral Education, 8, 115–129.

Downing, J. A. (1990). Contingency contracting: A step-by-step format. Teaching Exceptional Children, 26(2), 111–113.

Drabman, R. S., Hammer, D., & Rosenbaum, M. S. (1979). Assessing generalization in behavior modification with children: The generalization map. Behavioral Assessment, 1, 203–219.

Drabman, R. S., Spitalnik, R., & O’Leary, K. D. (1973). Teaching self-control to disruptive children. Journal of Abnormal Psychology, 82, 10–16.

Drasgow, E., Halle, J., & Ostrosky, M. M. (1998). Effects of differential reinforcement on the generalization of a replacement mand in three children with severe language delays. Journal of Applied Behavior Analysis, 31, 357–374.

Drash, P. W., High, R. L., & Tudor, R. M. (1999). Using mand training to establish an echoic repertoire in young children with autism. The Analysis of Verbal Behavior, 16, 29–44.

Ducharme, D. W., & Holborn, S. W. (1997). Programming generalization of social skills in preschool children with hearing impairments. Journal of Applied Behavior Analysis, 30, 639–651.

Ducharme, J. M., & Rushford, N. (2001). Proximal and distal effects of play on child compliance with a brain-injured parent. Journal of Applied Behavior Analysis, 34, 221–224.

Duker, P. C., & Seys, D. M. (1996). Long-term use of electrical aversion treatment with self-injurious behavior. Research in Developmental Disabilities, 17, 293–301.

Duker, P. C., & van Lent, C. (1991). Inducing variability in communicative gestures used by severely retarded individuals. Journal of Applied Behavior Analysis, 24, 379–386.

Dunlap, G., & Johnson, J. (1985). Increasing the independent responding of autistic children with unpredictable supervision. Journal of Applied Behavior Analysis, 18, 227–236.

Dunlap, G., de Perczel, M., Clarke, S., Wilson, D., Wright, S., White, R., & Gomez, A. (1994). Choice making to promote adaptive behavior for students with emotional and behavioral challenges. Journal of Applied Behavior Analysis, 27, 505–518.

Dunlap, G., Kern-Dunlap, L., Clarke, S., & Robbins, F. R. (1991). Functional assessment, curricular revision, and severe behavior problems. Journal of Applied Behavior Analysis, 24, 387–397.

Dunlap, G., Koegel, R. L., Johnson, J., & O’Neill, R. E. (1987). Maintaining performance of autistic clients in community settings with delayed contingencies. Journal of Applied Behavior Analysis, 20, 185–191.

Dunlap, L. K., & Dunlap, G. (1989). A self-monitoring package for teaching subtraction with regrouping to students with learning disabilities. Journal of Applied Behavior Analysis, 22, 309–314.

Dunlap, L. K., Dunlap, G., Koegel, L. K., & Koegel, R. L. (1991). Using self-monitoring to increase independence. Teaching Exceptional Children, 23(3), 17–22.

Dunn, L. M., & Dunn, L. M. (1997). Peabody Picture Vocabulary Test—III. Circle Pines, MN: American Guidance Service.

Durand, V. M. (1999). Functional communication training using assistive devices: Recruiting natural communities of reinforcement. Journal of Applied Behavior Analysis, 32, 247–267.

Durand, V. M., & Carr, E. G. (1987). Social influences on “self-stimulatory” behavior: Analysis and treatment application. Journal of Applied Behavior Analysis, 20, 119–132.

Durand, V. M., & Carr, E. G. (1992). An analysis of maintenance following functional communication training. Journal of Applied Behavior Analysis, 25, 777–794.

Durand, V. M., & Crimmins, D. (1992). The Motivation Assessment Scale. Topeka, KS: Monaco & Associates.

Durand, V. M., Crimmins, D. B., Caufield, M., & Taylor, J. (1989). Reinforcer assessment I: Using problem behavior to select reinforcers. Journal of the Association for Persons with Severe Handicaps, 14, 113–126.

Duvinsky, J. D., & Poppen, R. (1982). Human performance on conjunctive fixed-interval fixed-ratio schedules. Journal of the Experimental Analysis of Behavior, 37, 243–250.

Dyer, K., Schwartz, I., & Luce, S. C. (1984). A supervision program for increasing functional activities for severely handicapped students in a residential setting. Journal of Applied Behavior Analysis, 17, 249–259.

Ebanks, M. E., & Fisher, W. W. (2003). Altering the timing of academic prompts to treat destructive behavior maintained by escape. Journal of Applied Behavior Analysis, 36, 355–359.

Eckert, T. L., Ardoin, S. P., Daly, E. J., III, & Martens, B. K. (2002). Improving oral reading fluency: A brief experimental analysis of combining an antecedent intervention with consequences. Journal of Applied Behavior Analysis, 35, 271–281.

Ecott, C. L., Foate, B. A. L., Taylor, B., & Critchfield, T. S. (1999). Further evaluation of reinforcer magnitude effects in noncontingent schedules. Journal of Applied Behavior Analysis, 32, 529–532.

Edwards, K. J., & Christophersen, E. R. (1993). Automated data acquisition through time-lapse videotape recording. Journal of Applied Behavior Analysis, 24, 503–504.

Egel, A. L. (1981). Reinforcer variation: Implications for motivating developmentally disabled children. Journal of Applied Behavior Analysis, 14, 345–350.

Egel, A. L. (1982). Programming the generalization and maintenance of treatment gains. In R. L. Koegel, A. Rincover, & A. L. Egel (Eds.), Educating and understanding autistic children (pp. 281–299). San Diego, CA: College-Hill Press.

Elliott, S. N., Busse, R. T., & Shapiro, E. S. (1999). Intervention techniques for academic problems. In C. R. Reynolds & T. B. Gutkin (Eds.), The handbook of school psychology (3rd ed., pp. 664–685). New York: John Wiley & Sons.

Ellis, E. S., Worthington, L. A., & Larkin, M. J. (2002). Executive summary of the research synthesis on effective teaching principles and the design of quality tools for educators. Available online: http://idea.uoregon.edu/~ncite/documents/techrep/tech06.html

Emerson, E., Reever, D. J., & Felce, D. (2000). Palmtop computer technologies for behavioral observation research. In T. Thompson, D. Felce, & F. J. Symons (Eds.), Behavioral observation: Technology and applications in developmental disabilities (pp. 47–59). Baltimore: Paul H. Brookes.

Engelmann, S. (1975). Your child can succeed. New York: Simon & Schuster.

Engelmann, S., & Carnine, D. (1982). Theory of instruction: Principles and applications. New York: Irvington.

Engelmann, S., & Colvin, G. (1983). Generalized compliance training: A direct-instruction program for managing severe behavior problems. Austin, TX: Pro-Ed.

Epling, W. F., & Pierce, W. D. (1983). Applied behavior analysis: New directions from the laboratory. The Behavior Analyst, 6, 27–37.

Epstein, L. H., Beck, B., Figueroa, J., Farkas, G., Kazdin, A., Daneman, D., & Becker, D. (1981). The effects of targeting improvement in urine glucose on metabolic control in children with insulin-dependent diabetes. Journal of Applied Behavior Analysis, 14, 365–375.

Epstein, R. (1982). Skinner for the classroom. Champaign, IL: Research Press.

Epstein, R. (1990). Generativity theory and creativity. In M. A. Runco & R. S. Albert (Eds.), Theories of creativity (pp. 116–140). Newbury Park, CA: Sage.

Epstein, R. (1991). Skinner, creativity, and the problem of spontaneous behavior. Psychological Science, 2, 362–370.

Epstein, R. (1996). Cognition, creativity, and behavior: Selected essays. Westport, CT: Praeger.

Epstein, R. (1997). Skinner as self-manager. Journal of Applied Behavior Analysis, 30, 545–568.

Ericsson, K. A., & Charness, N. (1994). Expert performance: Its structure and acquisition. American Psychologist, 49(8), 725–747.

Ervin, R., Radford, P., Bertsch, K., Piper, A., Ehrhardt, K., & Poling, A. (2001). A descriptive analysis and critique of the empirical literature on school-based functional assessment. School Psychology Review, 30, 193–210.

Eshleman, J. W. (1991). Quantified trends in the history of verbal behavior research. The Analysis of Verbal Behavior, 9, 61–80.

Eshleman, J. W. (2004, May 31). Celeration analysis of verbal behavior: Research papers presented at ABA 1975–present. Paper presented at the 30th Annual Convention of the Association for Behavior Analysis, Boston, MA.

Falcomata, T. S., Roane, H. S., Hovanetz, A. N., Kettering, T. L., & Keeney, K. M. (2004). An evaluation of response cost in the treatment of inappropriate vocalizations maintained by automatic reinforcement. Journal of Applied Behavior Analysis, 37, 83–87.

Falk, J. L. (1961). Production of polydipsia in normal rats by an intermittent food schedule. Science, 133, 195–196.

Falk, J. L. (1971). The nature and determinants of adjunctive behavior. Physiology and Behavior, 6, 577–588.

Fantuzzo, J. W., & Clement, P. W. (1981). Generalization of the effects of teacher- and self-administered token reinforcers to nontreated students. Journal of Applied Behavior Analysis, 14, 435–447.

Fantuzzo, J. W., Rohrbeck, C. A., Hightower, A. D., & Work, W. C. (1991). Teachers’ use and children’s preferences of rewards in elementary school. Psychology in the Schools, 28, 175–181.

Farrell, A. D. (1991). Computers and behavioral assessment: Current applications, future possibilities, and obstacles to routine use. Behavioral Assessment, 13, 159–179.

Favell, J. E., & McGimsey, J. F. (1993). Defining an acceptable treatment environment. In R. Van Houten & S. Axelrod (Eds.), Behavior analysis and treatment (pp. 25–45). New York: Plenum Press.

Favell, J. E., Azrin, N. H., Baumeister, A. A., Carr, E. G., Dorsey, M. F., Forehand, R., Foxx, R. M., Lovaas, O. I., Rincover, A., Risley, T. R., Romanczyk, R. G., Russo, D. C., Schroeder, S. R., & Solnick, J. V. (1982). The treatment of self-injurious behavior. Behavior Therapy, 13, 529–554.

Favell, J. E., McGimsey, J. F., & Jones, M. L. (1980). Rapid eating in the retarded: Reduction by nonaversive procedures. Behavior Modification, 4, 481–492.

Fawcett, S. B. (1991). Social validity: A note on methodology. Journal of Applied Behavior Analysis, 24, 235–239.

Felce, D., & Emerson, E. (2000). Observational methods in assessment of quality of life. In T. Thompson, D. Felce, & F. J. Symons (Eds.), Behavioral observation: Technology and applications in developmental disabilities (pp. 159–174). Baltimore: Paul H. Brookes.

Felixbrod, J. J., & O’Leary, K. D. (1973). Effects of reinforcement on children’s academic behavior as a function of self-determined and externally imposed systems. Journal of Applied Behavior Analysis, 6, 241–250.

Felixbrod, J. J., & O’Leary, K. D. (1974). Self-determination of academic standards by children: Toward freedom from external control. Journal of Educational Psychology, 66, 845–850.

Ferguson, D. L., & Rosales-Ruiz, J. (2001). Loading the problem loader: The effects of target training and shaping on trailer-loading behavior of horses. Journal of Applied Behavior Analysis, 34, 409–424.

Ferrari, M., & Harris, S. (1981). The limits and motivational potential of sensory stimuli as reinforcers for autistic children. Journal of Applied Behavior Analysis, 14, 339–343.

Ferreri, S. J., Allen, N., Hessler, T., Nobel, M., Musti-Rao, S., & Salmon, M. (2006). Battling procrastination: Self-managing studying and writing for candidacy exams and dissertation defenses. Symposium presented at the 32nd Annual Convention of the Association for Behavior Analysis, Atlanta, GA.

Ferreri, S. J., Neef, N. A., & Wait, T. A. (2006). The assessment of impulsive choice as a function of the point of reinforcer delay. Manuscript submitted for publication.

Ferster, C. B., & DeMyer, M. K. (1961). The development of performances in autistic children in an automatically controlled environment. Journal of Chronic Diseases, 13, 312–345.

Ferster, C. B., & DeMyer, M. K. (1962). A method for the experimental analysis of the behavior of autistic children. American Journal of Orthopsychiatry, 32, 89–98.

Ferster, C. B., & Skinner, B. F. (1957). Schedules of reinforcement. Englewood Cliffs, NJ: Prentice Hall.

Fink, W. T., & Carnine, D. W. (1975). Control of arithmetic errors using informational feedback and graphing. Journal of Applied Behavior Analysis, 8, 461.

Finney, J. W., Putnam, D. E., & Boyd, C. M. (1998). Improving the accuracy of self-reports of adherence. Journal of Applied Behavior Analysis, 31, 485–488.

Fisher, R. (1956). Statistical methods and scientific inference. London: Oliver & Boyd.

Fisher, W. W., & Mazur, J. E. (1997). Basic and applied research on choice responding. Journal of Applied Behavior Analysis, 30, 387–410.

Fisher, W. W., Kelley, M. E., & Lomas, J. E. (2003). Visual aids and structured criteria for improving visual inspection and interpretation of single-case designs. Journal of Applied Behavior Analysis, 36, 387–406.

Fisher, W. W., Kuhn, D. E., & Thompson, R. H. (1998). Establishing discriminative control of responding using functional and alternative reinforcers during functional communication training. Journal of Applied Behavior Analysis, 31, 543–560.

Fisher, W. W., Lindauer, S. E., Alterson, C. J., & Thompson, R. H. (1998). Assessment and treatment of destructive behavior maintained by stereotypic object manipulation. Journal of Applied Behavior Analysis, 31, 513–527.

Fisher, W. W., Piazza, C. C., Bowman, L. G., & Amari, A. (1996). Integrating caregiver report with a systematic choice assessment to enhance reinforcer identification. American Journal on Mental Retardation, 101, 15–25.

Fisher, W. W., Piazza, C. C., Bowman, L. G., Hagopian, L. P., Owens, J. C., & Slevin, I. (1992). A comparison of two approaches for identifying reinforcers for persons with severe and profound disabilities. Journal of Applied Behavior Analysis, 25, 491–498.

Fisher, W. W., Piazza, C. C., Bowman, L. G., Kurtz, P. F., Sherer, M. R., & Lachman, S. R. (1994). A preliminary evaluation of empirically derived consequences for the treatment of pica. Journal of Applied Behavior Analysis, 27, 447–457.

Fisher, W. W., Piazza, C. C., Cataldo, M. F., Harrell, R., Jefferson, G., & Conner, R. (1993). Functional communication training with and without extinction and punishment. Journal of Applied Behavior Analysis, 26, 23–36.

Flaute, A. J., Peterson, S. M., Van Norman, R. K., Riffle, T., & Eakins, A. (2005). Motivate me! 20 tips for using a MotivAider® to improve your classroom. Teaching Exceptional Children Plus, 2(2), Article 3. Retrieved March 1, 2006, from http://escholarship.bc.edu/education/tecplus/vol2/iss2/art3

Fleece, L., Gross, A., O’Brien, T., Kistner, J., Rothblum, E., & Drabman, R. (1981). Elevation of voice volume in young developmentally delayed children via an operant shaping procedure. Journal of Applied Behavior Analysis, 14, 351–355.

Flood, W. A., & Wilder, D. A. (2002). Antecedent assessment and assessment-based treatment of off-task behavior in a child diagnosed with Attention Deficit-Hyperactivity Disorder (ADHD). Education and Treatment of Children, 25(3), 331–338.

Flora, S. R. (2004). The power of reinforcement. Albany: State University of New York Press.

Florida Association for Behavior Analysis. (1988). The behavior analyst’s code of ethics. Tallahassee, FL: Author.

Forthman, D. L., & Ogden, J. J. (1992). The role of applied behavior analysis in zoo management: Today and tomorrow. Journal of Applied Behavior Analysis, 25, 647–652.

Foster, W. S. (1978). Adjunctive behavior: An underreported phenomenon in applied behavior analysis? Journal of Applied Behavior Analysis, 11, 545–546.

Fowler, S. A., & Baer, D. M. (1981). “Do I have to be good all day?” The timing of delayed reinforcement as a factor in generalization. Journal of Applied Behavior Analysis, 14, 13–24.

Fox, D. K., Hopkins, B. L., & Anger, A. K.(1987). The long-term effects of atoken economy on safety performancein open-pit mining. Journal of AppliedBehavior Analysis, 20, 215–224.

Foxx, R. M. (1982). Decreasing behaviors ofpersons with severe retardation andautism. Champaign, IL: Research Press.

Foxx, R. M. (1996). Twenty years of applied behavior analysis in treating the most severe problem behavior: Lessons learned. The Behavior Analyst, 19, 225–235.

Foxx, R. M., & Azrin, N. H. (1972). Restitution: A method of eliminating aggressive-disruptive behavior of retarded and brain damaged patients. Behavior Research and Therapy, 10, 15–27.

Foxx, R. M., & Azrin, N. H. (1973). The elimination of autistic self-stimulatory behavior by overcorrection. Journal of Applied Behavior Analysis, 6, 1–14.

Foxx, R. M., & Bechtel, D. R. (1983). Overcorrection: A review and analysis. In S. Axelrod & J. Apsche (Eds.), The effects of punishment on human behavior (pp. 133–220). New York: Academic Press.

Foxx, R. M., & Rubinoff, A. (1979). Behavioral treatment of caffeinism: Reducing excessive coffee drinking. Journal of Applied Behavior Analysis, 12, 335–344.

Foxx, R. M., & Shapiro, S. T. (1978). The timeout ribbon: A non-exclusionary timeout procedure. Journal of Applied Behavior Analysis, 11, 125–143.

Foxx, R. M., Bittle, R. G., & Faw, G. D. (1989). A maintenance strategy for discontinuing aversive procedures: A 52-month follow-up of the treatment of aggression. American Journal on Mental Retardation, 94, 27–36.

Freeland, J. T., & Noell, G. H. (1999). Maintaining accurate math responses in elementary school students: The effects of delayed intermittent reinforcement and programming common stimuli. Journal of Applied Behavior Analysis, 32, 211–215.

Freeland, J. T., & Noell, G. H. (2002). Programming for maintenance: An investigation of delayed intermittent reinforcement and common stimuli to create indiscriminable contingencies. Journal of Behavioral Education, 11, 5–18.

Friedling, C., & O’Leary, S. G. (1979). Effects of self-instructional training on second- and third-grade hyperactive children: A failure to replicate. Journal of Applied Behavior Analysis, 12, 211–219.

Friman, P. C. (1990). Nonaversive treatment of high-rate disruptions: Child and provider effects. Exceptional Children, 57, 64–69.

Friman, P. C. (2004). Up with this I shall not put: 10 reasons why I disagree with Branch and Vollmer on behavior used as a count noun. The Behavior Analyst, 27, 99–106.

Friman, P. C., & Poling, A. (1995). Making life easier with effort: Basic findings and applied research on response effort. Journal of Applied Behavior Analysis, 28, 538–590.

Friman, P. C., Hayes, S. C., & Wilson, K. G. (1998). Why behavior analysts should study emotion: The example of anxiety. Journal of Applied Behavior Analysis, 31, 137–156.

Fuller, P. R. (1949). Operant conditioning of a vegetative organism. American Journal of Psychology, 62, 587–590.

Fuqua, R. W., & Schwade, J. (1986). Social validation of applied research: A selective review and critique. In A. Poling & R. W. Fuqua (Eds.), Research methods in applied behavior analysis (pp. 265–292). New York: Plenum Press.

Gable, R. A., Arllen, N. L., & Hendrickson, J. M. (1994). Use of students with emotional/behavioral disorders as behavior change agents. Education and Treatment of Children, 17 (3), 267–276.

Galbicka, G. (1994). Shaping in the 21st century: Moving percentile schedules into applied settings. Journal of Applied Behavior Analysis, 27, 739–760.

Gallagher, S. M., & Keenan, M. (2000). Independent use of activity materials by the elderly in a residential setting. Journal of Applied Behavior Analysis, 33, 325–328.

Gambrill, E. (2003). Science and its use and neglect in the human services. In K. S. Budd & T. Stokes (Eds.), A small matter of proof: The legacy of Donald M. Baer (pp. 63–76). Reno, NV: Context Press.

Gambrill, E. D. (1977). Behavior modification: Handbook of assessment, intervention, and evaluation. San Francisco: Jossey-Bass.

Garcia, E. E. (1976). The development and generalization of delayed imitation. Journal of Applied Behavior Analysis, 9, 499.

Garcia, E. E., & Batista-Wallace, M. (1977). Parental training of the plural morpheme in normal toddlers. Journal of Applied Behavior Analysis, 10, 505.

Gardner, III, R., Heward, W. L., & Grossi, T. A. (1994). Effects of response cards on student participation and academic achievement: A systematic replication with inner-city students during whole-class science instruction. Journal of Applied Behavior Analysis, 27, 63–71.

Garfinkle, A. N., & Schwartz, I. S. (2002). Peer imitation: Increasing social interactions in children with autism and other developmental disabilities in inclusive preschool classrooms. Topics in Early Childhood Special Education, 22, 26–38.

Garner, K. (2002). Case study: The conscientious kid. In R. W. Malott & H. Harrison, I’ll stop procrastinating when I get around to it: Plus other cool ways to succeed in school and life using behavior analysis to get your act together (p. 3-13). Kalamazoo, MI: Department of Psychology, Western Michigan University.

Gast, D. L., Jacobs, H. A., Logan, K. R., Murray, A. S., Holloway, A., & Long, L. (2000). Pre-session assessment of preferences for students with profound multiple disabilities. Education and Training in Mental Retardation and Developmental Disabilities, 35, 393–405.

Gaylord-Ross, R. (1980). A decision model for the treatment of aberrant behavior in applied settings. In W. Sailor, B. Wilcox, & L. Brown (Eds.), Methods of instruction for severely handicapped students (pp. 135–158). Baltimore: Paul H. Brookes.

Gaylord-Ross, R. J., Haring, T. G., Breen, C., & Pitts-Conway, V. (1984). The training and generalization of social interaction skills with autistic youth. Journal of Applied Behavior Analysis, 17, 229–247.

Gee, K., Graham, N., Goetz, L., Oshima, G., & Yoshioka, K. (1991). Teaching students to request the continuation of routine activities by using time delay and decreasing physical assistance in the context of chain interruption. Journal of the Association for Persons with Severe Handicaps, 10, 154–167.

Geller, E. S., Paterson, L., & Talbott, E. (1982). A behavioral analysis of incentive prompts for motivating seat belt use. Journal of Applied Behavior Analysis, 15, 403–415.

Gena, A., Krantz, P. J., McClannahan, L. E., & Poulson, C. L. (1996). Training and generalization of affective behavior displayed by youth with autism. Journal of Applied Behavior Analysis, 29, 291–304.

Gentile, J. R., Roden, A. H., & Klein, R. D. (1972). An analysis-of-variance model for the intrasubject replication design. Journal of Applied Behavior Analysis, 5, 193–198.

Gewirtz, J. L., & Baer, D. M. (1958). The effect of brief social deprivation on behaviors for a social reinforcer. Journal of Abnormal and Social Psychology, 56, 49–56.

Gewirtz, J. L., & Pelaez-Nogueras, M. (2000). Infant emotions under the positive-reinforcer control of caregiver attention and touch. In J. C. Leslie & D. Blackman (Eds.), Issues in experimental and applied analyses of human behavior (pp. 271–291). Reno, NV: Context Press.

Glenn, S. S. (2004). Individual behavior, culture, and social change. The Behavior Analyst, 27, 133–151.

Glenn, S. S., Ellis, J., & Greenspoon, J. (1992). On the revolutionary nature of the operant as a unit of behavioral selection. American Psychologist, 47, 1329–1336.

Glynn, E. L. (1970). Classroom applications of self-determined reinforcement. Journal of Applied Behavior Analysis, 3, 123–132.

Glynn, E. L., Thomas, J. D., & Shee, S. M. (1973). Behavioral self-control of on-task behavior in an elementary classroom. Journal of Applied Behavior Analysis, 6, 105–114.

Glynn, S. M. (1990). Token economy approaches for psychiatric patients: Progress and pitfalls over 25 years. Behavior Modification, 14 (4), 383–407.

Goetz, E. M., & Baer, D. M. (1973). Social control of form diversity and the emergence of new forms in children’s blockbuilding. Journal of Applied Behavior Analysis, 6, 209–217.

Goetz, L., Gee, K., & Sailor, W. (1985). Using a behavior chain interruption strategy to teach communication skills to students with severe disabilities. Journal of the Association for Persons with Severe Handicaps, 10, 21–30.

Goh, H-L., & Iwata, B. A. (1994). Behavioral persistence and variability during extinction of self-injury maintained by escape. Journal of Applied Behavior Analysis, 27, 173–174.

Goldiamond, I. (1965). Self-control procedures in personal behavior problems. Psychological Reports, 17, 851–868.

Goldiamond, I. (1974). Toward a constructional approach to social problems: Ethical and constitutional issues raised by applied behavior analysis. Behaviorism, 2, 1–85.

Goldiamond, I. (1976a). Self-reinforcement. Journal of Applied Behavior Analysis, 9, 509–514.

Goldiamond, I. (1976b). Fables, armadyllics, and self-reinforcement. Journal of Applied Behavior Analysis, 9, 521–525.

Gottschalk, J. M., Libby, M. E., & Graff, R. B. (2000). The effects of establishing operations on preference assessment outcomes. Journal of Applied Behavior Analysis, 33, 85–88.

Grace, N. C., Kahng, S., & Fisher, W. W. (1994). Balancing social acceptability with treatment effectiveness of an intrusive procedure: A case report. Journal of Applied Behavior Analysis, 27, 171–172.

Graf, S. A., & Lindsley, O. R. (2002). Standard Celeration Charting 2002. Poland, OH: Graf Implements.

Gray, J. A. (1979). Ivan Pavlov. New York: Penguin Books.

Green, C. W., & Reid, D. H. (1996). Defining, validating, and increasing indices of happiness among people with profound multiple disabilities. Journal of Applied Behavior Analysis, 29, 67–78.

Green, C. W., Gardner, S. M., & Reid, D. H. (1997). Increasing indices of happiness among people with profound multiple disabilities: A program replication and component analysis. Journal of Applied Behavior Analysis, 30, 217–228.

Green, G., & Shane, H. C. (1994). Science, reason, and facilitated communication. Journal of the Association for Persons with Severe Handicaps, 19, 151–172.

Green, L., & Freed, D. W. (1993). The substitutability of reinforcers. Journal of the Experimental Analysis of Behavior, 60, 141–158.

Greene, B. F., Bailey, J. S., & Barber, F. (1981). An analysis and reduction of disruptive behavior on school buses. Journal of Applied Behavior Analysis, 14, 177–192.

Greenspan, S., & Negron, E. (1994). Ethical obligations of special services personnel. Special Services in the Schools, 8 (2), 185–209.

Greenwood, C. R., & Maheady, L. (1997). Measurable change in student performance: Forgotten standard in teacher preparation? Teacher Education and Special Education, 20, 265–275.

Greenwood, C. R., Delquadri, J. C., & Hall, R. V. (1984). Opportunity to respond and student academic achievement. In W. L. Heward, T. E. Heron, D. S. Hill, & J. Trap-Porter (Eds.), Focus on behavior analysis in education (pp. 58–88). Columbus, OH: Merrill.

Greer, R. D. (1983). Contingencies of the science and technology of teaching and pre-behavioristic research practices in education. Educational Researcher, 12, 3–9.

Gresham, F. M. (1983). Use of a home-based dependent group contingency system in controlling destructive behavior: A case study. School Psychology Review, 12 (2), 195–199.

Gresham, F. M., Gansle, K. A., & Noell, G. H. (1993). Treatment integrity in applied behavior analysis with children. Journal of Applied Behavior Analysis, 26, 257–263.

Griffen, A. K., Wolery, M., & Schuster, J. W. (1992). Triadic instruction of chained food preparation responses: Acquisition and observational learning. Journal of Applied Behavior Analysis, 25, 193–204.

Griffith, R. G. (1983). The administrative issues: An ethical and legal perspective. In S. Axelrod & J. Apsche (Eds.), The effects of punishment on human behavior (pp. 317–338). New York: Academic Press.

Grossi, T. A. (1998). Using a self-operated auditory prompting system to improve the work performance of two employees with severe disabilities. Journal of the Association for Persons with Severe Handicaps, 23, 149–154.

Grossi, T. A., & Heward, W. L. (1998). Using self-evaluation to improve the work productivity of trainees in a community-based restaurant training program. Education and Training in Mental Retardation and Developmental Disabilities, 33, 248–263.

Grossi, T. A., Kimball, J. W., & Heward, W. L. (1994). What did you say? Using review of tape-recorded interactions to increase social acknowledgments by trainees in a community-based vocational program. Research in Developmental Disabilities, 15, 457–472.

Grunsell, J., & Carter, M. (2002). The behavior chain interruption strategy: Generalization to out-of-routine contexts. Education and Training in Mental Retardation and Developmental Disabilities, 37 (4), 378–390.

Guilford, J. P. (1965). Fundamental statistics in psychology and education. New York: McGraw-Hill.

Gumpel, T. P., & Shlomit, D. (2000). Exploring the efficacy of self-regulatory training as a possible alternative to social skills training. Behavioral Disorders, 25, 131–141.

Gunter, P. L., Venn, M. L., Patrick, J., Miller, K. A., & Kelly, L. (2003). Efficacy of using momentary time samples to determine on-task behavior of students with emotional/behavioral disorders. Education and Treatment of Children, 26, 400–412.

Gutowski, S. J., & Stromer, R. (2003). Delayed matching to two-picture samples by individuals with and without disabilities: An analysis of the role of naming. Journal of Applied Behavior Analysis, 36, 487–505.

Guttman, N., & Kalish, H. (1956). Discriminability and generalization. Journal of Experimental Psychology, 51, 79–88.

Haagbloom, S. J., Warnick, R., Warnick, J. E., Jones, V. K., Yarbrough, G. L., Russell, T. M., et al. (2002). The 100 most eminent psychologists of the 20th century. Review of General Psychology, 6, 139–152.

Hackenberg, T. D. (1995). Jacques Loeb, B. F. Skinner, and the legacy of prediction and control. The Behavior Analyst, 18, 225–236.

Hackenberg, T. D., & Axtell, S. A. M. (1993). Humans’ choices in situations of time-based diminishing returns. Journal of the Experimental Analysis of Behavior, 59, 445–470.

Hagopian, L. P., & Adelinis, J. D. (2001). Response blocking with and without redirection for the treatment of pica. Journal of Applied Behavior Analysis, 34, 527–530.

Hagopian, L. P., & Thompson, R. H. (1999). Reinforcement of compliance with respiratory treatment in a child with cystic fibrosis. Journal of Applied Behavior Analysis, 32, 233–236.

Hagopian, L. P., Farrell, D. A., & Amari, A. (1996). Treating total liquid refusal with backward chaining and fading. Journal of Applied Behavior Analysis, 29, 573–575.

Hagopian, L. P., Fisher, W. W., Sullivan, M. T., Acquisto, J., & Leblanc, L. A. (1998). Effectiveness of functional communication training with and without extinction and punishment: A summary of 21 inpatient cases. Journal of Applied Behavior Analysis, 31, 211–235.

Hagopian, L. P., Rush, K. S., Lewin, A. B., & Long, E. S. (2001). Evaluating the predictive validity of a single stimulus engagement preference assessment. Journal of Applied Behavior Analysis, 34, 475–485.

Hake, D. F., & Azrin, N. H. (1965). Conditioned punishment. Journal of the Experimental Analysis of Behavior, 6, 297–298.

Hake, D. F., Azrin, N. H., & Oxford, R. (1967). The effects of punishment intensity on squirrel monkeys. Journal of the Experimental Analysis of Behavior, 10, 95–107.

Hall, G. A., & Sundberg, M. L. (1987). Teaching mands by manipulating conditioned establishing operations. The Analysis of Verbal Behavior, 5, 41–53.

Hall, R. V., & Fox, R. G. (1977). Changing-criterion designs: An alternative applied behavior analysis procedure. In B. C. Etzel, J. M. LeBlanc, & D. M. Baer (Eds.), New developments in behavioral research: Theory, method, and application (pp. 151–166). Hillsdale, NJ: Erlbaum.

Hall, R. V., Cristler, C., Cranston, S. S., & Tucker, B. (1970). Teachers and parents as researchers using multiple baseline designs. Journal of Applied Behavior Analysis, 3, 247–255.

Hall, R. V., Delquadri, J. C., & Harris, J. (1977, May). Opportunity to respond: A new focus in the field of applied behavior analysis. Paper presented at the Midwest Association for Behavior Analysis, Chicago, IL.

Hall, R. V., Lund, D., & Jackson, D. (1968). Effects of teacher attention on study behavior. Journal of Applied Behavior Analysis, 1, 1–12.

Hall, R. V., Panyan, M., Rabon, D., & Broden, D. (1968). Instructing beginning teachers in reinforcement procedures which improve classroom control. Journal of Applied Behavior Analysis, 1, 315–322.

Hall, R., Axelrod, S., Foundopoulos, M., Shellman, J., Campbell, R. A., & Cranston, S. S. (1971). The effective use of punishment to modify behavior in the classroom. Educational Technology, 11 (4), 24–26.

Hall, R. V., Fox, R., Williard, D., Goldsmith, L., Emerson, M., Owen, M., Porcia, E., & Davis, R. (1970). Modification of disrupting and talking-out behavior with the teacher as observer and experimenter. Paper presented at the American Educational Research Association Convention, Minneapolis, MN.

Hallahan, D. P., Lloyd, J. W., Kosiewicz, M. M., Kauffman, J. M., & Graves, A. W. (1979). Self-monitoring of attention as a treatment for a learning disabled boy’s off-task behavior. Learning Disability Quarterly, 2, 24–32.

Hamblin, R. L., Hathaway, C., & Wodarski, J. S. (1971). Group contingencies, peer tutoring and accelerating academic achievement. In E. A. Ramp & B. L. Hopkins (Eds.), A new direction for education: Behavior analysis (Vol. 1, pp. 41–53). Lawrence: University of Kansas.

Hamlet, C. C., Axelrod, S., & Kuerschner, S. (1984). Eye contact as an antecedent to compliant behavior. Journal of Applied Behavior Analysis, 17, 553–557.

Hammer-Kehoe, J. (2002). Yoga. In R. W. Malott & H. Harrison (Eds.), I’ll stop procrastinating when I get around to it: Plus other cool ways to succeed in school and life using behavior analysis to get your act together (p. 10-5). Kalamazoo, MI: Department of Psychology, Western Michigan University.

Hammill, D., & Newcomer, P. L. (1997). Test of language development—3. Austin, TX: Pro-Ed.

Hanley, G. P., Iwata, B. A., & Thompson, R. H. (2001). Reinforcement schedule thinning following treatment with functional communication training. Journal of Applied Behavior Analysis, 34, 17–38.

Hanley, G. P., Iwata, B. A., Thompson, R. H., & Lindberg, J. S. (2000). A component analysis of “stereotypy as reinforcement” for alternative behavior. Journal of Applied Behavior Analysis, 33, 299–308.

Hanley, G. P., Piazza, C. C., Fisher, W. W., Contrucci, S. A., & Maglieri, K. A. (1997). Evaluation of client preference for function-based treatment packages. Journal of Applied Behavior Analysis, 30, 459–473.

Hanley, G. P., Piazza, C. C., Keeney, K. M., Blakeley-Smith, A. B., & Worsdell, A. S. (1998). Effects of wrist weights on self-injurious and adaptive behaviors. Journal of Applied Behavior Analysis, 31, 307–310.

Haring, T. G., & Kennedy, C. H. (1990). Contextual control of problem behavior. Journal of Applied Behavior Analysis, 23, 235–243.

Harlow, H. F. (1959). Learning set and error factor theory. In S. Koch (Ed.), Psychology: A study of a science (Vol. 2, pp. 492–537). New York: McGraw-Hill.

Harrell, J. P. (2002). Case study: Taking time to relax. In R. W. Malott & H. Harrison, I’ll stop procrastinating when I get around to it: Plus other cool ways to succeed in school and life using behavior analysis to get your act together (p. 10-2). Kalamazoo, MI: Department of Psychology, Western Michigan University.

Harris, F. R., Johnston, M. K., Kelly, C. S., & Wolf, M. M. (1964). Effects of positive social reinforcement on regressed crawling of a nursery school child. Journal of Educational Psychology, 55, 35–41.

Harris, K. R. (1986). Self-monitoring of attentional behavior versus self-monitoring of productivity: Effects on on-task behavior and academic response rate among learning disabled children. Journal of Applied Behavior Analysis, 19, 417–424.

Hart, B., & Risley, T. R. (1975). Incidental teaching of language in the preschool. Journal of Applied Behavior Analysis, 8, 411–420.

Hart, B. M., Allen, K. E., Buell, J. S., Harris, F. R., & Wolf, M. M. (1964). Effects of social reinforcement on operant crying. Journal of Experimental Child Psychology, 1, 145–153.

Hartmann, D. P. (1974). Forcing square pegs into round holes: Some comments on “An analysis-of-variance model for the intrasubject replication design.” Journal of Applied Behavior Analysis, 7, 635–638.

Hartmann, D. P. (1977). Considerations in the choice of interobserver reliability estimates. Journal of Applied Behavior Analysis, 10, 103–116.

Hartmann, D. P., & Hall, R. V. (1976). The changing criterion design. Journal of Applied Behavior Analysis, 9, 527–532.

Hartmann, D. P., Gottman, J. M., Jones, R. R., Gardner, W., Kazdin, A. E., & Vaught, R. S. (1980). Interrupted time-series analysis and its application to behavioral data. Journal of Applied Behavior Analysis, 13, 543–559.

Harvey, M. T., May, M. E., & Kennedy, C. H. (2004). Nonconcurrent multiple baseline designs and the evaluation of educational systems. Journal of Behavioral Education, 13, 267–276.

Hawkins, R. P. (1975). Who decided that was the problem? Two stages of responsibility for applied behavior analysts. In W. S. Wood (Ed.), Issues in evaluating behavior modification (pp. 195–214). Champaign, IL: Research Press.

Hawkins, R. P. (1979). The functions of assessment. Journal of Applied Behavior Analysis, 12, 501–516.

Hawkins, R. P. (1984). What is “meaningful” behavior change in a severely/profoundly retarded learner: The view of a behavior analytic parent. In W. L. Heward, T. E. Heron, D. S. Hill, & J. Trap-Porter (Eds.), Focus on behavior analysis in education (pp. 282–286). Upper Saddle River, NJ: Prentice-Hall/Merrill.

Hawkins, R. P. (1986). Selection of target behaviors. In R. O. Nelson & S. C. Hayes (Eds.), Conceptual foundations of behavioral assessment (pp. 331–385). New York: Guilford Press.

Hawkins, R. P. (1991). Is social validity what we are interested in? Journal of Applied Behavior Analysis, 24, 205–213.

Hawkins, R. P., & Anderson, C. M. (2002). On the distinction between science and practice: A reply to Thyer and Adkins. The Behavior Analyst, 26, 115–119.

Hawkins, R. P., & Dobes, R. W. (1977). Behavioral definitions in applied behavior analysis: Explicit or implicit? In B. C. Etzel, J. M. LeBlanc, & D. M. Baer (Eds.), New developments in behavioral research: Theory, method, and application (pp. 167–188). Hillsdale, NJ: Erlbaum.

Hawkins, R. P., & Dotson, V. A. (1975). Reliability scores that delude: An Alice in Wonderland trip through the misleading characteristics of interobserver agreement scores in interval recording. In E. Ramp & G. Semb (Eds.), Behavior analysis: Areas of research and application (pp. 359–376). Upper Saddle River, NJ: Prentice Hall.

Hawkins, R. P., & Hursh, D. E. (1992). Levels of research for clinical practice: It isn’t as hard as you think. West Virginia Journal of Psychological Research and Practice, 1, 61–71.

Hawkins, R. P., Mathews, J. R., & Hamdan, L. (1999). Measuring behavioral health outcomes: A practical guide. New York: Kluwer Academic/Plenum.

Hayes, L. J., Adams, M. A., & Rydeen, K. L. (1994). Ethics, choice, and value. In L. J. Hayes et al. (Eds.), Ethical issues in developmental disabilities (pp. 11–39). Reno, NV: Context Press.

Hayes, S. C. (1991). The limits of technological talk. Journal of Applied Behavior Analysis, 24, 417–420.

Hayes, S. C. (Ed.). (1989). Rule-governed behavior: Cognition, contingencies, and instructional control. Reno, NV: Context Press.

Hayes, S. C., & Cavior, N. (1977). Multiple tracking and the reactivity of self-monitoring: I. Negative behaviors. Behavior Therapy, 8, 819–831.

Hayes, S. C., & Cavior, N. (1980). Multiple tracking and the reactivity of self-monitoring: II. Positive behaviors. Behavioral Assessment, 2, 238–296.

Hayes, S. C., & Hayes, L. J. (1993). Applied implications of current JEAB research on derived relations and delayed reinforcement. Journal of Applied Behavior Analysis, 26, 507–511.

Hayes, S. C., Rincover, A., & Solnick, J. V. (1980). The technical drift of applied behavior analysis. Journal of Applied Behavior Analysis, 13, 275–285.

Hayes, S. C., Rosenfarb, I., Wulfert, E., Munt, E. D., Korn, D., & Zettle, R. D. (1985). Self-reinforcement effects: An artifact of social standard setting? Journal of Applied Behavior Analysis, 18, 201–214.

Hayes, S. C., Zettle, R. D., & Rosenfarb, I. (1989). Rule-following. In S. C. Hayes (Ed.), Rule-governed behavior: Cognition, contingencies, and instructional control (pp. 191–220). New York: Plenum Press.

Haynes, S. N., & Horn, W. F. (1982). Reactivity in behavioral observation: A methodological and conceptual critique. Behavioral Assessment, 4, 369–385.

Health Insurance Portability and Accountability Act (HIPAA). (1996). Washington, DC: Office for Civil Rights. Retrieved December 14, 2003, from http://aspe.hhs.gov/admnsimp/pl104191.htm.

Heckaman, K. A., Alber, S. R., Hooper, S., & Heward, W. L. (1998). A comparison of least-to-most prompts and progressive time delay on the disruptive behavior of students with autism. Journal of Behavioral Education, 8, 171–201.

Heflin, L. J., & Simpson, R. L. (2002). Understanding intervention controversies. In B. Scheuermann & J. Weber (Eds.), Autism: Teaching does make a difference (pp. 248–277). Belmont, CA: Wadsworth.

Helwig, J. (1973). Effects of manipulating an antecedent event on mathematics response rate. Unpublished manuscript, Ohio State University, Columbus, OH.

Heron, T. E., & Harris, K. C. (2001). The educational consultant: Helping professionals, parents, and students in inclusive classrooms (4th ed.). Austin, TX: Pro-Ed.

Heron, T. E., & Heward, W. L. (1988). Ecological assessment: Implications for teachers of learning disabled students. Learning Disability Quarterly, 11, 224–232.

Heron, T. E., Heward, W. L., Cooke, N. L., & Hill, D. S. (1983). Evaluation of a classwide peer tutoring system: First graders teach each other sight words. Education and Treatment of Children, 6, 137–152.

Heron, T. E., Hippler, B. J., & Tincani, M. J. (2003). How to help students complete classwork and homework assignments. Austin, TX: Pro-Ed.

Heron, T. E., Tincani, M. J., Peterson, S. M., & Miller, A. D. (2005). Plato’s allegory of the cave revisited: Disciples of the light appeal to the pied pipers and prisoners in the darkness. In W. L. Heward, T. E. Heron, N. A. Neef, S. M. Peterson, D. M. Sainato, G. Cartledge, R. Gardner, III, L. D. Peterson, S. B. Hersh, & J. C. Dardig (Eds.), Focus on behavior analysis in education: Achievements, challenges, and opportunities (pp. 267–282). Upper Saddle River, NJ: Merrill/Prentice Hall.

Herr, S. S., O’Sullivan, J. L., & Dinerstein, R. D. (1999). Consent to extraordinary interventions. In R. D. Dinerstein, S. S. Herr, & J. L. O’Sullivan (Eds.), A guide to consent (pp. 111–122). Washington, DC: American Association on Mental Retardation.

Herrnstein, R. J. (1961). Relative and absolute strength of a response as a function of frequency of reinforcement. Journal of the Experimental Analysis of Behavior, 4, 267–272.

Herrnstein, R. J. (1970). On the law of effect. Journal of the Experimental Analysis of Behavior, 13, 243–266.

Hersen, M., & Barlow, D. H. (1976). Single case experimental designs: Strategies for studying behavior change. New York: Pergamon Press.

Heward, W. L. (1978, May). The delayed multiple baseline design. Paper presented at the Fourth Annual Convention of the Association for Behavior Analysis, Chicago.

Heward, W. L. (1980). A formula for individualizing initial criteria for reinforcement. Exceptional Teacher, 1 (9), 7, 9.

Heward, W. L. (1994). Three “low-tech” strategies for increasing the frequency of active student response during group instruction. In R. Gardner, D. M. Sainato, J. O. Cooper, T. E. Heron, W. L. Heward, J. Eshleman, & T. A. Grossi (Eds.), Behavior analysis in education: Focus on measurably superior instruction (pp. 283–320). Monterey, CA: Brooks/Cole.

Heward, W. L. (2003). Ten faulty notions about teaching and learning that hinder the effectiveness of special education. The Journal of Special Education, 36 (4), 186–205.

Heward, W. L. (2005). Reasons applied behavior analysis is good for education and why those reasons have been insufficient. In W. L. Heward, T. E. Heron, N. A. Neef, S. M. Peterson, D. M. Sainato, G. Cartledge, R. Gardner, III, L. D. Peterson, S. B. Hersh, & J. C. Dardig (Eds.), Focus on behavior analysis in education: Achievements, challenges, and opportunities (pp. 316–348). Upper Saddle River, NJ: Merrill/Prentice Hall.

Heward, W. L. (2006). Exceptional children: An introduction to special education (8th ed.). Upper Saddle River, NJ: Merrill/Prentice Hall.

Heward, W. L., & Cooper, J. O. (1992). Radical behaviorism: A productive and needed philosophy for education. Journal of Behavioral Education, 2, 345–365.

Heward, W. L., & Eachus, H. T. (1979). Acquisition of adjectives and adverbs in sentences written by hearing impaired and aphasic children. Journal of Applied Behavior Analysis, 12, 391–400.

Heward, W. L., & Silvestri, S. M. (2005a). The neutralization of special education. In J. W. Jacobson, J. A. Mulick, & R. M. Foxx (Eds.), Fads: Dubious and improbable treatments for developmental disabilities. Mahwah, NJ: Erlbaum.

Heward, W. L., & Silvestri, S. M. (2005b). Antecedent. In G. Sugai & R. Horner (Eds.), Encyclopedia of behavior modification and cognitive behavior therapy, Vol. 3: Educational applications (pp. 1135–1137). Thousand Oaks, CA: Sage.

Heward, W. L., Dardig, J. C., & Rossett, A. (1979). Working with parents of handicapped children. Upper Saddle River, NJ: Merrill/Prentice Hall.

Heward, W. L., Heron, T. E., Gardner, III, R., & Prayzer, R. (1991). Two strategies for improving students’ writing skills. In G. Stoner, M. R. Shinn, & H. M. Walker (Eds.), A school psychologist’s interventions for regular education (pp. 379–398). Washington, DC: National Association of School Psychologists.

Heward, W. L., Heron, T. E., Neef, N. A., Peterson, S. M., Sainato, D. M., Cartledge, G., Gardner, III, R., Peterson, L. D., Hersh, S. B., & Dardig, J. C. (Eds.). (2005). Focus on behavior analysis in education: Achievements, challenges, and opportunities. Upper Saddle River, NJ: Prentice Hall/Merrill.

Hewett, F. M. (1968). The emotionally disturbed child in the classroom. New York: McGraw-Hill.

Higbee, T. S., Carr, J. E., & Harrison, C. D. (1999). The effects of pictorial versus tangible stimuli in stimulus preference assessments. Research in Developmental Disabilities, 20, 63–72.

Higbee, T. S., Carr, J. E., & Harrison, C. D. (2000). Further evaluation of the multiple-stimulus preference assessment. Research in Developmental Disabilities, 21, 61–73.

Higgins, J. W., Williams, R. L., & McLaughlin, T. F. (2001). The effects of a token economy employing instructional consequences for a third-grade student with learning disabilities: A data-based case study. Education and Treatment of Children, 24 (1), 99–106.

Himle, M. B., Miltenberger, R. G., Flessner, C., & Gatheridge, B. (2004). Teaching safety skills to children to prevent gun play. Journal of Applied Behavior Analysis, 37, 1–9.

Himle, M. B., Miltenberger, R. G., Gatheridge, B., & Flessner, C. (2004). An evaluation of two procedures for training skills to prevent gun play in children. Pediatrics, 113, 70–77.

Hineline, P. N. (1977). Negative reinforcement and avoidance. In W. K. Honig & J. E. R. Staddon (Eds.), Handbook of operant behavior (pp. 364–414). Upper Saddle River, NJ: Prentice Hall.

Hineline, P. N. (1992). A self-interpretive behavior analysis. American Psychologist, 47, 1274–1286.

Hobbs, T. R., & Holt, M. M. (1976). The effects of token reinforcement on the behavior of delinquents in cottage settings. Journal of Applied Behavior Analysis, 9, 189–198.

Hoch, H., McComas, J. J., Johnson, L., Faranda, N., & Guenther, S. L. (2002). The effects of magnitude and quality of reinforcement on choice responding during play activities. Journal of Applied Behavior Analysis, 35, 171–181.

Hoch, H., McComas, J. J., Thompson, A. L., & Paone, D. (2002). Concurrent reinforcement schedules: Behavior change and maintenance without extinction. Journal of Applied Behavior Analysis, 35, 155–169.

Holcombe, A., Wolery, M., & Snyder, E.(1994). Effects of two levels of proce-dural fidelity with constant time delaywith children’s learning. Journal of Be-havioral Education, 4, 49–73.

Holcombe, A., Wolery, M., Werts, M. G.,& Hrenkevich, P. (1993). Effects ofinstructive feedback on future learn-ing. Journal of Behavioral Education,3, 259–285.

Holland, J. G. (1978). Behaviorism: Part ofthe problem or part of the solution?Journal of Applied Behavior Analysis,11, 163–174.

Holland, J. G., & Skinner, B. F. (1961). Theanalysis of behavior: A program for self-instruction. New York: McGraw-Hill.

Holman, J., Goetz, E. M., & Baer, D. M.(1977). The training of creativity as anoperant and an examination of its gen-eralization characteristics. In B. C.Etzel, J. M. LeBlanc, & D. M. Baer(Eds.), New developments in behav-ioral research: Theory, method, andpractice (pp. 441–471). Hillsdale, NJ:Erlbaum.

Holmes, G., Cautela, J., Simpson, M.,Motes, P., & Gold, J. (1998). Factorstructure of the School ReinforcementSurvey Schedule: School is more thangrades. Journal of Behavioral Educa-tion, 8, 131–140.

Holth, P. (2003), Generalized imitation andgeneralized matching to sample. TheBehavior Analyst, 26, 155–158.

Holz, W. C., & Azrin, N. H. (1961). Dis-criminative properties of punishment.Journal of the Experimental Analysisof Behavior, 4, 225–232.

Holz, W. C., & Azrin, N. H. (1962). Recov-ery during punishment by intensenoise. Journal of the ExperimentalAnalysis of Behavior, 6, 407–412.

Holz, W. C., Azrin, N. H., & Ayllon, T.(1963). Elimination of behavior ofmental patients by response-producedextinction. Journal of the Experimen-tal Analysis of Behavior, 6, 407–412.

Homme, L., Csanyi, A. P., Gonzales, M. A.,& Rechs, J. R. (1970). How to use con-tingency contracting in the classroom.Champaign, IL: Research Press.

Honig, W. K. (Ed.). (1966). Operant behav-ior: Areas of research and application.New York: Appleton-Century-Crofts.

Hoover, H. D., Hieronymus,A. N., Dunbar, S.B. Frisbie, D. A., & Switch (1996). IowaTests of Basic Skills. Chicago: Riverside.

Hopkins, B. L. (1995). Applied behavioranalysis and statistical process control?Journal of Applied Behavior Analysis,28, 379–386.

Horner, R. D., & Baer, D. M. (1978). Mul-tiple-probe technique: A variation onthe multiple baseline design. Journalof Applied Behavior Analysis, 11,189–196.

Horner, R. D., & Keilitz, I. (1975). Trainingmentally retarded adolescents to brushtheir teeth. Journal of Applied BehaviorAnalysis, 8, 301–309.

Horner, R. H. (2002). On the status ofknowledge for using punishment: Acommentary. Journal of Applied Be-havior Analysis, 35, 465–467.

Horner, R. H., & McDonald, R. S. (1982).Comparison of single instance and gen-eral case instruction in teaching of ageneralized vocational skill. Journal ofthe Association for Persons with SevereHandicaps, 7 (3), 7–20.

Horner, R. H., Carr, E. G., Halle, J., McGee,G., Odom, S., & Wolery, M. (2005).The use of single-subject research to

identify evidence-based practice in spe-cial education. Exceptional Children,71, 165–179.

Horner, R. H., Day, M., Sprague, J., O’Brien,M., & Heathfield, L. (1991). Inter-spersed requests: A nonaversive pro-cedure for reducing aggression andself-injury during instruction. Journal ofApplied Behavior Analysis, 24, 265–278.

Horner, R. H., Dunlap, G., & Koegel, R. L.(1988). Generalization and mainte-nance: Life-style changes in appliedsettings. Baltimore: Paul H. Brookes.

Horner, R. H., Eberhard, J. M., & Sheehan,M. R. (1986). Teaching generalizedtable bussing: The importance of neg-ative teaching examples. BehaviorModification, 10, 457–471.

Horner, R. H., Sprague, J., & Wilcox, B.(1982). Constructing general case pro-grams for community activities. In B.Wilcox & G. T. Bellamy (Eds.), Designof high school programs for severelyhandicapped students (pp. 61–98). Bal-timore: Paul H. Brookes.

Horner, R. H., Williams, J. A., & Steveley,J. D. (1987). Acquisition of generalizedtelephone use by students with moder-ate and severe mental retardation.Research in Developmental Disabili-ties, 8, 229–247.

Houten, R. V., & Rolider, A. (1990). The useof color mediation techniques to teachnumber identification and single digitmultiplication problems to childrenwith learning problems. Education andTreatment of Children, 13, 216–225.

Howell, K. W. (1998). Curriculum-basedevaluation: Teaching and decisionmaking (3rd ed.). Monterey, CA:Brooks/Cole.

Hughes, C. (1992). Teaching self-instruc-tion using multiple exemplars to pro-duce generalized problem-solving byindividuals with severe mental retar-dation. Journal on Mental Retardation,97, 302–314.

Hughes, C. (1997). Self-instruction. In M.Agran (Ed.), Self-directed learning:Teaching self-determination skills (pp.144–170). Pacific Grove, CA: Brooks/Cole.

Hughes, C., & Lloyd, J. W. (1993). Ananalysis of self-management. Journalof Behavioral Education, 3, 405–424.

Hughes, C., & Rusch, F. R. (1989). Teachingsupported employees with severe men-tal retardation to solve problems.

702

Bibliography

Journal of Applied Behavior Analysis,22, 365–372.

Hughes, C., Harmer, M. L., Killian, D. J.,& Niarhos, F. (1995). The effects ofmultiple-exemplar self-instructionaltraining on high school students’ gen-eralized conversational interactions.Journal of Applied Behavior Analysis,28, 201–218.

Hume, K. M., & Crossman, J. (1992). Mu-sical reinforcement of practice behav-iors among competitive swimmers.Journal of Applied Behavior Analysis,25, 665–670.

Humphrey, L. L., Karoly, P., & Kirschen-baum, D. S. (1978). Self-managementin the classroom: Self-imposed re-sponse cost versus self-reward. Behav-ior Therapy, 9, 592–601.

Hundert, J., & Bucher, B. (1978). Pupils’self-scored arithmetic performance: Apractical procedure for maintaining ac-curacy. Journal of Applied BehaviorAnalysis, 11, 304.

Hunt, P., & Goetz, L. (1988). Teaching spon-taneous communication in natural set-tings using interrupted behavior chains.Topics in Language Disorders, 9, 58–71.

Hurley, A. D. N., & O’Sullivan, J. L. (1999).Informed consent for health care. In R. D. Dinerstein, S. S. Herr, & J. L.O’Sullivan (Eds.), A guide to consent(pp. 39–55). Washington DC: Ameri-can Association on Mental Retardation.

Hutchinson, R. R. (1977). By-products ofaversive control In W. K. Honig & J. E.R. Staddon (Eds.), Handbook of oper-ant behavior (pp. 415–431). Upper Sad-dle River, NJ: Prentice Hall.

Huybers, S., Van Houten, R., & Malenfant, J.E. L. (2004). Reducing conflicts betweenmotor vehicles and pedestrians: The sep-arate and combined effects of pavementmarkings and a sign prompt. Journal ofApplied Behavior Analysis, 37, 445–456.

Individuals with Disabilities Education Actof 1997, P. L. 105–17, 20 U.S.C. para.1400 et seq.

Irvin, D. S., Thompson, T. J., Turner, W. D.,& Williams, D. E. (1998). Utilizing re-sponse effort to reduce chronic handmouthing. Journal of Applied Behav-ior Analysis, 31, 375–385.

Isaacs, W., Thomas, I., & Goldiamond, I.(1960). Application of operant condi-tioning to reinstate verbal behavior inpsychotics. Journal of Speech andHearing Disorders, 25, 8–12.

Iwata, B. A. (1987). Negative reinforcementin applied behavior analysis: An emerg-ing technology. Journal of Applied Be-havior Analysis, 20, 361–378.

Iwata, B. A. (1988). The development andadoption of controversial default tech-nologies. The Behavior Analyst, 11,149–157.

Iwata, B. A. (1991). Applied behavioranalysis as technological science.Journal of Applied Behavior Analysis,24, 421–424.

Iwata, B. A. (2006). On the distinction be-tween positive and negative reinforce-ment. The Behavior Analyst, 29,121–123.

Iwata, B. A., & DeLeon, I. (1996). The func-tional analysis screening tool. Gaines-ville, FL: The Florida Center onSelf-Injury, The University of Florida.

Iwata, B. A., & Michael, J. L. (1994). Ap-plied implications of theory and re-search on the nature of reinforcement.Journal of Applied Behavior Analysis,27, 183–193.

Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bau-man, K. E., & Richman, G. S. (1994).Toward a functional analysis of self-in-jury. Journal of Applied BehaviorAnalysis, 27, 197–209. (Reprinted fromAnalysis and Intervention in Develop-mental Disabilities, 2, 3–20, 1982).

Iwata, B. A., Pace, G. M., Cowdery, G. E.,& Miltenberger, R. G. (1994). Whatmakes extinction work: An analysis ofprocedural form and function. Journalof Applied Behavior Analysis, 27,131–144.

Iwata, B. A., Pace, G. M., Dorsey, M. F.,Zarcone, J. R., Vollmer, T. R., Smith,R. G., Rodgers, T. A., Lerman, D. C.,Shore, B. A., Mazaleski, J. L., Goh, H.,Cowdery, G. E., Kalsher, M. J., &Willis, K. D. (1994). The functions of self-injurious behavior: An experimental-epidemiological analy-sis. Journal of Applied BehaviorAnalysis, 27, 215–240.

Iwata, B. A., Pace, G. M., Kissel, R. C., Nau,P. A., & Farber, J. M. (1990). The Self-Injury Trauma (SIT) Scale: A methodfor quantifying surface tissue damagecaused by self-injurious behavior.Journal of Applied Behavior Analysis,23, 99–110.

Iwata, B. A., Smith, R. G., & Michael, J.(2000). Current research on the influ-ence of establishing operations on be-

havior in applied settings. Journal of Ap-plied Behavior Analysis, 33, 411–418.

Iwata, B. A., Wallace, M. D., Kahng, S.,Lindberg, J. S., Roscoe, E. M., Con-ners, J., Hanley, G. P., Thompson, R. H.,& Worsdell, A. S. (2000). Skill acqui-sition in the implementation of func-tional analysis methodology. Journalof Applied Behavior Analysis, 33,181–194.

Jacobs, H. E., Fairbanks, D., Poche, C. E., &Bailey, J. S. (1982). Multiple incentivesin encouraging car pool formation ona university campus. Journal of AppliedBehavior Analysis, 15, 141–149.

Jacobson, J. M., Bushell, D., & Risley, T.(1969). Switching requirements in aHead Start classroom. Journal of Ap-plied Behavior Analysis, 2, 43–47.

Jacobson, J. W., Foxx, R. M., & Mulick,J. A. (Eds.). (2005). Controversial ther-apies for developmental disabilities:Fads, fashion, and science in profes-sional practice. Hillsdale, NJ: Erlbaum.

Jason, L. A., & Liotta, R. F. (1982). Reduc-tion of cigarette smoking in a univer-sity cafeteria. Journal of AppliedBehavior Analysis, 15, 573–577.

Johnson, B. M., Miltenberger, R. G., Egemo-Helm, K. R., Jostad, C., Flessner, C. A.,& Gatheridge, B. (2005). Evaluation ofbehavior skills training for teaching ab-duction prevention skills to young chil-dren. Journal of Applied BehaviorAnalysis, 38, 67–78.

Johnson, B. M., Miltenberger, R. G., Knud-son, P., Egemo-Helm, K., Kelso, P.,Jostad, C., & Langley, L. (2006). A pre-liminary evaluation of two behavioralskills training procedures for teachingabduction-prevention skills to schoolchildren. Journal of Applied BehaviorAnalysis, 39, 25–34.

Johnson, K. R., & Layng, T. V. J. (1992).Breaking the structuralist barrier: Lit-eracy and numeracy with fluency.American Psychologist, 47, 1475–1490.

Johnson, K. R., & Layng, T. V. J. (1994).The Morningside model of generativeinstruction. In R. Gardner, D. M., III,Sainato, J. O., Cooper, T. E., Heron,W. L., Heward, J., Eshleman, & T. A.Grossi (Eds.), Behavior analysis in ed-ucation: Focus on measurably superiorinstruction (pp. 173–197). PacificGrove, CA: Brooks/Cole.

Johnson, T. (1973). Addition and subtractionmath program with stimulus shaping

703

Bibliography

and stimulus fading. Produced pursuantto a grant from the Ohio Department ofEducation, BEH Act, P.L. 91-203, TitleVI-G; OE G-0-714438(604). J. E. Fisher& J. O. Cooper, project codirectors.

Johnston, J. M. (1979). On the relation be-tween generalization and generality.The Behavior Analyst, 2, 1–6.

Johnston, J. M. (1991). We need a newmodel of technology. Journal of Ap-plied Behavior Analysis, 24, 425–427.

Johnston, J. M., & Pennypacker, H. S. (1980).Strategies and tactics for Human Be-havioral Research. Hillsdale, NJ: Erl-baum.

Johnston, J. M., & Pennypacker, H. S.(1993a). Strategies and tactics forhuman behavioral research (2nd ed.).Hillsdale, NJ: Erlbaum.

Johnston, J. M., & Pennypacker, H. S.(1993b). Readings for Strategies andtactics of behavioral research (2nd ed.).Hillsdale, NJ: Erlbaum.

Johnston, M. K., Kelly, C. S., Harris, F. R.,& Wolf, M. M. (1966). An applicationof reinforcement principles to the de-velopment of motor skills of a youngchild. Child Development, 37, 370–387.

Johnston, R. J., & McLaughlin, T. F. (1982).The effects of free time on assignmentcompletion and accuracy in arithmetic:A case study. Education and Treatmentof Children, 5, 33–40.

Jones, F. H., & Miller, W. H. (1974). The ef-fective use of negative attention for re-ducing group disruption in specialelementary school classrooms. Psyc-hological Record, 24, 435–448.

Jones, F. H., Fremouw, W., & Carples, S.(1977). Pyramid training of elementaryschool teachers to use a classroom man-agement “skill package.” Journal of Ap-plied Behavior Analysis, 10, 239–254.

Jones, R. R., Vaught, R. S., & Weinrott, M.(1977). Time-series analysis in operantresearch. Journal of Applied BehaviorAnalysis, 10, 151–166.

Journal of Applied Behavior Analysis.(2000). Manuscript preparation check-list. Journal of Applied Behavior Analy-sis, 33, 399. Lawrence, KS: Society forthe Experimental Analysis of Behavior.

Kachanoff, R., Leveille, R., McLelland, H.,& Wayner, M. J. (1973). Schedule in-duced behavior in humans. Physiologyand Behavior, 11, 395–398.

Kadushin, A. (1972). The social work in-terview. New York: Columbia Univer-sity Press.

Kahng, S. W., & Iwata, B. A. (1998). Com-puterized systems for collecting real-time observational data. Journal ofApplied Behavior Analysis, 31, 253–261.

Kahng, S. W., & Iwata, B. A. (2000). Com-puter systems for collecting real-timeobservational data. In T. Thompson, D.Felce, & F. J. Symons (Eds.), Behav-ioral observation: Technology and ap-plications in developmental disabilities(pp. 35–45). Baltimore: Paul H. Brookes.

Kahng, S. W., Iwata, B. A., DeLeon, I. G., &Wallace, M. D. (2000). A comparisonof procedures for programming non-contingent reinforcement schedules.Journal of Applied Behavior Analysis,33, 223–231.

Kahng, S. W., Iwata, B. A., Fischer, S. M.,Page, T. J., Treadwell, K. R. H.,Williams, D. E., & Smith, R. G. (1998).Temporal distributions of problems be-havior based on scatter plot analysis.Journal of Applied Behavior Analysis,31, 593–604.

Kahng, S. W., Iwata, B. A., Thompson, R. H.,& Hanley, G. P. (2000). A method foridentifying satiation versus extinctioneffects under noncontingent reinforce-ment schedules. Journal of Applied Be-havior Analysis, 33, 419–432.

Kahng, S. W., Tarbox, J., & Wilke, A.(2001). Use of multicomponent treat-ment for food refusal. Journal of Ap-plied Behavior Analysis, 34, 93–96.

Kahng, S., & Iwata, B. A. (1999). Corre-spondence between outcomes of briefand extended functional analyses.Journal of Applied Behavior Analysis,32, 149–159.

Kahng, S., Iwata, B. A., Fischer, S. M.,Page, T. J., Treadwell, K. R. H.,Williams, D. E., & Smith, R. G.(1998). Temporal distributions of prob-lem behavior based on scatter plotanalysis. Journal of Applied BehaviorAnalysis, 31, 593–604.

Kahng, S., Iwata, B. A., & Lewin, A. B.(2002). Behavioral treatment of self-in-jury, 1964 to 2000. American Journal ofMental Retardation, 107 (3), 212–221.

Kame’enui, E. (2002, September). Begin-ning reading failure and the quantifi-cation of risk: Behavior as the supremeindex. Presentation to the Focus on Be-havior Analysis in Education Confer-ence, Columbus, OH.

Kanfer, F. H. (1976). The many faces of self-control, or behavior modificationchanges its focus. Paper presented at

the Fifth International Banff Confer-ence, Banff, Alberta, Canada.

Kanfer, F. H., & Karoly P. (1972). Self-control: A behavioristic excursion intothe lion’s den. Behavior Therapy, 3,398–416.

Kantor, J. R. (1959). Interbehavioral psy-chology. Granville, OH: Principia Press.

Katzenberg, A. C. (1975). How to drawgraphs. Kalamazoo, MI: Behaviordelia.

Kauffman, J. M. (2005). Characteristics ofemotional and behavioral disorders ofchildren and youth (8th ed.). UpperSaddle River, NJ: Merrill/Prentice Hall.

Kaufman, K. F., & O’Leary, K. D. (1972).Reward, cost, and self-evaluation pro-cedures for disruptive adolescents in a psychiatric hospital school. Journal ofApplied Behavior Analysis, 5, 293–309.

Kazdin, A. E. (1973). The effects of vicari-ous reinforcement on attentive behaviorin the classroom. Journal of AppliedBehavior Analysis, 6, 77–78.

Kazdin, A. E. (1977). Artifact, bias, andcomplexity of assessment: The ABCsof reliability. Journal of Applied Be-havior Analysis, 10, 141–150.

Kazdin, A. E. (1978). History of behaviormodification. Austin: TX: Pro-Ed.

Kazdin, A. E. (1979). Unobtrusive measuresin behavioral assessment. Journal of Ap-plied Behavior Analysis, 12, 713–724.

Kazdin, A. E. (1980). Acceptability of alter-native treatments for deviant child be-havior. Journal of Applied BehaviorAnalysis, 13, 259–273.

Kazdin, A. E. (1982). Single case researchdesigns: Methods for clinical and ap-plied settings. Boston: Allyn and Bacon.

Kazdin, A. E. (1982). Observer effects: Re-activity of direct observation. New Di-rections for Methodology of Social andBehavioral Science, 14, 5–19.

Kazdin, A. E. (2001). Behavior modificationin applied settings (6th ed.). Belmont,CA: Wadsworth.

Kazdin, A. E., & Hartmann, D. P. (1978).The simultaneous-treatment design.Behavior Therapy, 9, 912–922.

Kee, M., Hill, S. M., & Weist, M. D. (1999).School-based behavior management ofcursing, hitting, and spitting in a girlwith profound retardation. Educationand Treatment of Children, 22 (2),171–178.

Kehle, T. J., Bray, M. A., Theodore, L. A.,Jenson, W. R., & Clark, E. (2000). Amulti-component intervention de-signed to reduce disruptive classroom

704

Bibliography

behavior. Psychology in the Schools,37 (5), 475–481.

Keller, C. L., Brady, M. P., & Taylor, R. L.(2005). Using self-evaluation to im-prove student teacher interns’ use ofspecific praise. Education and Train-ing in Mental Retardation and Devel-opmental Disabilities, 40, 368–376.

Keller, F. S. (1941). Light aversion in the whiterat. Psychological Record, 4, 235–250.

Keller, F. S. (1982). Pedagogue’s progress.Lawrence, KS: TRI Publications.

Keller, F. S. (1990). Burrhus Frederic Skin-ner (1904–1990) (a thank you). Jour-nal of Applied Behavior Analysis, 23,404–407.

Keller, F. S., & Schoenfeld, W. M.(1950/1995). Principles of psychology.Acton, MA: Copley Publishing Group.

Keller, F. S., & Schoenfeld, W. N. (1995).Principles of psychology. Acton, MA:Copley Publishing Group. (Reprintedfrom Principles of psychology: A sys-tematic text in the science of behav-ior. New York: Appleton-Century-Crofts, 1950)

Kelley, M. E., Piazza, C. C., Fisher, W. W.,& Oberdorff, A. J. (2003). Acquisitionof cup drinking using previously re-fused foods as positive and negative re-inforcement. Journal of AppliedBehavior Analysis, 36, 89–93.

Kelley, M. L., Jarvie, G. J., Middlebrook,J. L., McNeer, M. F., & Drabman, R. S.(1984). Decreasing burned children’spain behavior: Impacting the traumaof hydrotherapy. Journal of AppliedBehavior Analysis, 17, 147–158.

Kellum, K. K., Carr, J. E., & Dozier, C. L.(2001). Response-card instruction andstudent learning in a college class-room. Teaching of Psychology, 28 (2),101–104.

Kelshaw-Levering, K., Sterling-Turner, H.,Henry, J. R., & Skinner, C. H. (2000).Randomized interdependent group con-tingencies: Group reinforcement witha twist. Psychology in the Schools, 37(6), 523–533.

Kennedy, C. H. (1994). Automatic rein-forcement: Oxymoron or hypotheticalconstruct. Journal of Behavioral Edu-cation, 4 (4), 387–395.

Kennedy, C. H. (2005). Single-case designsfor educational research. Boston:Allyn and Bacon.

Kennedy, C. H., & Haring, T. G. (1993).Teaching choice making during socialinteractions to students with profound

multiple disabilities. Journal of AppliedBehavior Analysis, 26, 63–76.

Kennedy, C. H., & Sousa, G. (1995). Func-tional analysis and treatment of eyepoking. Journal of Applied BehaviorAnalysis, 28, 27–37.

Kennedy, C. H., Meyer, K. A., Knowles, T.,& Shukla, S. (2000). Analyzing themultiple functions of stereotypical be-havior for students with autism: Impli-cations for assessment and treatment.Journal of Applied Behavior Analysis,33, 559–571.

Kennedy, G. H., Itkonen, T., & Lindquist,K. (1994). Nodality effects duringequivalence class formation: An exten-sion to sight-word reading and conceptdevelopment. Journal of Applied Be-havior Analysis, 27, 673–683.

Kern, L., Dunlap, G., Clarke, S., & Childs,K. E. (1995). Student-assisted func-tional assessment interview. Diagnost-ique, 19, 29–39.

Kern, L., Koegel, R., & Dunlap, G. (1984).The influence of vigorous versus mildexercise on autistic stereotyped behav-iors. Journal of Autism and Develop-mental Disorders, 14, 57–67.

Kerr, M. M., & Nelson, C. M. (2002).Strategies for addressing behaviorproblems in the classroom (4th ed.).Upper Saddle River, NJ: Merrill/Prentice Hall.

Killu, K., Sainato, D. M., Davis, C. A., Os-pelt, H., & Paul, J. N. (1998). Effectsof high-probability request sequenceson preschoolers’ compliance and dis-ruptive behavior. Journal of BehavioralEducation, 8, 347–368.

Kimball, J. W. (2002). Behavior-analytic in-struction for children with autism: Phi-losophy matters. Focus on Autism andOther Developmental Disabilities, 17(2), 66–75.

Kimball, J. W., & Heward, W. L. (1993). Asynthesis of contemplation, prediction,and control. American Psychologist,48, 587–588.

Kirby, K. C., & Bickel, W. K. (1988). Towardan explicit analysis of generalization: Astimulus control interpretation. The Be-havior Analyst, 11, 115–129.

Kirby, K. C., Fowler, S. A., & Baer, D. M.(1991). Reactivity in self-recording:Obtrusiveness of recording procedureand peer comments. Journal of AppliedBehavior Analysis, 24, 487–498.

Kladopoulos, C. N., & McComas, J. J.(2001). The effects of form training on

foul-shooting performance in membersof a women’s college basketball team.Journal of Applied Behavior Analysis,34, 329–332.

Klatt, K. P., Sherman, J. A., & Sheldon, J. B.(2000). Effects of deprivation on en-gagement in preferred activities by persons with developmental disabili-ties. Journal of Applied Behavior Analy-sis, 33, 495–506.

Kneedler, R. D., & Hallahan, D. P. (1981).Self-monitoring of on-task behaviorwith learning disabled children: Cur-rent studies and directions. ExceptionalEducation Quarterly, 2 (3), 73–82.

Knittel, D. (2002). Case study: A profes-sional guitarist on comeback road. InR. W. Malott & H. Harrison (Eds.), I’llstop procrastinating when I get aroundto it: Plus other cool ways to succeed inschool and life using behavior analysisto get your act together (pp. 8-5–8-6).Kalamazoo, MI: Department of Psy-chology, Western Michigan University.

Kodak, T., Grow, L., & Northrup, J. (2004).Functional analysis and treatment ofelopement for a child with attentiondeficit hyperactivity disorder. Journal ofApplied Behavior Analysis, 37, 229–232.

Kodak, T., Miltenberger, R. G., & Roma-niuk, C. (2003). The effects of differ-ential negative reinforcement of otherbehavior and noncontingent escape oncompliance. Journal of Applied Be-havior Analysis, 36, 379–382.

Koegal, R. L., & Rincover, A. (1977). Re-search on the differences betweengeneralization and maintenance inextra-therapy responding. Journal ofApplied Behavior Analysis, 10, 1–12.

Koegel, L. K., Carter, C. M., & Koegel,R. L. (2003). Teaching children withautism self-initiations as a pivotal re-sponse. Topics in Language Disorders,23 (2), 134–145.

Koegel, L. K., Koegel, R. L., Hurley, C., &Frea, W. D. (1992). Improving socialskills and disruptive behavior in chil-dren with autism through self-manage-ment. Journal of Applied BehaviorAnalysis, 25, 341–353.

Koegel, R. L., & Frea, W. (1993). Treatmentof social behavior in autism through themodification of pivotal social skills.Journal of Applied Behavior Analysis,26, 369–377.

Koegel, R. L., & Koegel, L. K. (1988). Gen-eralized responsivity and pivotal be-haviors. In R. H. Horner, G. Dunlap, &

705

Bibliography

R. L. Koegel (Eds.), Generalizationand maintenance: Life-style changes inapplied settings (pp. 41–66). Balti-more; Paul H. Brookes.

Koegel, R. L., & Koegel, L. K. (1990). Ex-tended reductions in stereoptypic be-havior of students with autism througha self-management treatment package.Journal of Applied Behavior Analysis,23, 119–127.

Koegel, R. L., & Williams, J. A. (1980). Di-rect versus indirect response-reinforcerrelationships in teaching autistic chil-dren. Journal of Abnormal Child Psy-chology, 8, 537–547.

Koegel, R. L., Koegel, L. K., & Schreibman,L. (1991). Assessing and training parentsin teaching pivotal behaviors. In R. J.Prinz (Ed.), Advances in behavioral as-sessment of children and families (Vol.5, pp. 65–82). London: Jessica Kingsley.

Koehler, L. J., Iwata, B. A., Roscoe, E. M.,Rolider, N. U., & O’Steen, L. E.(2005). Effects of stimulus variation onthe reinforcing capability of nonpre-ferred stimuli. Journal of Applied Be-havior Analysis, 38, 469–484.

Koenig, C. H., & Kunzelmann, H. P. (1980).Classroom learning screening. Colum-bus, OH: Charles E. Merrill.

Kohler, F. W., & Greenwood, C. R. (1986).Toward a technology of generalization:The identification of natural contin-gencies of reinforcement. The Behav-ior Analyst, 9, 19–26.

Kohler, F. W., Strain, P. S., Maretsky, S., &Decesare, L. (1990). Promoting posi-tive and supportive interactions be-tween preschoolers: An analysis ofgroup-oriented contingencies. Journalof Early Intervention, 14 (4), 327–341.

Kohn, A. (1997). Students don’t “work”—they learn. Education Week, Septem-ber 3, 1997. Retrieved January 1, 2004,from www.alfiekohn.org/teaching/edweek/sdwtl.htm.

Komaki, J. L. (1998). When performanceimprovement is the goal: A new set ofcriteria for criteria. Journal of AppliedBehavior Analysis, 31, 263–280.

Konarski, E. A., Jr., Crowell, C. R., & Dug-gan, L. M. (1985). The use of responsedeprivation to increase the academic per-formance of EMR students. Applied Re-search in Mental Retardation, 6, 15–31.

Konarski, E. A., Jr., Crowell, C. R., John-son, M. R., & Whitman T. L. (1982).Response deprivation, reinforcement,and instrumental academic perfor-

mance in an EMR classroom. BehaviorTherapy, 13, 94–102.

Konarski, E. A., Jr., Johnson, M. R., Crow-ell, C. R., & Whitman T. L. (1980). Re-sponse deprivation and reinforcementin applied settings: A preliminary re-port. Journal of Applied BehaviorAnalysis, 13, 595–609.

Koocher, G. P., & Keith-Spiegel, P. (1998).Ethics in psychology: Professionalstandards and cases. New York, Ox-ford: Oxford University Press.

Kosiewicz, M. M., Hallahan, D. P., Lloyd,J. W., & Graves, A. W. (1982). Effectsof self-instruction and self-correctionprocedures on handwriting perfor-mance. Learning Disability Quarterly,5 (1), 71–78.

Kostewicz, D. E., Kubina, R. M., Jr., &Cooper, J. O. (2000). Managing ag-gressive thoughts and feelings withdaily counts of non-aggressive thoughtsand feelings: A self-experiment.Journal of Behavior Therapy and Ex-perimental Psychiatry, 31, 177–187.

Kounin, J. (1970). Discipline and groupmanagement in classrooms. New York:Holt, Rinehart & Winston.

Kozloff, M. A. (2005). Fads in general ed-ucation: Fad, fraud, and folly. In J. W.Jacobson, R. M. Foxx, & J. A. Mulick(Eds.), Controversial therapies in de-velopmental disabilities: Fads, fash-ion, and science in professionalpractice (pp. 159–174). Hillsdale, NJ:Erlbaum.

Krantz, P. J., & McClannahan, L. E. (1993).Teaching children with autism to initi-ate to peers: Effects of a script-fadingprocedure. Journal of Applied Behav-ior Analysis, 26, 121–132.

Krantz, P. J., & McClannahan, L. E. (1998).Social interaction skills for childrenwith autism: A script-fading procedurefor beginning readers. Journal of Ap-plied Behavior Analysis, 31, 191–202.

Krasner, L. A., & Ullmann, L. P. (Eds.).(1965). Research in behavior modifi-cation: New developments and impli-cations. New York: Holt, Rinehart &Winston.

Kubina, R. M., Jr. (2005). The relationsamong fluency, rate building, and prac-tice: A response to Doughty, Chase, andO’Shields (2004). The Behavior Ana-lyst, 28, 73–76.

Kubina, R. M., & Cooper, J. O. (2001).Changing learning channels: An effi-cient strategy to facilitate instruction

and learning. Intervention in Schooland Clinic, 35, 161–166.

Kuhn, S. A. C., Lerman, D. C., & Vorndran,C. M. (2003). Pyramidal training forfamilies of children with problem be-havior. Journal of Applied BehaviorAnalysis, 36, 77–88.

La Greca, A. M., & Schuman, W. B. (1995).Adherence to prescribed medical regi-mens. In M. C. Roberts (Ed.), Hand-book of pediatric psychology (2nd ed.,pp. 55–83). New York: Guildford.

LaBlanc, L. A., Coates, A. M., Daneshvar,S. Charlop-Christy, Morris, C., & Lan-caster, B. M. (2003). Journal of AppliedBehavior Analysis, 36, 253–257.

Lalli, J. S., Browder, D. M., Mace, F. C., &Brown, D. K. (1993). Teacher use ofdescriptive analysis data to implementinterventions to decrease students’problem behaviors. Journal of AppliedBehavior Analysis, 26, 227–238.

Lalli, J. S., Casey, S., & Kates, K. (1995).Reducing escape behavior and increas-ing task completion with functionalcommunication training, extinction,and response chaining. Journal of Ap-plied Behavior Analysis, 28, 261–268.

Lalli, J. S., Livezey, K., & Kates, D. (1996).Functional analysis and treatment ofeye poking with response blocking.Journal of Applied Behavior Analysis,29, 129–132.

Lalli, J. S., Mace, F. C., Livezey, K., & Kates,K. (1998). Assessment of stimulus gen-eralization gradients in the treatment ofself-injurious behavior. Journal of Ap-plied Behavior Analysis, 31, 479–483.

Lalli, J. S., Zanolli, K., & Wohn, T. (1994).Using extinction to promote responsevariability. Journal of Applied BehaviorAnalysis, 27, 735–736.

Lambert, M. C., Cartledge, G., Lo, Y., &Heward, W. L. (2006). Effects of re-sponse cards on disruptive behavior andparticipation by fourth-grade studentsduring math lessons in an urban school.Journal of Positive Behavioral Inter-ventions, 8, 88–99.

Lambert, N., Nihira, K., & Leland, H.(1993). Adaptive Behavior Scale—School (2nd ed.). Austin, TX: Pro-Ed.

Landry, L., & McGreevy, P. (1984). Thepaper clip counter (PCC): An inexpen-

Kubina, R. M., Haertel, M. W., & Cooper,J. O. (1994). Reducing negative innerbehavior of senior citizens: The one-minute counting procedure. Journal ofPrecision Teaching, 11 (2), 28–35.

706

Bibliography

sive and reliable device for collectingbehavior frequencies. Journal of Pre-cision Teaching, 5, 11–13.

Lane, S. D., & Critchfield, T. S. (1998). Classification of vowels and consonants by individuals with moderate mental retardation: Development of arbitrary relations via match-to-sample training with compound stimuli. Journal of Applied Behavior Analysis, 31, 21–41.

Laraway, S., Snycerski, S., Michael, J., & Poling, A. (2001). The abative effect: A new term to describe the action of antecedents that reduce operant responding. The Analysis of Verbal Behavior, 18, 101–104.

Larowe, L. N., Tucker, R. D., & McGuire, J. M. (1980). Lunchroom noise control using feedback and group reinforcement. Journal of School Psychology, 18, 51–57.

Lasiter, P. S. (1979). Influence of contingent responding on schedule-induced activity in human subjects. Physiology and Behavior, 22, 239–243.

Lassman, K. A., Jolivette, K., & Wehby, J. H. (1999). "My teacher said I did good work today!": Using collaborative behavioral contracting. Teaching Exceptional Children, 31(4), 12–18.

Lattal, K. A. (1969). Contingency management of toothbrushing behavior in a summer camp for children. Journal of Applied Behavior Analysis, 2, 195–198.

Lattal, K. A. (Ed.). (1992). Special issue: Reflections on B. F. Skinner and psychology. American Psychologist, 47, 1269–1533.

Lattal, K. A. (1995). Contingency and behavior analysis. The Behavior Analyst, 18, 209–224.

Lattal, K. A., & Griffin, M. A. (1972). Punishment contrast during free operant responding. Journal of the Experimental Analysis of Behavior, 18, 509–516.

Lattal, K. A., & Lattal, A. D. (2006). And yet . . .: Further comments on distinguishing positive and negative reinforcement. The Behavior Analyst, 29, 129–134.

Lattal, K. A., & Neef, N. A. (1996). Recent reinforcement-schedule research and applied behavior analysis. Journal of Applied Behavior Analysis, 29, 213–220.

Lattal, K. A., & Shahan, T. A. (1997). Differing views of contingencies: How contiguous. The Behavior Analyst, 20, 149–154.

LaVigna, G. W., & Donnellan, A. M. (1986). Alternatives to punishment: Solving behavior problems with nonaversive strategies. New York: Irvington.

Layng, T. V. J., & Andronis, P. T. (1984). Toward a functional analysis of delusional speech and hallucinatory behavior. The Behavior Analyst, 7, 139–156.

Lee, C., & Tindal, G. A. (1994). Self-recording and goal-setting: Task and math productivity of low-achieving Korean elementary school students. Journal of Behavioral Education, 4, 459–479.

Lee, R., McComas, J. J., & Jawor, J. (2002). The effects of differential and lag reinforcement schedules on varied verbal responding by individuals with autism. Journal of Applied Behavior Analysis, 35, 391–402.

Lee, V. L. (1988). Beyond behaviorism. Hillsdale, NJ: Erlbaum.

Leitenberg, H. (1973). The use of single-case methodology in psychotherapy research. Journal of Abnormal Psychology, 82, 87–101.

Lenz, M., Singh, N., & Hewett, A. (1991). Overcorrection as an academic remediation procedure. Behavior Modification, 15, 64–73.

Lerman, D. C. (2003). From the laboratory to community application: Translational research in behavior analysis. Journal of Applied Behavior Analysis, 36, 415–419.

Lerman, D. C., & Iwata, B. A. (1993). Descriptive and experimental analysis of variables maintaining self-injurious behavior. Journal of Applied Behavior Analysis, 26, 293–319.

Lerman, D. C., & Iwata, B. A. (1995). Prevalence of the extinction burst and its attenuation during treatment. Journal of Applied Behavior Analysis, 28, 93–94.

Lerman, D. C., & Iwata, B. A. (1996a). Developing a technology for the use of operant extinction in clinical settings: An examination of basic and applied research. Journal of Applied Behavior Analysis, 29, 345–382.

Lerman, D. C., & Iwata, B. A. (1996b). A methodology for distinguishing between extinction and punishment effects associated with response blocking. Journal of Applied Behavior Analysis, 29, 231–234.

Lerman, D. C., & Vorndran, C. M. (2002). On the status of knowledge for using punishment: Implications for treating behavior disorders. Journal of Applied Behavior Analysis, 35, 431–464.

Lerman, D. C., Iwata, B. A., Shore, B. A., & DeLeon, I. G. (1997). Effects of intermittent punishment on self-injurious behavior: An evaluation of schedule thinning. Journal of Applied Behavior Analysis, 30, 187–201.

Lerman, D. C., Iwata, B. A., Shore, B. A., & Kahng, S. W. (1996). Responding maintained by intermittent reinforcement: Implications for the use of extinction with problem behavior in clinical settings. Journal of Applied Behavior Analysis, 29, 153–171.

Lerman, D. C., Iwata, B. A., Smith, R. G., Zarcone, J. R., & Vollmer, T. R. (1994). Transfer of behavioral function as a contributing factor in treatment relapse. Journal of Applied Behavior Analysis, 27, 357–370.

Lerman, D. C., Iwata, B. A., & Wallace, M. D. (1999). Side effects of extinction: Prevalence of bursting and aggression during the treatment of self-injurious behavior. Journal of Applied Behavior Analysis, 32, 1–8.

Lerman, D. C., Iwata, B. A., Zarcone, J. R., & Ringdahl, J. (1994). Assessment of stereotypic and self-injurious behavior as adjunctive responses. Journal of Applied Behavior Analysis, 27, 715–728.

Lerman, D. C., Kelley, M. E., Vorndran, C. M., Kuhn, S. A. C., & LaRue, R. H., Jr. (2002). Reinforcement magnitude and responding during treatment with differential reinforcement. Journal of Applied Behavior Analysis, 35, 29–48.

Lerman, D. C., Kelley, M. E., Vorndran, C. M., & Van Camp, C. M. (2003). Collateral effects of response blocking during the treatment of stereotypic behavior. Journal of Applied Behavior Analysis, 36, 119–123.

Levondoski, L. S., & Cartledge, G. (2000). Self-monitoring for elementary school children with serious emotional disturbances: Classroom applications for increased academic responding. Behavioral Disorders, 25, 211–224.

Lewis, T. J., Powers, L. J., Kelk, M. J., & Newcomer, L. L. (2002). Reducing problem behaviors on the playground: An investigation of the application of school-wide positive behavior supports. Psychology in the Schools, 39(2), 181–190.

Lewis, T., Scott, T., & Sugai, G. (1994). The problem behavior questionnaire: A teacher-based instrument to develop functional hypotheses of problem behavior in general education classrooms. Diagnostique, 19, 103–115.

Lindberg, J. S., Iwata, B. A., Kahng, S. W., & DeLeon, I. G. (1999). DRO contingencies: An analysis of variable-momentary schedules. Journal of Applied Behavior Analysis, 32, 123–136.

Lindberg, J. S., Iwata, B. A., Roscoe, E. M., Worsdell, A. S., & Hanley, G. P. (2003). Treatment efficacy of noncontingent reinforcement during brief and extended application. Journal of Applied Behavior Analysis, 36, 1–19.

Lindsley, O. R. (1956). Operant conditioning methods applied to research in chronic schizophrenia. Psychiatric Research Reports, 5, 118–139.

Lindsley, O. R. (1960). Characteristics of the behavior of chronic psychotics as revealed by free-operant conditioning methods. Diseases of the Nervous System (Monograph Supplement), 21, 66–78.

Lindsley, O. R. (1968). A reliable wrist counter for recording behavior rates. Journal of Applied Behavior Analysis, 1, 77–78.

Lindsley, O. R. (1971). An interview. Teaching Exceptional Children, 3, 114–119.

Lindsley, O. R. (1985). Quantified trends in the results of behavior analysis. Presidential address at the Eleventh Annual Convention of the Association for Behavior Analysis, Columbus, OH.

Lindsley, O. R. (1990). Precision teaching: By teachers for children. Teaching Exceptional Children, 22, 10–15.

Lindsley, O. R. (1992). Precision teaching: Discoveries and effects. Journal of Applied Behavior Analysis, 25, 51–57.

Lindsley, O. R. (1996). The four free-operant freedoms. The Behavior Analyst, 19, 199–210.

Linehan, M. (1977). Issues in behavioral interviewing. In J. D. Cone & R. P. Hawkins (Eds.), Behavioral assessment: New directions in clinical psychology (pp. 30–51). New York: Bruner/Mazel.

Linscheid, T. R., Iwata, B. A., Ricketts, R. W., Williams, D. E., & Griffin, J. C. (1990). Clinical evaluation of the self-injurious behavior inhibiting system (SIBIS). Journal of Applied Behavior Analysis, 23, 53–78.

Linscheid, T. R., & Meinhold, P. (1990). The controversy over aversives: Basic operant research and the side effects of punishment. In A. C. Repp & N. N. Singh (Eds.), Perspectives on the use of nonaversive and aversive interventions for persons with developmental disabilities (pp. 59–72). Sycamore, IL: Sycamore.

Linscheid, T. R., & Reichenbach, H. (2002). Multiple factors in the long-term effectiveness of contingent electric shock treatment for self-injurious behavior: A case example. Research in Developmental Disabilities, 23, 161–177.

Linscheid, T. R., Pejeau, C., Cohen, S., & Footo-Lenz, M. (1994). Positive side effects in the treatment of SIB using the Self-Injurious Behavior Inhibiting System (SIBIS): Implications for operant and biochemical explanations of SIB. Research in Developmental Disabilities, 15, 81–90.

Lipinski, D. P., Black, J. L., Nelson, R. O., & Ciminero, A. R. (1975). Influence of motivational variables on the reactivity and reliability of self-recording. Journal of Consulting and Clinical Psychology, 43, 637–646.

Litow, L., & Pumroy, D. K. (1975). A brief review of classroom group-oriented contingencies. Journal of Applied Behavior Analysis, 8, 341–347.

Lloyd, J. W., Bateman, D. F., Landrum, T. J., & Hallahan, D. P. (1989). Self-recording of attention versus productivity. Journal of Applied Behavior Analysis, 22, 315–323.

Lloyd, J. W., Eberhardt, M. J., & Drake, G. P., Jr. (1996). Group versus individual reinforcement contingencies within the context of group study conditions. Journal of Applied Behavior Analysis, 29, 189–200.

Lo, Y. (2003). Functional assessment and individualized intervention plans: Increasing the behavior adjustment of urban learners in general and special education settings. Unpublished doctoral dissertation, The Ohio State University, Columbus, OH.

Logan, K. R., & Gast, D. L. (2001). Conducting preference assessments and reinforcer testing for individuals with profound multiple disabilities: Issues and procedures. Exceptionality, 9(3), 123–134.

Logan, K. R., Jacobs, H. A., Gast, D. L., Smith, P. D., Daniel, J., & Rawls, J. (2001). Preferences and reinforcers for students with profound multiple disabilities: Can we identify them? Journal of Developmental and Physical Disabilities, 13, 97–122.

Long, E. S., Miltenberger, R. G., Ellingson, S. A., & Ott, S. M. (1999). Augmenting simplified habit reversal in the treatment of oral-digital habits exhibited by individuals with mental retardation. Journal of Applied Behavior Analysis, 32, 353–365.

Lovaas, O. I. (1977). The autistic child: Language development through behavior modification. New York: Irvington.

Lovitt, T. C. (1973). Self-management projects with children with behavioral disabilities. Journal of Learning Disabilities, 6, 138–150.

Lovitt, T. C., & Curtiss, K. A. (1969). Academic response rates as a function of teacher- and self-imposed contingencies. Journal of Applied Behavior Analysis, 2, 49–53.

Lowenkron, B. (2004). Meaning: A verbal behavior account. The Analysis of Verbal Behavior, 20, 77–97.

Luce, S. C., & Hall, R. V. (1981). Contingent exercise: A procedure used with differential reinforcement to reduce bizarre verbal behavior. Education and Treatment of Children, 4, 309–327.

Luce, S. C., Delquadri, J., & Hall, R. V. (1980). Contingent exercise: A mild but powerful procedure for suppressing inappropriate verbal and aggressive behavior. Journal of Applied Behavior Analysis, 13, 583–594.

Luciano, C. (1986). Acquisition, maintenance, and generalization of productive intraverbal behavior through transfer of stimulus control procedures. Applied Research in Mental Retardation, 7, 1–20.

Ludwig, R. L. (2004). Smiley faces and spinners: Effects of self-monitoring of productivity with an indiscriminable contingency of reinforcement on the on-task behavior and academic productivity by kindergarteners during independent seatwork. Unpublished master's thesis, The Ohio State University.

Luiselli, J. K. (1984). Controlling disruptive behaviors of an autistic child: Parent-mediated contingency management in the home setting. Education and Treatment of Children, 3, 195–203.

Lynch, D. C., & Cuvo, A. J. (1995). Stimulus equivalence instruction of fraction–decimal relations. Journal of Applied Behavior Analysis, 28, 115–126.

Lyon, C. S., & Lagarde, R. (1997). Tokens for success: Using the graduated reinforcement system. Teaching Exceptional Children, 29(6), 52–57.

Maag, J. W., Reid, R., & DiGangi, S. A. (1993). Differential effects of self-monitoring attention, accuracy, and productivity. Journal of Applied Behavior Analysis, 26, 329–344.

Mabry, J. H. (1994). Review of R. A. Harris' Linguistic wars. The Analysis of Verbal Behavior, 12, 79–86.

Mabry, J. H. (1995). Review of Pinker's The language instinct. The Analysis of Verbal Behavior, 12, 87–96.

MacCorquodale, K. (1970). On Chomsky's review of Skinner's Verbal behavior. Journal of the Experimental Analysis of Behavior, 13, 83–99.

MacCorquodale, K., & Meehl, P. (1948). On a distinction between hypothetical constructs and intervening variables. Psychological Review, 55, 95–107.

MacDuff, G. S., Krantz, P. J., & McClannahan, L. E. (1993). Teaching children with autism to use photographic activity schedules: Maintenance and generalization of complex response chains. Journal of Applied Behavior Analysis, 26, 89–97.

Mace, F. C. (1996). In pursuit of general behavioral relations. Journal of Applied Behavior Analysis, 29, 557–563.

Mace, F. C., & Belfiore, P. (1990). Behavioral momentum in the treatment of escape-motivated stereotypy. Journal of Applied Behavior Analysis, 23, 507–514.

Mace, F. C., Hock, M. L., Lalli, J. S., West, B. J., Belfiore, P., Pinter, E., & Brown, D. K. (1988). Behavioral momentum in the treatment of noncompliance. Journal of Applied Behavior Analysis, 21, 123–141.

Mace, F. C., Page, T. J., Ivancic, M. T., & O'Brien, S. (1986). Effectiveness of brief time-out with and without contingency delay: A comparative analysis. Journal of Applied Behavior Analysis, 19, 79–86.

MacNeil, J., & Thomas, M. R. (1976). Treatment of obsessive-compulsive hairpulling (trichotillomania) by behavioral and cognitive contingency manipulation. Journal of Behavior Therapy and Experimental Psychiatry, 7, 391–392.

Madden, G. J., Chase, P. N., & Joyce, J. H. (1998). Making sense of sensitivity in the human operant literature. The Behavior Analyst, 21, 1–12.

Madsen, C. H., Becker, W. C., Thomas, D. R., Koser, L., & Plager, E. (1970). An analysis of the reinforcing function of "sit down" commands. In R. K. Parker (Ed.), Readings in educational psychology (pp. 71–82). Boston: Allyn & Bacon.

Maglieri, K. A., DeLeon, I. G., Rodriguez-Catter, V., & Sevin, B. M. (2000). Treatment of covert food stealing in an individual with Prader-Willi syndrome. Journal of Applied Behavior Analysis, 33, 615–618.

Maheady, L., Mallete, B., Harper, G. F., & Saca, K. (1991). Heads together: A peer-mediated option for improving the academic achievement of heterogeneous learning groups. Remedial and Special Education, 12(2), 25–33.

Mahoney, M. J. (1971). The self-management of covert behavior: A case study. Behavior Therapy, 2, 575–578.

Mahoney, M. J. (1976). Terminal terminology. Journal of Applied Behavior Analysis, 9, 515–517.

Maletzky, B. M. (1974). Behavior recording as treatment. Behavior Therapy, 5, 107–111.

Maloney, K. B., & Hopkins, B. L. (1973). The modification of sentence structure and its relationship to subjective judgments of creativity in writing. Journal of Applied Behavior Analysis, 6, 425–433.

Malott, R. W. (1981). Notes from a radical behaviorist. Kalamazoo, MI: Author.

Malott, R. W. (1984). Rule-governed behavior, self-management, and the developmentally disabled: A theoretical analysis. Analysis and Intervention in Developmental Disabilities, 6, 53–68.

Malott, R. W. (1988). Rule-governed behavior and behavioral anthropology. The Behavior Analyst, 11, 181–203.

Malott, R. W. (1989). The achievement of evasive goals: Control by rules describing contingencies that are not direct acting. In S. C. Hayes (Ed.), Rule-governed behavior: Cognition, contingencies, and instructional control (pp. 269–322). Reno, NV: Context Press.

Malott, R. W. (2005a). Self-management. In M. Hersen & J. Rosqvist (Eds.), Encyclopedia of behavior modification and cognitive behavior therapy (Vol. 1: Adult clinical applications, pp. 516–521). Newbury Park, CA: Sage.

Malott, R. W. (2005b). Behavioral systems analysis and higher education. In W. L. Heward, T. E. Heron, N. A. Neef, S. M. Peterson, D. M. Sainato, G. Cartledge, R. Gardner III, L. D. Peterson, S. B. Hersh, & J. C. Dardig (Eds.), Focus on behavior analysis in education: Achievements, challenges, and opportunities (pp. 211–236). Upper Saddle River, NJ: Merrill/Prentice Hall.

Malott, R. W., & Garcia, M. E. (1991). The role of private events in rule-governed behavior. In L. J. Hayes & P. Chase (Eds.), Dialogues on verbal behavior (pp. 237–254). Reno, NV: Context Press.

Malott, R. W., & Harrison, H. (2002). I'll stop procrastinating when I get around to it: Plus other cool ways to succeed in school and life using behavior analysis to get your act together. Kalamazoo, MI: Department of Psychology, Western Michigan University.

Malott, R. W., & Trojan Suarez, E. A. (2004). Elementary principles of behavior (5th ed.). Upper Saddle River, NJ: Prentice Hall.

Malott, R. W., General, D. A., & Snapper, V. B. (1973). Issues in the analysis of behavior. Kalamazoo, MI: Behaviordelia.

Malott, R. W., Tillema, M., & Glenn, S. (1978). Behavior analysis and behavior modification: An introduction. Kalamazoo, MI: Behaviordelia.

Mank, D. M., & Horner, R. H. (1987). Self-recruited feedback: A cost-effective procedure for maintaining behavior. Research in Developmental Disabilities, 8, 91–112.

March, R., Horner, R. H., Lewis-Palmer, T., Brown, D., Crone, D., Todd, A. W., et al. (2000). Functional Assessment Checklist for Teachers and Staff (FACTS). Eugene, OR: University of Oregon, Department of Educational and Community Supports.

Marckel, J. M., Neef, N. A., & Ferreri, S. J. (2006). A preliminary analysis of teaching improvisation with the picture exchange communication system to children with autism. Journal of Applied Behavior Analysis, 39, 109–115.

Marcus, B. A., & Vollmer, T. R. (1995). Effects of differential negative reinforcement on disruption and compliance. Journal of Applied Behavior Analysis, 28, 229–230.

Marholin, D., II, & Steinman, W. (1977). Stimulus control in the classroom as a function of the behavior reinforced. Journal of Applied Behavior Analysis, 10, 465–478.

Marholin, D., Touchette, P. E., & Stuart, R. M. (1979). Withdrawal of chronic chlorpromazine medication: An experimental analysis. Journal of Applied Behavior Analysis, 12, 150–171.

Markle, S. M. (1962). Good frames and bad: A grammar of frame writing (2nd ed.). New York: Wiley.

Markwardt, F. C. (2005). Peabody Individual Achievement Test. Circle Pines, MN: American Guidance Service.

Marmolejo, E. K., Wilder, D. A., & Bradley, L. (2004). A preliminary analysis of the effects of response cards on student performance and participation in an upper division university course. Journal of Applied Behavior Analysis, 37, 405–410.

Marr, J. (2003). Empiricism. In K. A. Lattal & P. C. Chase (Eds.), Behavior theory and philosophy (pp. 63–82). New York: Kluwer/Plenum.

Marshall, A. E., & Heward, W. L. (1979). Teaching self-management to incarcerated youth. Behavioral Disorders, 4, 215–226.

Marshall, K. J., Lloyd, J. W., & Hallahan, D. P. (1993). Effects of training to increase self-monitoring accuracy. Journal of Behavioral Education, 3, 445–459.

Martella, R., Leonard, I. J., Marchand-Martella, N. E., & Agran, M. (1993). Self-monitoring negative statements. Journal of Behavioral Education, 3, 77–86.

Martens, B. K., Hiralall, A. S., & Bradley, T. A. (1997). A note to teacher: Improving student behavior through goal setting and feedback. School Psychology Quarterly, 12, 33–41.

Martens, B. K., Lochner, D. G., & Kelly, S. Q. (1992). The effects of variable-interval reinforcement on academic engagement: A demonstration of matching theory. Journal of Applied Behavior Analysis, 25, 143–151.

Martens, B. K., Witt, J. C., Elliott, S. N., & Darveaux, D. (1985). Teacher judgments concerning the acceptability of school-based interventions. Professional Psychology: Research and Practice, 16, 191–198.

Martin, G., & Pear, J. (2003). Behavior modification: What it is and how to do it (7th ed.). Upper Saddle River, NJ: Prentice Hall.

Martinez-Diaz, J. A. (2003). Raising the bar. Presidential address presented at the annual conference of the Florida Association for Behavior Analysis, St. Petersburg.

Mastellone, M. (1974). Aversion therapy: A new use of the old rubberband. Journal of Behavior Therapy and Experimental Psychiatry, 5, 311–312.

Matson, J. L., & Taras, M. E. (1989). A 20-year review of punishment and alternative methods to treat problem behaviors in developmentally delayed persons. Research in Developmental Disabilities, 10, 85–104.

Mattaini, M. A. (1995). Contingency diagrams as teaching tools. The Behavior Analyst, 18, 93–98.

Maurice, C. (1993). Let me hear your voice: A family's triumph over autism. New York: Fawcett Columbine.

Maurice, C. (2006). The autism wars. In W. L. Heward (Ed.), Exceptional children: An introduction to special education (8th ed., pp. 291–293). Upper Saddle River, NJ: Merrill/Prentice Hall.

Maxwell, J. C. (2003). There's no such thing as "business" ethics: There's only one rule for decision making. New York: Warner Business Books.

Mayer, G. R., Sulzer, B., & Cody, J. J. (1968). The use of punishment in modifying student behavior. Journal of Special Education, 2, 323–328.

Mayfield, K. H., & Chase, P. N. (2002). The effects of cumulative practice on mathematics problem solving. Journal of Applied Behavior Analysis, 35, 105–123.

Mayhew, G., & Harris, F. (1979). Decreasing self-injurious behavior. Behavior Modification, 3, 322–326.

Mazaleski, J. L., Iwata, B. A., Rodgers, T. A., Vollmer, T. R., & Zarcone, J. R. (1994). Protective equipment as treatment for stereotypic hand mouthing: Sensory extinction or punishment effects? Journal of Applied Behavior Analysis, 27, 345–355.

McAllister, L. W., Stachowiak, J. G., Baer, D. M., & Conderman, L. (1969). The application of operant conditioning techniques in a secondary school classroom. Journal of Applied Behavior Analysis, 2, 277–285.

McCain, L. J., & McCleary, R. (1979). The statistical analysis of the simple interrupted time series quasi-experiment. In T. D. Cook & D. T. Campbell (Eds.), Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally.

McClannahan, L. E., & Krantz, P. J. (1999). Activity schedules for children with autism: Teaching independent behavior. Bethesda, MD: Woodbine House.

McClannahan, L. E., McGee, G. G., MacDuff, G. S., & Krantz, P. J. (1990). Assessing and improving child care: A personal appearance index for children with autism. Journal of Applied Behavior Analysis, 23, 469–482.

McConnell, M. E. (1999). Self-monitoring, cueing, recording, and managing: Teaching students to manage their own behavior. Teaching Exceptional Children, 32(2), 14–21.

McCord, B. E., Iwata, B. A., Galensky, T. L., Ellingson, S. A., & Thomson, R. J. (2001). Functional analysis and treatment of problem behavior evoked by noise. Journal of Applied Behavior Analysis, 34, 447–462.

McCullough, J. P., Cornell, J. E., McDaniel, M. H., & Mueller, R. K. (1974). Utilization of the simultaneous treatment design to improve student behavior in a first-grade classroom. Journal of Consulting and Clinical Psychology, 42, 288–292.

McEntee, J. E., & Saunders, R. R. (1997). A response-restriction analysis of stereotypy in adolescents with mental retardation: Implications for applied behavior analysis. Journal of Applied Behavior Analysis, 30, 485–506.

McEvoy, M. A., & Brady, M. P. (1988). Contingent access to play materials as an academic motivator for autistic and behavior disordered children. Education and Treatment of Children, 11, 5–18.

McFall, R. M. (1977). Parameters of self-monitoring. In R. B. Stuart (Ed.), Behavioral self-management (pp. 196–214). New York: Bruner/Mazel.

McGee, G. G., Krantz, P. J., & McClannahan, L. E. (1985). The facilitative effects of incidental teaching on preposition use by autistic children. Journal of Applied Behavior Analysis, 18, 17–31.

McGee, G. G., Morrier, M., & Daly, T. (1999). An incidental teaching approach to early intervention for toddlers with autism. Journal of the Association for Persons with Severe Handicaps, 24, 133–146.


McGill, P. (1999). Establishing operations: Implications for assessment, treatment, and prevention of problem behavior. Journal of Applied Behavior Analysis, 32, 393–418.

McGinnis, E. (1984). Teaching social skills to behaviorally disordered youth. In J. K. Grosenick, S. L. Huntze, E. McGinnis, & C. R. Smith (Eds.), Social/affective interventions in behaviorally disordered youth (pp. 87–112). Des Moines, IA: Department of Public Instruction.

McGonigle, J. J., Rojahn, J., Dixon, J., & Strain, P. S. (1987). Multiple treatment interference in the alternating treatments design as a function of the intercomponent interval length. Journal of Applied Behavior Analysis, 20, 171–178.

McGuffin, M. E., Martz, S. A., & Heron, T. E. (1997). The effects of self-correction versus traditional spelling on the spelling performance and maintenance of third grade students. Journal of Behavioral Education, 7, 463–476.

McGuire, M. T., Wing, R. R., Klem, M. L., & Hill, J. O. (1999). Behavioral strategies of individuals who have maintained long-term weight losses. Obesity Research, 7, 334–341.

McIlvane, W. J., & Dube, W. V. (1992). Stimulus control shaping and stimulus control topographies. The Behavior Analyst, 15, 89–94.

McIlvane, W. J., Dube, W. V., Green, G., & Serna, R. W. (1993). Programming conceptual and communication skill development: A methodological stimulus class analysis. In A. P. Kaiser & D. B. Gray (Eds.), Enhancing children's communication (Vol. 2, pp. 243–285). Baltimore: Brookes.

McKerchar, P. M., & Thompson, R. H. (2004). A descriptive analysis of potential reinforcement contingencies in the preschool classroom. Journal of Applied Behavior Analysis, 37, 431–444.

McLaughlin, T., & Malaby, J. (1972). Reducing and measuring inappropriate verbalizations in a token classroom. Journal of Applied Behavior Analysis, 5, 329–333.

McNeish, J., Heron, T. E., & Okyere, B. (1992). Effects of self-correction on the spelling performance of junior high students with learning disabilities. Journal of Behavioral Education, 2, 17–27.

McPherson, A., Bonem, M., Green, G., & Osborne, J. G. (1984). A citation analysis of the influence on research of Skinner's Verbal behavior. The Behavior Analyst, 7, 157–167.

McWilliams, R., Nietupski, J., & Hamre-Nietupski, S. (1990). Teaching complex activities to students with moderate handicaps through the forward chaining of shorter total cycle response sequences. Education and Training in Mental Retardation, 25(3), 292–298.

Mechling, L. C., & Gast, D. L. (1997). Combination audio/visual self-prompting system for teaching chained tasks to students with intellectual disabilities. Education and Training in Mental Retardation and Developmental Disabilities, 32, 138–153.

Meichenbaum, D., & Goodman, J. (1971). The developmental control of operant motor responding by verbal operants. Journal of Experimental Child Psychology, 7, 553–565.

Mercatoris, M., & Craighead, W. E. (1974). Effects of nonparticipant observation on teacher and pupil classroom behavior. Journal of Educational Psychology, 66, 512–519.

Meyer, L. H., & Evans, I. M. (1989). Nonaversive intervention for behavior problems: A manual for home and community. Baltimore: Paul H. Brookes.

Michael, J. (1974). Statistical inference for individual organism research: Mixed blessing or curse? Journal of Applied Behavior Analysis, 7, 647–653.

Michael, J. (1975). Positive and negative reinforcement, a distinction that is no longer necessary; or a better way to talk about bad things. Behaviorism, 3, 33–44.

Michael, J. (1980). Flight from behavior analysis. The Behavior Analyst, 3, 1–22.

Michael, J. (1982). Distinguishing between discriminative and motivational functions of stimuli. Journal of the Experimental Analysis of Behavior, 37, 149–155.

Michael, J. (1982). Skinner's elementary verbal relations: Some new categories. The Analysis of Verbal Behavior, 1, 1–4.

Michael, J. (1984). Verbal behavior. Journal of the Experimental Analysis of Behavior, 42, 363–376.

Michael, J. (1988). Establishing operations and the mand. The Analysis of Verbal Behavior, 6, 3–9.

Michael, J. (1991). Verbal behavior: Objectives, exams, and exam answers. Kalamazoo, MI: Western Michigan University.

Michael, J. (1992). Introduction I. In Verbal behavior by B. F. Skinner (Reprinted edition). Cambridge, MA: B. F. Skinner Foundation.

Michael, J. (1993). Concepts and principles of behavior analysis. Kalamazoo, MI: Society for the Advancement of Behavior Analysis.

Michael, J. (1993). Establishing operations. The Behavior Analyst, 16, 191–206.

Michael, J. (1995). What every student of behavior analysis ought to learn: A system for classifying the multiple effects of behavioral variables. The Behavior Analyst, 18, 273–284.

Michael, J. (2000). Implications and refinements of the establishing operation concept. Journal of Applied Behavior Analysis, 33, 401–410.

Michael, J. (2003). The multiple control of verbal behavior. Invited tutorial presented at the 29th Annual Convention of the Association for Behavior Analysis, San Francisco, CA.

Michael, J. (2004). Concepts and principles of behavior analysis (rev. ed.). Kalamazoo, MI: Society for the Advancement of Behavior Analysis.

Michael, J. (2006). Comment on Baron and Galizio. The Behavior Analyst, 29, 117–119.

Michael, J., & Shafer, E. (1995). State notation for teaching about behavioral procedures. The Behavior Analyst, 18, 123–140.

Michael, J., & Sundberg, M. L. (2003, May 23). Skinner's analysis of verbal behavior: Beyond the elementary verbal operants. Workshop conducted at the 29th Annual Convention of the Association for Behavior Analysis, San Francisco, CA.

Miguel, C. F., Carr, J. E., & Michael, J. (2002). Effects of a stimulus-stimulus pairing procedure on the vocal behavior of children diagnosed with autism. The Analysis of Verbal Behavior, 18, 3–13.

Millenson, J. R. (1967). Principles of behavioral analysis. New York: Macmillan.

Miller, A. D., Hall, S. W., & Heward, W. L. (1995). Effects of sequential 1-minute time trials with and without inter-trial feedback and self-correction on general and special education students' fluency with math facts. Journal of Behavioral Education, 5, 319–345.

Miller, D. L., & Kelley, M. L. (1994). The use of goal setting and contingency contracting for improving children's homework performance. Journal of Applied Behavior Analysis, 27, 73–84.

Miller, D. L., & Stark, L. J. (1994). Contin-gency contracting for improving ad-herence in pediatric populations.Journal of the American Medical As-sociation, 271 (1), 81–83.

Miller, N., & Dollard, J. (1941). Sociallearning and imitation. New Haven,CT: Yale University Press.

Miller, N., & Neuringer, A. (2000). Rein-forcing variability in adolescents withautism. Journal of Applied BehaviorAnalysis, 33, 151–165.

Miltenberger, R. (2004). Behavior modifi-cation: Principles and procedures (3rded.). Belmont, CA: Wadsworth/Thom-son Learning.

Miltenberger, R. G. (2001). Behavior mod-ification: Principles and procedures(2nd ed.). Belmont, CA: Wadsworth/Thomson Learning.

Miltenberger, R. G., & Fuqua, R. W.(1981). Overcorrection: A review andcritical analysis. The Behavioral Ana-lyst, 4, 123–141.

Miltenberger, R. G., Flessner, C., Gath-eridge, B., Johnson, B., Satterlund, M.,& Egemo, K. (2004). Evaluation of be-havior skills training to prevent gunplay in children. Journal of Applied Be-havior Analysis, 37, 513–516.

Miltenberger, R. G., Fuqua, R. W., & Woods,D. W. (1998). Applying behavior analy-sis to clinical problems: Review andanalysis of habit reversal. Journal of Ap-plied Behavior Analysis, 31, 447–469.

Miltenberger, R. G., Gatheridge, B., Satter-lund, M., Egemo-Helm, K. R., Johnson,B. M., Jostad, C., Kelso, P., & Flessner,C. A. (2005). Teaching safety skills tochildren to prevent gun play: An evalu-ation of in situ training. Journal of Ap-plied Behavior Analysis, 38, 395–398.

Miltenberger, R. G., Rapp, J., & Long, E.(1999). A low-tech method for conduct-ing real time recording. Journal of Ap-plied Behavior Analysis, 32, 119–120.

Mineka, S. (1975). Some new perspectiveson conditioned hunger. Journal of Ex-perimental Psychology: Animal Be-havior Processes, 104, 143–148.

Mischel, H. N., Ebbesen, E. B., & Zeiss,A. R. (1972). Cognitive and attentionalmechanisms in delay of gratification.Journal of Personality and Social Psy-chology, 16, 204–218.

Mischel, W., & Gilligan, C. (1964). Delay of gratification, motivation for the prohibited gratification, and responses to temptation. Journal of Abnormal and Social Psychology, 69, 411–417.

Mitchell, R. J., Schuster, J. W., Collis, B. C., & Gassaway, L. J. (2000). Teaching vocational skills with a faded auditory prompting system. Education and Training in Mental Retardation and Developmental Disabilities, 35, 415–427.

Mitchem, K. J., & Young, K. R. (2001). Adapting self-management programs for classwide use: Acceptability, feasibility, and effectiveness. Remedial and Special Education, 22, 75–88.

Mitchem, K. J., Young, K. R., West, R. P., & Benyo, J. (2001). CWPASM: A classwide peer-assisted self-management program for general education classrooms. Education and Treatment of Children, 24, 3–14.

Molè, P. (2003). Ockham’s razor cuts both ways: The uses and abuses of simplicity in scientific theories. Skeptic, 10(1), 40–47.

Moore, J. (1980). On behaviorism and private events. Psychological Record, 30, 459–475.

Moore, J. (1984). On behaviorism, knowledge, and causal explanation. Psychological Record, 34, 73–97.

Moore, J. (1985). Some historical and conceptual relations among logical positivism, operationism, and behaviorism. The Behavior Analyst, 8, 53–63.

Moore, J. (1995). Radical behaviorism and the subjective-objective distinction. The Behavior Analyst, 18, 33–49.

Moore, J. (2000). Thinking about thinking and feeling about feeling. The Behavior Analyst, 23(1), 45–56.

Moore, J. (2003). Behavior analysis, mentalism, and the path to social justice. The Behavior Analyst, 26, 181–193.

Moore, J. W., Mueller, M. M., Dubard, M., Roberts, D. S., & Sterling-Turner, H. E. (2002). The influence of therapist attention on self-injury during a tangible condition. Journal of Applied Behavior Analysis, 35, 283–286.

Moore, J., & Cooper, J. O. (2003). Some proposed relations among the domains of behavior analysis. The Behavior Analyst, 26, 69–84.

Moore, J., & Shook, G. L. (2001). Certification, accreditation and quality control in behavior analysis. The Behavior Analyst, 24, 45–55.

Moore, R., & Goldiamond, I. (1964). Errorless establishment of visual discrimination using fading procedures. Journal of the Experimental Analysis of Behavior, 7, 269–272.

Morales v. Turman, 364 F. Supp. 166 (E.D. Tx. 1973).

Morgan, D., Young, K. R., & Goldstein, S. (1983). Teaching behaviorally disordered students to increase teacher attention and praise in mainstreamed classrooms. Behavioral Disorders, 8, 265–273.

Morgan, Q. E. (1978). Comparison of two “Good Behavior Game” group contingencies on the spelling accuracy of fourth-grade students. Unpublished master’s thesis, The Ohio State University, Columbus.

Morris, E. K. (1991). Deconstructing “technological to a fault”. Journal of Applied Behavior Analysis, 24, 411–416.

Morris, E. K., & Smith, N. G. (2003). Bibliographic processes and products, and a bibliography of the published primary-source works of B. F. Skinner. The Behavior Analyst, 26, 41–67.

Morris, R. J. (1985). Behavior modification with exceptional children: Principles and practices. Glenview, IL: Scott, Foresman.

Morse, W. H., & Kelleher, R. T. (1977). Determinants of reinforcement and punishment. In W. K. Honig & J. E. R. Staddon (Eds.), Handbook of operant behavior (pp. 174–200). Upper Saddle River, NJ: Prentice Hall.

Morton, W. L., Heward, W. L., & Alber, S. R. (1998). When to self-correct? A comparison of two procedures on spelling performance. Journal of Behavioral Education, 8, 321–335.

Mowrer, O. H. (1950). Learning theory and personality dynamics. New York: The Ronald Press Company.

Moxley, R. A. (1990). On the relationship between speech and writing with implications for behavioral approaches to teaching literacy. The Analysis of Verbal Behavior, 8, 127–140.

Moxley, R. A. (1998). Treatment-only designs and student self-recording strategies for public school teachers. Education and Treatment of Children, 21, 37–61.

Moxley, R. A. (2004). Pragmatic selectionism: The philosophy of behavior analysis. The Behavior Analyst Today, 5, 108–125.

Moxley, R. A., Lutz, P. A., Ahlborn, P., Boley, N., & Armstrong, L. (1995). Self-recording word counts of freewriting in grades 1–4. Education and Treatment of Children, 18, 138–157.

Mudford, O. C. (1995). Review of the gentle teaching data. American Journal on Mental Retardation, 99, 345–355.

Mueller, M. M., Moore, J. W., Doggett, R. A., & Tingstrom, D. H. (2000). The effectiveness of contingency-specific and contingency-nonspecific prompts in controlling bathroom graffiti. Journal of Applied Behavior Analysis, 33, 89–92.

Mueller, M. M., Piazza, C. C., Moore, J. W., Kelley, M. E., Bethke, S. A., Pruett, A. E., Oberdorff, A. J., & Layer, S. A. (2003). Training parents to implement pediatric feeding protocols. Journal of Applied Behavior Analysis, 36, 545–562.

Murphy, E. S., McSweeney, F. K., Smith, R. G., & McComas, J. J. (2003). Dynamic changes in reinforcer effectiveness: Theoretical, methodological, and practical implications for applied research. Journal of Applied Behavior Analysis, 36, 421–438.

Murphy, R. J., Ruprecht, M. J., Baggio, P., & Nunes, D. L. (1979). The use of mild punishment in combination with reinforcement of alternate behaviors to reduce the self-injurious behavior of a profoundly retarded individual. AAESPH Review, 4, 187–195.

Musser, E. H., Bray, M. A., Kehle, T. J., & Jenson, W. R. (2001). Reducing disruptive behaviors in students with serious emotional disturbance. School Psychology Review, 30(2), 294–304.

Myer, J. S. (1971). Some effects of noncontingent aversive stimulation. In R. F. Brush (Ed.), Aversive conditioning and learning (pp. 469–536). New York: Academic Press.

Myles, B. S., Moran, M. R., Ormsbee, C. K., & Downing, J. A. (1992). Guidelines for establishing and maintaining token economies. Intervention in School and Clinic, 27(3), 164–169.

Nakano, Y. (2004). Toward the establishment of behavioral ethics: Ethical principles of behavior analysis in the era of empirically supported treatment (EST). Japanese Journal of Behavior Analysis, 19(1), 18–51.

Narayan, J. S., Heward, W. L., Gardner, R., III, Courson, F. H., & Omness, C. (1990). Using response cards to increase student participation in an elementary classroom. Journal of Applied Behavior Analysis, 23, 483–490.

National Association of School Psychologists. (2000). Professional conduct manual: Principles for professional ethics and guidelines for the provision of school psychological services. Bethesda, MD: NASP Publications.

National Association of Social Workers. (1996). The NASW code of ethics. Washington, DC: Author.

National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont Report: Ethical principles and guidelines for the protection of human subjects of research. Washington, DC: Department of Health, Education, and Welfare. Retrieved November 11, 2003, from http://ohsr.od.nih.gov/mpa/belmont.php3

National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction: Reports of the subgroups (NIH Pub. No. 00-4754). Bethesda, MD: National Institute of Child Health and Human Development. [Available at: www.nichd.nih.gov.publications/nrp.report/htm]

National Reading Panel. Retrieved November 29, 2005, from www.nationalreadingpanel.org

Neef, N. A., Bicard, D. F., & Endo, S. (2001). Assessment of impulsivity and the development of self-control by students with attention deficit hyperactivity disorder. Journal of Applied Behavior Analysis, 34, 397–408.

Neef, N. A., Bicard, D. F., Endo, S., Coury, D. L., & Aman, M. G. (2005). Evaluation of pharmacological treatment of impulsivity by students with attention deficit hyperactivity disorder. Journal of Applied Behavior Analysis, 38, 135–146.

Neef, N. A., Iwata, B. A., & Page, T. J. (1980). The effects of interspersal training versus high density reinforcement on spelling acquisition and retention. Journal of Applied Behavior Analysis, 13, 153–158.

Neef, N. A., Lensbower, J., Hockersmith, I., DePalma, V., & Gray, K. (1990). In vivo versus simulation training: An interactional analysis of range and type of training exemplars. Journal of Applied Behavior Analysis, 23, 447–458.

Neef, N. A., Mace, F. C., & Shade, D. (1993). Impulsivity in students with serious emotional disturbance: The interactive effects of reinforcer rate, delay, and quality. Journal of Applied Behavior Analysis, 26, 37–52.

Neef, N. A., Mace, F. C., Shea, M. C., & Shade, D. (1992). Effects of reinforcer rate and reinforcer quality on time allocation: Extensions of matching theory to educational settings. Journal of Applied Behavior Analysis, 25, 691–699.

Neef, N. A., Markel, J., Ferreri, S., Bicard, D. F., Endo, S., Aman, M. G., Miller, K. M., Jung, S., Nist, L., & Armstrong, N. (2005). Effects of modeling versus instructions on sensitivity to reinforcement schedules. Journal of Applied Behavior Analysis, 38, 23–37.

Neef, N. A., Parrish, J. M., Hannigan, K. F., Page, T. J., & Iwata, B. A. (1990). Teaching self-catheterization skills to children with neurogenic bladder complications. Journal of Applied Behavior Analysis, 22, 237–243.

Neisser, U. (1976). Cognition and reality. San Francisco: Freeman.

Nelson, R. O., & Hayes, S. C. (1981). Theoretical explanations for reactivity in self-monitoring. Behavior Modification, 5, 3–14.

Neuringer, A. (1993). Reinforced variation and selection. Animal Learning and Behavior, 21, 83–91.

Neuringer, A. (2004). Reinforced variability in animals and people: Implications for adaptive action. American Psychologist, 59, 891–906.

Nevin, J. A. (1998). Choice and momentum. In W. O’Donohue (Ed.), Learning and behavior therapy (pp. 230–251). Boston: Allyn and Bacon.

Newman, B., Buffington, D. M., Hemmes, N. S., & Rosen, D. (1996). Answering objections to self-management and related concepts. Behavior and Social Issues, 6, 85–95.

Newman, B., Reinecke, D. R., & Meinberg, D. (2000). Self-management of varied responding in children with autism. Behavioral Interventions, 15, 145–151.

Newman, R. S., & Golding, L. (1990). Children’s reluctance to seek help with school work. Journal of Educational Psychology, 82, 92–100.

Newstrom, J., McLaughlin, T. F., & Sweeney, W. J. (1999). The effects of contingency contracting to improve the mechanics of written language with a middle school student with behavior disorders. Child & Family Behavior Therapy, 21(1), 39–48.

Newton, J. T., & Sturmey, P. (1991). The Motivation Assessment Scale: Inter-rater reliability and internal consistency in a British sample. Journal of Mental Deficiency Research, 35, 472–474.

Nihira, K., Leland, H., & Lambert, N. K. (1993). Adaptive Behavior Scale—Residential and Community (2nd ed.). Austin, TX: Pro-Ed.

Ninness, H. A. C., Fuerst, J., & Rutherford, R. D. (1991). Effects of self-management training and reinforcement on the transfer of improved conduct in the absence of supervision. Journal of Applied Behavior Analysis, 24, 499–508.

Noell, G. H., VanDerHeyden, A. M., Gatti, S. L., & Whitmarsh, E. L. (2001). Functional assessment of the effects of escape and attention on students’ compliance during instruction. School Psychology Quarterly, 16, 253–269.

Nolan, J. D. (1968). Self-control procedures in the modification of smoking behavior. Journal of Consulting and Clinical Psychology, 32, 92–93.

North, S. T., & Iwata, B. A. (2005). Motivational influences on performance maintained by food reinforcement. Journal of Applied Behavior Analysis, 38, 317–333.

Northup, J. (2000). Further evaluation of the accuracy of reinforcer surveys: A systematic replication. Journal of Applied Behavior Analysis, 33, 335–338.

Northup, J., George, T., Jones, K., Broussard, C., & Vollmer, T. R. (1996). A comparison of reinforcer assessment methods: The utility of verbal and pictorial choice procedures. Journal of Applied Behavior Analysis, 29, 201–212.

Northup, J., Vollmer, T. R., & Serrett, K. (1993). Publication trends in 25 years of the Journal of Applied Behavior Analysis. Journal of Applied Behavior Analysis, 26, 527–537.

Northup, J., Wacker, D., Sasso, G., Steege, M., Cigrand, K., Cook, J., & DeRaad, A. (1991). A brief functional analysis of aggressive and alternative behavior in an outclinic setting. Journal of Applied Behavior Analysis, 24, 509–522.

Novak, G. (1996). Developmental psychology: Dynamical systems and behavior analysis. Reno, NV: Context Press.

O’Brien, F. (1968). Sequential contrast effects with human subjects. Journal of the Experimental Analysis of Behavior, 11, 537–542.

O’Brien, S., & Karsh, K. G. (1990). Treatment acceptability: Consumer, therapist, and society. In A. C. Repp & N. N. Singh (Eds.), Perspectives on the use of nonaversive and aversive interventions for persons with developmental disabilities (pp. 503–516). Sycamore, IL: Sycamore.

O’Donnell, J. (2001). The discriminative stimulus for punishment or SDp. The Behavior Analyst, 24, 261–262.

O’Leary, K. D. (1977). Teaching self-management skills to children. In D. Upper (Ed.), Perspectives in behavior therapy. Kalamazoo, MI: Behaviordelia.

O’Leary, K. D., & O’Leary, S. G. (Eds.). (1972). Classroom management: The successful use of behavior modification. New York: Pergamon.

O’Leary, K. D., Kaufman, K. F., Kass, R. E., & Drabman, R. S. (1970). The effects of loud and soft reprimands on the behavior of disruptive students. Exceptional Children, 37, 145–155.

O’Leary, S. G., & Dubey, D. R. (1979). Applications of self-control procedures by children: A review. Journal of Applied Behavior Analysis, 12, 449–465.

O’Neill, R. E., Horner, R. H., Albin, R. W., Sprague, J. R., Storey, K., & Newton, J. S. (1997). Functional assessment for problem behavior: A practical handbook (2nd ed.). Pacific Grove, CA: Brooks/Cole.

O’Reilly, M. F. (1995). Functional analysis and treatment of escape-maintained aggression correlated with sleep deprivation. Journal of Applied Behavior Analysis, 28, 225–226.

O’Reilly, M., Green, G., & Braunling-McMorrow, D. (1990). Self-administered written prompts to teach home accident prevention skills to adults with brain injuries. Journal of Applied Behavior Analysis, 23, 431–446.

O’Sullivan, J. L. (1999). Adult guardianship and alternatives. In R. D. Dinerstein, S. S. Herr, & J. L. O’Sullivan (Eds.), A guide to consent (pp. 7–37). Washington, DC: American Association on Mental Retardation.

Odom, S. L., Hoyson, M., Jamieson, B., & Strain, P. S. (1985). Increasing handicapped preschoolers’ peer social interactions: Cross-setting and component analysis. Journal of Applied Behavior Analysis, 18, 3–16.

Oliver, C. O., Oxener, G., Hearn, M., & Hall, S. (2001). Effects of social proximity on multiple aggressive behaviors. Journal of Applied Behavior Analysis, 34, 85–88.

Ollendick, T. H., Matson, J. L., Esveldt-Dawson, K., & Shapiro, E. S. (1980). Increasing spelling achievement: An analysis of treatment procedures utilizing an alternating treatments design. Journal of Applied Behavior Analysis, 13, 645–654.

Ollendick, T., Matson, J., Esveldt-Dawson, K., & Shapiro, E. (1980). An initial investigation into the parameters of overcorrection. Psychological Reports, 39, 1139–1142.

Olympia, D. W., Sheridan, S. M., Jenson, W. R., & Andrews, D. (1994). Using student-managed interventions to increase homework completion and accuracy. Journal of Applied Behavior Analysis, 27, 85–99.

Ortiz, K. R., & Carr, J. E. (2000). Multiple-stimulus preference assessments: A comparison of free-operant and restricted-operant formats. Behavioral Interventions, 15, 345–353.

Osborne, J. G. (1969). Free-time as a reinforcer in the management of classroom behavior. Journal of Applied Behavior Analysis, 2, 113–118.

Osgood, C. E. (1953). Method and theory in experimental psychology. New York: Oxford University Press.

Osnes, P. G., Guevremont, D. C., & Stokes, T. F. (1984). If I say I’ll talk more, then I will: Correspondence training to increase peer-directed talk by socially withdrawn children. Behavior Modification, 10, 287–299.

Osnes, P. G., & Lieblein, T. (2003). An explicit technology of generalization. The Behavior Analyst Today, 3, 364–374.

Overton, T. (2006). Assessing learners with special needs: An applied approach (5th ed.). Upper Saddle River, NJ: Prentice Hall.

Owens, R. E. (2001). Language development: An introduction (5th ed.). Boston: Allyn & Bacon.

Pace, G. M., & Troyer, E. A. (2000). The effects of a vitamin supplement on the pica of a child with severe mental retardation. Journal of Applied Behavior Analysis, 33, 619–622.

Pace, G. M., Ivancic, M. T., Edwards, G. L., Iwata, B. A., & Page, T. J. (1985). Assessment of stimulus preference and reinforcer value with profoundly retarded individuals. Journal of Applied Behavior Analysis, 18, 249–255.

Paclawskyj, T. R., & Vollmer, T. R. (1995). Reinforcer assessment for children with developmental disabilities and visual impairments. Journal of Applied Behavior Analysis, 28, 219–224.

Paclawskyj, T., Matson, J., Rush, K., Smalls, Y., & Vollmer, T. (2000). Questions About Behavioral Function (QABF): A behavioral checklist for functional assessment of aberrant behavior. Research in Developmental Disabilities, 21, 223–229.

Page, T. J., & Iwata, B. A. (1986). Interobserver agreement: History, theory, and current methods. In A. Poling & R. W. Fuqua (Eds.), Research methods in applied behavior analysis (pp. 92–126). New York: Plenum Press.

Palmer, D. C. (1991). A behavioral interpretation of memory. In L. J. Hayes & P. N. Chase (Eds.), Dialogues on verbal behavior (pp. 261–279). Reno, NV: Context Press.

Palmer, D. C. (1996). Achieving parity: The role of automatic reinforcement. Journal of the Experimental Analysis of Behavior, 65, 289–290.

Palmer, D. C. (1998). On Skinner’s rejection of S-R psychology. The Behavior Analyst, 21, 93–96.

Palmer, D. C. (1998). The speaker as listener: The interpretation of structural regularities in verbal behavior. The Analysis of Verbal Behavior, 15, 3–16.

Panyan, M., Boozer, H., & Morris, N. (1970). Feedback to attendants as a reinforcer for applying operant techniques. Journal of Applied Behavior Analysis, 3, 1–4.

Parker, L. H., Cataldo, M. F., Bourland, G., Emurian, C. S., Corbin, R. J., & Page, J. M. (1984). Operant treatment of orofacial dysfunction in neuromuscular disorders. Journal of Applied Behavior Analysis, 17, 413–427.

Parrott, L. J. (1984). Listening and understanding. The Behavior Analyst, 7, 29–39.

Parsons, M. B., Reid, D. H., Reynolds, J., & Bumgarner, M. (1990). Effects of chosen versus assigned jobs on the work performance of persons with severe handicaps. Journal of Applied Behavior Analysis, 23, 253–258.

Parsonson, B. S. (2003). Visual analysis of graphs: Seeing is believing. In K. S. Budd & T. Stokes (Eds.), A small matter of proof: The legacy of Donald M. Baer (pp. 35–51). Reno, NV: Context Press.

Parsonson, B. S., & Baer, D. M. (1978). The analysis and presentation of graphic data. In T. R. Kratochwill (Ed.), Single subject research: Strategies for evaluating change (pp. 101–165). New York: Academic Press.

Parsonson, B. S., & Baer, D. M. (1986). The graphic analysis of data. In A. Poling & R. W. Fuqua (Eds.), Research methods in applied behavior analysis (pp. 157–186). New York: Plenum Press.

Parsonson, B. S., & Baer, D. M. (1992). The visual analysis of graphic data, and current research into the stimuli controlling it. In T. R. Kratochwill & J. R. Levin (Eds.), Single subject research design and analysis: New directions for psychology and education (pp. 15–40). New York: Academic Press.

Partington, J. W., & Bailey, J. S. (1993). Teaching intraverbal behavior to preschool children. The Analysis of Verbal Behavior, 11, 9–18.

Partington, J. W., & Sundberg, M. L. (1998). The assessment of basic language and learning skills (The ABLLS). Pleasant Hill, CA: Behavior Analysts, Inc.

Patel, M. R., Piazza, C. C., Kelly, M. L., Ochsner, C. A., & Santana, C. M. (2001). Using a fading procedure to increase fluid consumption in a child with feeding problems. Journal of Applied Behavior Analysis, 34, 357–360.

Patel, M. R., Piazza, C. C., Martinez, C. J., Volkert, V. M., & Santana, C. M. (2002). An evaluation of two differential reinforcement procedures with escape extinction to treat food refusal. Journal of Applied Behavior Analysis, 35, 363–374.

Patterson, G. R. (1982). Coercive family process. Eugene, OR: Castalia.

Patterson, G. R., Reid, J. B., & Dishion, T. J. (1992). Antisocial boys. Vol. 4: A social interactional approach. Eugene, OR: Castalia.

Pavlov, I. P. (1927). Conditioned reflexes: An investigation of the physiological activity of the cerebral cortex (W. H. Gantt, Trans.). London: Oxford University Press.

Pavlov, I. P. (1927/1960). Conditioned reflexes (G. V. Anrep, Trans.). New York: Dover.

Pelios, L., Morren, J., Tesch, D., & Axelrod, S. (1999). The impact of functional analysis methodology on treatment choice for self-injurious and aggressive behavior. Journal of Applied Behavior Analysis, 32, 185–195.

Pennypacker, H. S. (1981). On behavioral analysis. The Behavior Analyst, 3, 159–161.

Pennypacker, H. S. (1994). A selectionist view of the future of behavior analysis in education. In R. Gardner, III, D. M. Sainato, J. O. Cooper, T. E. Heron, W. L. Heward, J. Eshleman, & T. A. Grossi (Eds.), Behavior analysis in education: Focus on measurably superior instruction (pp. 11–18). Monterey, CA: Brooks/Cole.

Pennypacker, H. S., & Hench, L. L. (1997). Making behavioral technology transferable. The Behavior Analyst, 20, 97–108.

Pennypacker, H. S., Gutierrez, A., & Lindsley, O. R. (2003). Handbook of the Standard Celeration Chart. Gainesville, FL: Xerographics.

Pennypacker, H. S., Koenig, C., & Lindsley, O. (1972). Handbook of the Standard Behavior Chart. Kansas City: Precision Media.

Peters, M., & Heron, T. E. (1993). When the best is not good enough: An examination of best practice. Journal of Special Education, 26(4), 371–385.

Peters, R., & Davies, K. (1981). Effects of self-instructional training on cognitive impulsivity of mentally retarded adolescents. American Journal of Mental Deficiency, 85, 377–382.

Peterson, L., Homer, A. L., & Wonderlich, S. A. (1982). The integrity of independent variables in behavior analysis. Journal of Applied Behavior Analysis, 15, 477–492.

Peterson, N. (1978). An introduction to verbal behavior. Grand Rapids, MI: Behavior Associates, Inc.

Peterson, S. M., Neef, N. A., Van Norman, R., & Ferreri, S. J. (2005). Choice making in educational settings. In W. L. Heward, T. E. Heron, N. A. Neef, S. M. Peterson, D. M. Sainato, G. Cartledge, R. Gardner, III, L. D. Peterson, S. B. Hersh, & J. C. Dardig (Eds.), Focus on behavior analysis in education: Achievements, challenges, and opportunities (pp. 125–136). Upper Saddle River, NJ: Merrill/Prentice Hall.

Pelaez-Nogueras, M., Gewirtz, J. L., Field, T., Cigales, M., Malphurs, J., Clasky, S., & Sanchez, A. (1996). Infants’ preference for touch stimulation in face-to-face interactions. Journal of Applied Developmental Psychology, 17, 199–213.

Pfadt, A., & Wheeler, D. J. (1995). Using statistical process control to make data-based clinical decisions. Journal of Applied Behavior Analysis, 28, 349–370.

Phillips, E. L., Phillips, E. A., Fixsen, D. L., & Wolf, M. M. (1971). Achievement Place: Modification of the behaviors of predelinquent boys with a token economy. Journal of Applied Behavior Analysis, 4, 45–59.

Piaget, J. (1952). The origins of intelligence in children (M. Cook, Trans.). New York: International University Press.

Piazza, C. C., & Fisher, W. (1991). A faded bedtime with response cost protocol for treatment of multiple sleep problems in children. Journal of Applied Behavior Analysis, 24, 129–140.

Piazza, C. C., Bowman, L. G., Contrucci, S. A., Delia, M. D., Adelinis, J. D., & Goh, H.-L. (1999). An evaluation of the properties of attention as reinforcement for destructive and appropriate behavior. Journal of Applied Behavior Analysis, 32, 437–449.

Piazza, C. C., Fisher, W. W., Hagopian, L. P., Bowman, L. G., & Toole, L. (1996). Using a choice assessment to predict reinforcer effectiveness. Journal of Applied Behavior Analysis, 29, 1–9.

Piazza, C. C., Moses, D. R., & Fisher, W. W. (1996). Differential reinforcement of alternative behavior and demand fading in the treatment of escape-maintained destructive behavior. Journal of Applied Behavior Analysis, 29, 569–572.

Piazza, C. C., Roane, H. S., Kenney, K. M., Boney, B. R., & Abt, K. A. (2002). Varying response effort in the treatment of pica maintained by automatic reinforcement. Journal of Applied Behavior Analysis, 35, 233–246.

Pierce, K. L., & Schreibman, L. (1994). Teaching daily living skills to children with autism in unsupervised settings through pictorial self-management. Journal of Applied Behavior Analysis, 27, 471–481.

Pierce, W. D., & Epling, W. F. (1999). Behavior analysis and learning (2nd ed.). Upper Saddle River, NJ: Prentice Hall/Merrill.

Pindiprolu, S. S., Peterson, S. M. P., Rule, S., & Lignugaris/Kraft, B. (2003). Using web-mediated experiential case-based instruction to teach functional behavioral assessment skills. Teacher Education and Special Education, 26, 1–16.

Pinker, S. (1994). The language instinct. New York: Harper Perennial.

Pinkston, E. M., Reese, N. M., LeBlanc, J. M., & Baer, D. M. (1973). Independent control of a preschool child’s aggression and peer interaction by contingent teacher attention. Journal of Applied Behavior Analysis, 6, 115–124.

Poche, C., Brouwer, R., & Swearingen, M. (1981). Teaching self-protection to young children. Journal of Applied Behavior Analysis, 14, 169–176.

Poling, A., & Normand, M. (1999). Noncontingent reinforcement: An inappropriate description of time-based schedules that reduce behavior. Journal of Applied Behavior Analysis, 32, 237–238.

Poling, A., & Ryan, C. (1982). Differential-reinforcement-of-other-behavior schedules: Therapeutic applications. Behavior Modification, 6, 3–21.

Poling, A., Methot, L. L., & LeSage, M. G. (Eds.). (1995). Fundamentals of behavior analytic research. New York: Plenum Press.

Popkin, J., & Skinner, C. H. (2003). Enhancing academic performance in a classroom serving students with serious emotional disturbance: Interdependent group contingencies with randomly selected components. School Psychology Review, 32(2), 282–296.

Post, M., Storey, K., & Karabin, M. (2002). Cool headphones for effective prompts: Supporting students and adults in work and community environments. Teaching Exceptional Children, 34, 60–65.

Potts, L., Eshleman, J. W., & Cooper, J. O. (1993). Ogden R. Lindsley and the historical development of Precision Teaching. The Behavior Analyst, 16(2), 177–189.

Poulson, C. L. (1983). Differential reinforcement of other-than-vocalization as a control procedure in the conditioning of infant vocalization rate. Journal of Experimental Child Psychology, 36, 471–489.

Powell, J., & Azrin, N. (1968). Behavioral engineering: Postural control by a portable operant apparatus. Journal of Applied Behavior Analysis, 1, 63–71.

Powell, J., Martindale, B., & Kulp, S. (1975). An evaluation of time-sample measures of behavior. Journal of Applied Behavior Analysis, 8, 463–469.

Powell, J., Martindale, B., Kulp, S., Martindale, A., & Bauman, R. (1977). Taking a closer look: Time sampling and measurement error. Journal of Applied Behavior Analysis, 10, 325–332.

Powell, T. H., & Powell, I. Q. (1982). The use and abuse of using the timeout procedure for disruptive pupils. The Pointer, 26, 18–22.

Powers, R. B., Osborne, J. G., & Anderson, E. G. (1973). Positive reinforcement of litter removal in the natural environment. Journal of Applied Behavior Analysis, 6, 579–586.

Premack, D. (1959). Toward empirical behavioral laws: I. Positive reinforcement. Psychological Review, 66, 219–233.

President’s Council on Physical Fitness. www.fitness.gov.

Progar, P. R., North, S. T., Bruce, S. S., Dinovi, B. J., Nau, P. A., Eberman, E. M., Bailey, J. R., Jr., & Nussbaum, C. N. (2001). Putative behavioral history effects and aggression maintained by escape from therapists. Journal of Applied Behavior Analysis, 34, 69–72.

Pryor, K. (1999). Don’t shoot the dog! The new art of teaching and training (rev. ed.). New York: Bantam Books.

Pryor, K. (2005). Clicker trained flight instruction. Retrieved April 10, 2005, from http://clickertraining.com/training/humans/job/flight_training

Pryor, K., Haag, R., & O’Reilly, J. (1969). The creative porpoise: Training for novel behavior. Journal of the Experimental Analysis of Behavior, 12, 653–661.

Pryor, K., & Norris, K. S. (1991). Dolphin societies: Discoveries and puzzles. Berkeley: University of California Press.

Rachlin, H. (1970). The science of self-control. Cambridge: Harvard University Press.

Rachlin, H. (1974). Self-control. Behaviorism, 2, 94–107.

Rachlin, H. (1977). Introduction to modern behaviorism (2nd ed.). San Francisco: W. H. Freeman.

Rachlin, H. (1995). Self-control. Cambridge: Harvard University Press.

Rapp, J. T., Miltenberger, R. G., & Long, E. S. (1998). Augmenting simplified habit reversal with an awareness enhancement device. Journal of Applied Behavior Analysis, 31, 665–668.

Rapp, J. T., Miltenberger, R. G., Galensky, T. L., Ellingson, S. A., & Long, E. S. (1999). A functional analysis of hair pulling. Journal of Applied Behavior Analysis, 32, 329–337.

Rasey, H. W., & Iversen, I. H. (1993). An experimental acquisition of maladaptive behaviors by shaping. Journal of Behavior Therapy and Experimental Psychiatry, 24, 37–43.

Readdick, C. A., & Chapman, P. L. (2000). Young children’s perceptions of time out. Journal of Research in Childhood Education, 15(1), 81–87.

Reese, E. P. (1966). The analysis of human operant behavior. Dubuque, IA: Brown.

Rehfeldt, R. A., & Chambers, M. C. (2003). Functional analysis and treatment of verbal perseverations displayed by an adult with autism. Journal of Applied Behavior Analysis, 36, 259–261.

Reich, W. T. (1988). Experiential ethics as a foundation for dialogue between health communications and health-care ethics. Journal of Applied Communication Research, 16, 16–28.

Reid, D. H., Parsons, M. B., Green, C. W., & Browning, L. B. (2001). Increasing one aspect of self-determination among adults with severe multiple disabilities in supported work. Journal of Applied Behavior Analysis, 34, 341–344.

Reid, D. H., Parsons, M. B., Phillips, J. F., & Green, C. W. (1993). Reduction of self-injurious hand mouthing using response blocking. Journal of Applied Behavior Analysis, 26, 139–140.

Reid, R., & Harris, K. R. (1993). Self-monitoring of attention versus self-monitoring of performance: Effects on attention and academic performance. Exceptional Children, 60, 29–40.

Reimers, T. M., & Wacker, D. P. (1988). Parents’ ratings of the acceptability of behavior treatment recommendations made in an outpatient clinic: A preliminary analysis of the influence of treatment effectiveness. Behavioral Disorders, 14, 7–15.

Reitman, D., & Drabman, R. S. (1999).Multifaceted uses of a simple timeoutrecord in the treatment of a noncom-pliant 8-year-old boy. Education and Treatment of Children, 22 (2),136–145.

Reitman, D., & Gross, A. M. (1996). De-layed outcomes and rule-governed be-havior among “noncompliant” and“compliant” boys: A replication and ex-tension. The Analysis of Verbal Behav-ior, 13, 65–77.

Repp, A. C., & Horner, R. H. (Eds.). (1999). Functional analysis of problem behavior: From effective assessment to effective support. Belmont, CA: Wadsworth.

Repp, A. C., & Karsh, K. G. (1994). Laptop computer system for data recording and contextual analyses. In T. Thompson & D. B. Gray (Eds.), Destructive behavior in developmental disabilities: Diagnosis and treatment (pp. 83–101). Thousand Oaks, CA: Sage.

Repp, A. C., & Singh, N. N. (Eds.). (1990). Perspectives on the use of nonaversive and aversive interventions for persons with developmental disabilities. Sycamore, IL: Sycamore.

Repp, A. C., Barton, L. E., & Brulle, A. R. (1983). A comparison of two procedures for programming the differential reinforcement of other behaviors. Journal of Applied Behavior Analysis, 16, 435–445.

Repp, A. C., Dietz, D. E. D., Boles, S. M., Dietz, S. M., & Repp, C. F. (1976). Differences among common methods for calculating interobserver agreement. Journal of Applied Behavior Analysis, 9, 109–113.

Repp, A. C., Harman, M. L., Felce, D., Vanacker, R., & Karsh, K. L. (1989). Conducting behavioral assessments on computer collected data. Behavioral Assessment, 2, 249–268.

Repp, A. C., Karsh, K. G., Johnson, J. W., & VanLaarhoven, T. (1994). A comparison of multiple versus single examples of the correct stimulus on task acquisition and generalization by persons with developmental disabilities. Journal of Behavioral Education, 6, 213–230.

Rescorla, R. (1988). Pavlovian conditioning: It's not what you think it is. American Psychologist, 43, 151–160.

Reynolds, G. S. (1961). Behavioral contrast. Journal of the Experimental Analysis of Behavior, 4, 57–71.

Reynolds, G. S. (1968). A primer of operant conditioning. Glenview, IL: Scott, Foresman.

Reynolds, G. S. (1975). A primer of operant conditioning (Rev. ed.). Glenview, IL: Scott, Foresman.

Reynolds, N. J., & Risley, T. R. (1968). The role of social and material reinforcers in increasing talking of a disadvantaged preschool child. Journal of Applied Behavior Analysis, 1, 253–262.

Rhode, G., Morgan, D. P., & Young, K. R. (1983). Generalization and maintenance of treatment gains of behaviorally handicapped students from resource rooms to regular classrooms using self-evaluation procedures. Journal of Applied Behavior Analysis, 16, 171–188.

Richman, D. M., Berg, W. K., Wacker, D. P., Stephens, T., Rankin, B., & Kilroy, J. (1997). Using pretreatment assessments to enhance and evaluate existing treatment packages. Journal of Applied Behavior Analysis, 30, 709–712.

Richman, D. M., Wacker, D. P., Asmus, J. M., Casey, S. D., & Andelman, M. (1999). Further analysis of problem behavior in response class hierarchies. Journal of Applied Behavior Analysis, 32, 269–283.

Ricketts, R. W., Goza, A. B., & Matese, M. (1993). A 4-year follow-up of treatment of self-injury. Journal of Behavior Therapy and Experimental Psychiatry, 24(1), 57–62.

Rincover, A. (1978). Sensory extinction: A procedure for eliminating self-stimulatory behavior in psychotic children. Journal of Abnormal Child Psychology, 6, 299–310.

Rincover, A. (1981). How to use sensory extinction. Austin, TX: Pro-Ed.

Rincover, A., & Koegel, R. L. (1975). Setting generality and stimulus control in autistic children. Journal of Applied Behavior Analysis, 8, 235–246.

Rincover, A., & Newsom, C. D. (1985). The relative motivational properties of sensory reinforcement with psychotic children. Journal of Experimental Child Psychology, 24, 312–323.

Rincover, A., Cook, R., Peoples, A., & Packard, D. (1979). Sensory extinction and sensory reinforcement principles for programming multiple adaptive behavior change. Journal of Applied Behavior Analysis, 12, 221–233.

Rindfuss, J. B., Al-Attrash, M., Morrison, H., & Heward, W. L. (1998, May). Using guided notes and response cards to improve quiz and exam scores in an eighth grade American history class. Paper presented at the 24th Annual Convention of the Association for Behavior Analysis, Orlando, FL.

Ringdahl, J. E., Kitsukawa, K., Andelman, M. S., Call, N., Winborn, L. C., Barretto, A., & Reed, G. K. (2002). Differential reinforcement with and without instructional fading. Journal of Applied Behavior Analysis, 35, 291–294.

Ringdahl, J. E., Vollmer, T. R., Borrero, J. C., & Connell, J. E. (2001). Fixed-time schedule effects as a function of baseline reinforcement rate. Journal of Applied Behavior Analysis, 34, 1–15.

Ringdahl, J. E., Vollmer, T. R., Marcus, B. A., & Roane, H. S. (1997). An analogue evaluation of environmental enrichment: The role of stimulus preference. Journal of Applied Behavior Analysis, 30, 203–216.

Riordan, M. M., Iwata, B. A., Finney, J. W., Wohl, M. K., & Stanley, A. E. (1984). Behavioral assessment and treatment of chronic food refusal in handicapped children. Journal of Applied Behavior Analysis, 17, 327–341.

Risley, T. (1996). Get a life! In L. Kern Koegel, R. L. Koegel, & G. Dunlap (Eds.), Positive behavioral support (pp. 425–437). Baltimore: Paul H. Brookes.

Risley, T. R. (1968). The effects and side effects of punishing the autistic behaviors of a deviant child. Journal of Applied Behavior Analysis, 1, 21–34.

Risley, T. R. (1969, April). Behavior modification: An experimental-therapeutic endeavor. Paper presented at the Banff International Conference on Behavior Modification, Banff, Alberta, Canada.

Risley, T. R. (1997). Montrose M. Wolf: The origin of the dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 30, 377–381.

Risley, T. R. (2005). Montrose M. Wolf (1935–2004). Journal of Applied Behavior Analysis, 38, 279–287.

Risley, T. R., & Hart, B. (1968). Developing correspondence between the nonverbal and verbal behavior of preschool children. Journal of Applied Behavior Analysis, 1, 267–281.

Roane, H. S., Fisher, W. W., & McDonough, E. M. (2003). Progressing from programmatic to discovery research: A case example with the overjustification effect. Journal of Applied Behavior Analysis, 36, 23–36.

Roane, H. S., Kelly, M. L., & Fisher, W. W. (2003). The effects of noncontingent access to food on the rate of object mouthing across three settings. Journal of Applied Behavior Analysis, 36, 579–582.

Roane, H. S., Lerman, D. C., & Vorndran, C. M. (2001). Assessing reinforcers under progressive schedule requirements. Journal of Applied Behavior Analysis, 34, 145–167.

Roane, H. S., Vollmer, T. R., Ringdahl, J. E., & Marcus, B. A. (1998). Evaluation of a brief stimulus preference assessment. Journal of Applied Behavior Analysis, 31, 605–620.

Roberts-Pennell, D., & Sigafoos, J. (1999). Teaching young children with developmental disabilities to request more play using the behavior chain interruption strategy. Journal of Applied Research in Intellectual Disabilities, 12, 100–112.

Robin, A. L., Armel, S., & O'Leary, K. D. (1975). The effects of self-instruction on writing deficiencies. Behavior Therapy, 6, 178–187.

Robin, A., Schneider, M., & Dolnick, M. (1976). The turtle technique: An extended case study of self-control in the classroom. Psychology in the Schools, 13, 449–453.

Robinson, P. W., Newby, T. J., & Gansell, S. L. (1981). A token system for a class of underachieving hyperactive children. Journal of Applied Behavior Analysis, 14, 307–315.

Rodgers, T. A., & Iwata, B. A. (1991). An analysis of error-correction procedures during discrimination training. Journal of Applied Behavior Analysis, 24, 775–781.

Rohn, D. (2002). Case study: Improving guitar skills. In R. W. Malott & H. Harrison, I'll stop procrastinating when I get around to it: Plus other cool ways to succeed in school and life using behavior analysis to get your act together (p. 8-4). Kalamazoo, MI: Department of Psychology, Western Michigan University.

Rolider, A., & Van Houten, R. (1984). The effects of DRO alone and DRO plus reprimands on the undesirable behavior of three children in home settings. Education and Treatment of Children, 7, 17–31.

Rolider, A., & Van Houten, R. (1984). Training parents to use extinction to eliminate nighttime crying by gradually increasing the criteria for ignoring crying. Education and Treatment of Children, 7, 119–124.

Rolider, A., & Van Houten, R. (1985). Suppressing tantrum behavior in public places through the use of delayed punishment mediated by audio recordings. Behavior Therapy, 16, 181–194.

Rolider, A., & Van Houten, R. (1993). The interpersonal treatment model. In R. Van Houten & S. Axelrod (Eds.), Behavior analysis and treatment (pp. 127–168). New York: Plenum Press.

Romanczyk, R. G. (1977). Intermittent punishment of self-stimulation: Effectiveness during application and extinction. Journal of Consulting and Clinical Psychology, 45, 53–60.

Romaniuk, C., Miltenberger, R., Conyers, C., Jenner, N., Jurgens, M., & Ringenberg, C. (2002). The influence of activity choice on problem behaviors maintained by escape versus attention. Journal of Applied Behavior Analysis, 35, 349–362.

Romeo, F. F. (1998). The negative effects of using a group contingency system of classroom management. Journal of Instructional Psychology, 25(2), 130–133.

Romer, L. T., Cullinan, T., & Schoenberg, B. (1994). General case training of requesting: A demonstration and analysis. Education and Training in Mental Retardation, 29, 57–68.

Rosales-Ruiz, J., & Baer, D. M. (1997). Behavioral cusps: A developmental and pragmatic concept for behavior analysis. Journal of Applied Behavior Analysis, 30, 533–544.

Roscoe, E. M., Iwata, B. A., & Goh, H.-L. (1998). A comparison of noncontingent reinforcement and sensory extinction as treatments for self-injurious behavior. Journal of Applied Behavior Analysis, 31, 635–646.

Roscoe, E. M., Iwata, B. A., & Kahng, S. (1999). Relative versus absolute reinforcement effects: Implications for preference assessments. Journal of Applied Behavior Analysis, 32, 479–493.

Rose, J. C., De Souza, D. G., & Hanna, E. S. (1996). Teaching reading and spelling: Exclusion and stimulus equivalence. Journal of Applied Behavior Analysis, 29, 451–469.

Rose, T. L. (1978). The functional relationship between artificial food colors and hyperactivity. Journal of Applied Behavior Analysis, 11, 439–446.

Ross, C., & Neuringer, A. (2002). Reinforcement of variations and repetitions along three independent response dimensions. Behavioral Processes, 57, 199–209.

Rozensky, R. H. (1974). The effect of timing of self-monitoring behavior on reducing cigarette consumption. Journal of Consulting and Clinical Psychology, 5, 301–307.

Rusch, F. R., & Kazdin, A. E. (1981). Toward a methodology of withdrawal designs for the assessment of response maintenance. Journal of Applied Behavior Analysis, 14, 131–140.

Russell, B., & Whitehead, A. N. (1910–1913). Principia mathematica. Cambridge, England: Cambridge University Press.

Ruth, W. J. (1996). Goal setting and behavioral contracting for students with emotional and behavioral difficulties: Analysis of daily, weekly, and total goal attainment. Psychology in the Schools, 33, 153–158.

Ryan, C. S., & Hemmes, N. S. (2005). Effects of the contingency for homework submission on homework submission and quiz performance in a college course. Journal of Applied Behavior Analysis, 38, 79–88.

Ryan, S., Ormond, T., Imwold, C., & Rotunda, R. J. (2002). The effects of a public address system on the off-task behavior of elementary physical education students. Journal of Applied Behavior Analysis, 35, 305–308.

Sagan, C. (1996). The demon-haunted world: Science as a candle in the dark. New York: Ballantine.

Saigh, P. A., & Umar, A. M. (1983). The effects of a good behavior game on the disruptive behavior of Sudanese elementary school students. Journal of Applied Behavior Analysis, 16, 339–344.

Sainato, D. M., Strain, P. S., Lefebvre, D., & Rapp, N. (1990). Effects of self-evaluation on the independent work skills of preschool children with disabilities. Exceptional Children, 56, 540–549.

Sajwaj, T., Culver, P., Hall, C., & Lehr, L. (1972). Three simple punishment techniques for the control of classroom disruptions. In G. Semb (Ed.), Behavior analysis and education. Lawrence: University of Kansas.

Saksida, L. M., Raymond, S. M., & Touretzky, D. S. (1997). Shaping robot behavior using principles from instrumental conditioning. Robotics and Autonomous Systems, 22, 231–249.

Salend, S. J. (1984b). Integrity of treatment in special education research. Mental Retardation, 22, 309–315.

Salend, S. J., Ellis, L. L., & Reynolds, C. J. (1989). Using self-instruction to teach vocational skills to individuals who are severely retarded. Education and Training of the Mentally Retarded, 24, 248–254.

Salvy, S.-J., Mulick, J. A., Butter, E., Bartlett, R. K., & Linscheid, T. R. (2004). Contingent electric shock (SIBIS) and a conditioned punisher eliminate severe head banging in a preschool child. Behavioral Interventions, 19, 59–72.

Salzinger, K. (1978). Language behavior. In A. C. Catania & T. A. Brigham (Eds.), Handbook of applied behavior analysis: Social and instructional processes (pp. 275–321). New York: Irvington.

Santogrossi, D. A., O'Leary, K. D., Romanczyk, R. G., & Kaufman, K. F. (1973). Self-evaluation by adolescents in a psychiatric hospital school token program. Journal of Applied Behavior Analysis, 6, 277–287.

Sarokoff, R. A., & Sturmey, P. (2004). The effects of behavioral skills training on staff implementation of discrete-trial teaching. Journal of Applied Behavior Analysis, 37, 535–538.

Sarokoff, R. A., Taylor, B. A., & Poulson, C. L. (2001). Teaching children with autism to engage in conversational exchanges: Script fading with embedded textual stimuli. Journal of Applied Behavior Analysis, 34, 81–84.

Sasso, G. M., Reimers, T. M., Cooper, L. J., Wacker, D., Berg, W., Steege, M., Kelly, L., & Allaire, A. (1992). Use of descriptive and experimental analyses to identify the functional properties of aberrant behavior in school settings. Journal of Applied Behavior Analysis, 25, 809–821.

Saudargas, R. A., & Bunn, R. D. (1989). A hand-held computer system for classroom observation. Journal of Special Education, 9, 200–206.

Saudargas, R. A., & Zanolli, K. (1990). Momentary time sampling as an estimate of percentage time: A field validation. Journal of Applied Behavior Analysis, 23, 533–537.

Saunders, M. D., Saunders, J. L., & Saunders, R. R. (1994). Data collection with bar code technology. In T. Thompson & D. B. Gray (Eds.), Destructive behavior in developmental disabilities: Diagnosis and treatment (pp. 102–116). Thousand Oaks, CA: Sage.

Savage, T. (1998). Shaping: The link between rats and robots. Connection Science, 10(3/4), 321–340.

Savage-Rumbaugh, E. S. (1984). Verbal behavior at the procedural level in the chimpanzee. Journal of the Experimental Analysis of Behavior, 41, 223–250.

Saville, B. K., Beal, S. A., & Buskist, W. (2002). Essential readings for graduate students in behavior analysis: A survey of the JEAB and JABA Boards of Editors. The Behavior Analyst, 25, 29–35.

Schepis, M. M., Reid, D. H., Behrmann, M. M., & Sutton, K. A. (1998). Increasing communicative interactions of young children with autism using a voice output communication aid and naturalistic teaching. Journal of Applied Behavior Analysis, 31, 561–578.

Scheuermann, B., & Webber, J. (1996). Level systems: Problems and solutions. Beyond Behavior, 7, 12–17.

Schleien, S. J., Wehman, P., & Kiernan, J. (1981). Teaching leisure skills to severely handicapped adults: An age-appropriate darts game. Journal of Applied Behavior Analysis, 14, 513–519.

Schlinger, H., & Blakely, E. (1987). Function-altering effects of contingency-specifying stimuli. The Behavior Analyst, 10, 41–45.

Schoenberger, T. (1990). Understanding and the listener: Conflicting views. The Analysis of Verbal Behavior, 8, 141–150.

Schoenberger, T. (1991). Verbal understanding: Integrating the conceptual analyses of Skinner, Ryle, and Wittgenstein. The Analysis of Verbal Behavior, 9, 145–151.

Schoenfeld, W. N. (1995). "Reinforcement" in behavior theory. The Behavior Analyst, 18, 173–185.

Schumm, J. S., Vaughn, D., Haager, D., McDowell, J., Rothlein, L., & Saumell, L. (1995). General education teacher planning: What can students with learning disabilities expect? Exceptional Children, 61, 335–352.

Schuster, J. W., Griffen, A. K., & Wolery, M. (1992). Comparison of simultaneous prompting and constant time delay procedures in teaching sight words to elementary students with moderate mental retardation. Journal of Behavioral Education, 7, 305–325.

Schwartz, B. (1974). On going back to nature: A review of Seligman and Hager's Biological Boundaries of Learning. Journal of the Experimental Analysis of Behavior, 21, 183–198.

Schwartz, I. S., & Baer, D. M. (1991). Social validity assessments: Is current practice state of the art? Journal of Applied Behavior Analysis, 24, 189–204.

Schwarz, M. L., & Hawkins, R. P. (1970). Application of delayed reinforcement procedures to the behavior of an elementary school child. Journal of Applied Behavior Analysis, 3, 85–96.

Schweitzer, J. B., & Sulzer-Azaroff, B. (1988). Self-control: Teaching tolerance for delay in impulsive children. Journal of the Experimental Analysis of Behavior, 50, 173–186.

Scott, D., Scott, L. M., & Goldwater, B. (1997). A performance improvement program for an international-level track and field athlete. Journal of Applied Behavior Analysis, 30, 573–575.

Seymour, F. W., & Stokes, T. F. (1976). Self-recording in training girls to increase work rate and evoke staff praise in an institution for offenders. Journal of Applied Behavior Analysis, 9, 41–54.

Seymour, M. A. (2002). Case study: A retired athlete runs down comeback road. In R. W. Malott & H. Harrison, I'll stop procrastinating when I get around to it: Plus other cool ways to succeed in school and life using behavior analysis to get your act together (p. 7-12). Kalamazoo, MI: Department of Psychology, Western Michigan University.

Shahan, T. A., & Chase, P. N. (2002). Novelty, stimulus control, and operant variability. The Behavior Analyst, 25, 175–190.

Shermer, M. (1997). Why people believe weird things. New York: W. H. Freeman.

Shimamune, S., & Jitsumori, M. (1999). Effects of grammar instruction and fluency training on the learning of the and a by native speakers of Japanese. The Analysis of Verbal Behavior, 16, 3–16.

Shimmel, S. (1977). Anger and its control in Greco-Roman and modern psychology. Psychiatry, 42, 320–327.

Shimmel, S. (1979). Free will, guilt, and self-control in rabbinic Judaism and contemporary psychology. Judaism, 26, 418–429.

Shimoff, E., & Catania, A. C. (1995). Using computers to teach behavior analysis. The Behavior Analyst, 18, 307–316.

Shirley, M. J., Iwata, B. A., Kahng, S. W., Mazaleski, J. L., & Lerman, D. C. (1997). Does functional communication training compete with ongoing contingencies of reinforcement? An analysis during response acquisition and maintenance. Journal of Applied Behavior Analysis, 30, 93–104.

Shook, G. L. (1993). The professional credential in behavior analysis. The Behavior Analyst, 16, 87–101.

Shook, G. L., & Favell, J. E. (1996). Identifying qualified professionals in behavior analysis. In C. Maurice, G. Green, & S. C. Luce (Eds.), Behavioral intervention for young children with autism: A manual for parents and professionals (pp. 221–229). Austin, TX: Pro-Ed.

Shook, G. L., & Neisworth, J. (2005). Ensuring appropriate qualifications for applied behavior analyst professionals: The Behavior Analyst Certification Board. Exceptionality, 13(1), 3–10.

Shook, G. L., Johnston, J. M., & Mellichamp, F. (2004). Determining essential content for applied behavior analyst practitioners. The Behavior Analyst, 27, 67–94.

Shook, G. L., Rosales, S. A., & Glenn, S. (2002). Certification and training of behavior analyst professionals. Behavior Modification, 26(1), 27–48.

Shore, B. A., Iwata, B. A., DeLeon, I. G., Kahng, S., & Smith, R. G. (1997). An analysis of reinforcer substitutability using object manipulation and self-injury as competing responses. Journal of Applied Behavior Analysis, 30, 439–449.

Sideridis, G. D., & Greenwood, C. R. (1996). Evaluating treatment effects in single-subject behavioral experiments using quality-control charts. Journal of Behavioral Education, 6, 203–211.

Sidman, M. (1960). Tactics of scientific research. New York: Basic Books.

Sidman, M. (1960/1988). Tactics of scientific research: Evaluating experimental data in psychology. New York: Basic Books; Boston: Authors Cooperative (reprinted).

Sidman, M. (1971). Reading and auditory-visual equivalences. Journal of Speech and Hearing Research, 14, 5–13.

Sidman, M. (1994). Equivalence relations and behavior: A research story. Boston: Authors Cooperative.

Sidman, M. (2000). Applied behavior analysis: Back to basics. Behaviorology, 5(1), 15–37.

Sidman, M. (2002). Notes from the beginning of time. The Behavior Analyst, 25, 3–13.

Sidman, M. (2006). The distinction between positive and negative reinforcement: Some additional considerations. The Behavior Analyst, 29, 135–139.

Sidman, M., & Cresson, O., Jr. (1973). Reading and crossmodal transfer of stimulus equivalences in severe retardation. American Journal of Mental Deficiency, 77, 515–523.

Sidman, M., & Stoddard, L. T. (1967). The effectiveness of fading in programming a simultaneous form discrimination for retarded children. Journal of the Experimental Analysis of Behavior, 10, 3–15.

Sidman, M., & Tailby, W. (1982). Conditional discrimination vs. matching to sample: An expansion of the testing paradigm. Journal of the Experimental Analysis of Behavior, 37, 5–22.

Sigafoos, J., Doss, S., & Reichle, J. (1989). Developing mand and tact repertoires in persons with severe developmental disabilities using graphic symbols. Research in Developmental Disabilities, 11, 165–176.

Sigafoos, J., Kerr, M., & Roberts, D. (1994). Interrater reliability of the Motivation Assessment Scale: Failure to replicate with aggressive behavior. Research in Developmental Disabilities, 15, 333–342.

Silvestri, S. M. (2004). The effects of self-scoring on teachers' positive statements during classroom instruction. Unpublished doctoral dissertation. Columbus, OH: The Ohio State University.

Silvestri, S. M. (2005). How to make a graph using Microsoft Excel. Unpublished manuscript. Columbus, OH: The Ohio State University.

Simek, T. C., O'Brien, R. M., & Figlerski, L. B. (1994). Contracting and chaining to improve the performance of a college golf team: Improvement and deterioration. Perceptual and Motor Skills, 78(3), 1099.

Simpson, M. J. A., & Simpson, A. E. (1977). One-zero and scan method for sampling behaviour. Animal Behaviour, 25, 726–731.

Singer, G., Singer, J., & Horner, R. (1987). Using pretask requests to increase the probability of compliance for students with severe disabilities. Journal of the Association for Persons with Severe Handicaps, 12, 287–291.

Singh, J., & Singh, N. N. (1985). Comparison of word-supply and word-analysis error-correction procedures on oral reading by mentally retarded children. American Journal of Mental Deficiency, 90, 64–70.

Singh, N. N. (1990). Effects of two error-correction procedures on oral reading errors. Behavior Modification, 11, 165–181.

Singh, N. N., & Katz, R. C. (1985). On the modification of acceptability ratings for alternative child treatments. Behavior Modification, 9, 375–386.

Singh, N. N., & Singh, J. (1984). Antecedent control of oral reading errors and self-corrections by mentally retarded children. Journal of Applied Behavior Analysis, 17, 111–119.

Singh, N. N., & Singh, J. (1986). Increasing oral reading proficiency: A comparative analysis of drill and positive practice overcorrection procedures. Behavior Modification, 10, 115–130.

Singh, N. N., & Winton, A. S. (1985). Controlling pica by components of an overcorrection procedure. American Journal of Mental Deficiency, 90, 40–45.

Singh, N. N., Dawson, M. J., & Manning, P. (1981). Effects of spaced responding DRL on the stereotyped behavior of profoundly retarded persons. Journal of Applied Behavior Analysis, 14, 521–526.

Singh, N. N., Dawson, M. J., & Manning, P. (1981). The effects of physical restraint on self-injurious behavior. Journal of Mental Deficiency Research, 25, 207–216.

Singh, N. N., Singh, J., & Winton, A. S. (1984). Positive practice overcorrection of oral reading errors. Behavior Modification, 8, 23–37.

Skiba, R., & Raison, J. (1990). Relationship between the use of timeout and academic achievement. Exceptional Children, 57(1), 36–46.

Skinner, B. F. (1938). The behavior of organisms. New York: Appleton-Century-Crofts.

Skinner, B. F. (1938/1966). The behavior of organisms: An experimental analysis. New York: Appleton-Century. (Copyright renewed in 1966 by the B. F. Skinner Foundation, Cambridge, MA.)

Skinner, B. F. (1948). Walden two. New York: Macmillan.

Skinner, B. F. (1948). Superstition in the pigeon. Journal of Experimental Psychology, 38, 168–172.

Skinner, B. F. (1953). Science and human behavior. New York: Macmillan.

Skinner, B. F. (1956). A case history in scientific method. American Psychologist, 11, 221–233.

Skinner, B. F. (1957). Verbal behavior. New York: Appleton-Century-Crofts.

Skinner, B. F. (1966). Operant behavior. In W. K. Honig (Ed.), Operant behavior: Areas of research and application (pp. 12–32). New York: Appleton-Century-Crofts.

Skinner, B. F. (1967). B. F. Skinner: An autobiography. In E. G. Boring & G. Lindzey (Eds.), A history of psychology in autobiography (Vol. 5, pp. 387–413). New York: Irvington.

Skinner, B. F. (1969). Contingencies of reinforcement: A theoretical analysis. New York: Appleton-Century-Crofts.

Skinner, B. F. (1971). Beyond freedom and dignity. New York: Knopf.

Skinner, B. F. (1974). About behaviorism. New York: Knopf.

Skinner, B. F. (1976). Particulars of my life. Washington Square, NY: New York University Press.

Skinner, B. F. (1978). Reflections on behaviorism and society. Upper Saddle River, NJ: Prentice Hall.

Skinner, B. F. (1979). The shaping of a behaviorist. Washington Square, NY: New York University Press.

Skinner, B. F. (1981a). Selection by consequences. Science, 213, 501–504.

Skinner, B. F. (1981b). How to discover what you have to say—A talk to students. The Behavior Analyst, 4, 1–7.

Skinner, B. F. (1982). Contrived reinforcement. The Behavior Analyst, 5, 3–8.

Skinner, B. F. (1983a). A matter of consequences. Washington Square, NY: New York University Press.

Skinner, B. F. (1983b). Intellectual self-management in old age. American Psychologist, 38, 239–244.

Skinner, B. F. (1989). Recent issues in the analysis of behavior. Columbus, OH: Merrill.

Skinner, B. F., & Vaughan, M. E. (1983). Enjoy old age: A program of self-management. New York: Norton.

Skinner, C. H., Cashwell, T. H., & Skinner, A. L. (2000). Increasing tootling: The effects of a peer-mediated group contingency program on students' reports of peers' prosocial behavior. Psychology in the Schools, 37(3), 263–270.

Skinner, C. H., Fletcher, P. A., Wildmon, M., & Belfiore, P. J. (1996). Improving assignment preference through interspersing additional problems: Brief versus easy problems. Journal of Behavioral Education, 6, 427–436.

Skinner, C. H., Skinner, C. F., Skinner, A. L., & Cashwell, T. H. (1999). Using interdependent contingencies with groups of students: Why the principal kissed a pig. Educational Administration Quarterly, 35(Suppl.), 806–820.

Smith, B. W., & Sugai, G. (2000). A self-management functional assessment-based behavior support plan for a middle school student with EBD. Journal of Positive Behavior Interventions, 2, 208–217.

Smith, D. H. (1987). Telling stories as a way of doing ethics. Journal of the Florida Medical Association, 74, 581–588.

Smith, D. H. (1993). Stories, values, and patient care decisions. In C. Conrad (Ed.), Ethical nexus (pp. 123–148). Norwood, NJ: Ablex.

Smith, L. D. (1992). On prediction and control: B. F. Skinner and the technological ideal of science. American Psychologist, 47, 216–223.

Smith, R. G., & Iwata, B. A. (1997). Antecedent influences on behavior disorders. Journal of Applied Behavior Analysis, 30, 343–375.

Smith, R. G., Iwata, B. A., & Shore, B. A. (1995). Effects of subject- versus experimenter-selected reinforcers on the behavior of individuals with profound developmental disabilities. Journal of Applied Behavior Analysis, 28, 61–71.

Smith, R. G., Iwata, B. A., Goh, H., & Shore, B. A. (1995). Analysis of establishing operations for self-injury maintained by escape. Journal of Applied Behavior Analysis, 28, 515–535.

Smith, R. G., Russo, L., & Le, D. D. (1999). Distinguishing between extinction and punishment effects of response blocking: A replication. Journal of Applied Behavior Analysis, 32, 367–370.

Smith, R., Michael, J., & Sundberg, M. L. (1996). Automatic reinforcement and automatic punishment in infant vocal behavior. The Analysis of Verbal Behavior, 13, 39–48.

Smith, S., & Farrell, D. (1993). Level system use in special education: Classroom intervention with prima facie appeal. Behavioral Disorders, 18(4), 251–264.

Snell, M. E., & Brown, F. (2000). Instruction of students with severe disabilities (5th ed.). Upper Saddle River, NJ: Merrill Prentice Hall.

Snell, M. E., & Brown, F. (2006). Instruction of students with severe disabilities (6th ed.). Upper Saddle River, NJ: Prentice Hall.

Solanto, M. V., Jacobson, M. S., Heller, L., Golden, N. H., & Hertz, S. (1994). Rate of weight gain of inpatients with anorexia nervosa under two behavioral contracts. Pediatrics, 93(6), 989.

Solomon, R. L. (1964). Punishment. American Psychologist, 19, 239–253.

Spies, R. A., & Plake, B. S. (Eds.). (2005). Sixteenth mental measurements yearbook. Lincoln, NE: Buros Institute of Mental Measurements.

Spooner, F., Spooner, D., & Ulicny, G. R. (1986). Comparisons of modified backward chaining: Backward chaining with leaps ahead and reverse chaining with leaps ahead. Education and Treatment of Children, 9(2), 122–134.

Spradlin, J. E. (1966). Environmental factors and the language development of retarded children. In S. Rosenberg (Ed.), Developments in applied psycholinguistics research (pp. 261–290). Riverside, NJ: Macmillan.

Spradlin, J. E. (1996). Comments on Lerman and Iwata (1996). Journal of Applied Behavior Analysis, 29, 383–385.

Spradlin, J. E. (2002). Punishment: A primary response. Journal of Applied Behavior Analysis, 35, 475–477.

Spradlin, J. E., Cotter, V. W., & Baxley, N. (1973). Establishing a conditional discrimination without direct training: A study of transfer with retarded adolescents. American Journal of Mental Deficiency, 77, 556–566.

Sprague, J. R., & Horner, R. H. (1984). The effects of single instance, multiple instance, and general case training on generalized vending machine use by moderately and severely handicapped students. Journal of Applied Behavior Analysis, 17, 273–278.

Sprague, J. R., & Horner, R. H. (1990). Easy does it: Preventing challenging behaviors. Teaching Exceptional Children, 23, 13–15.

Sprague, J. R., & Horner, R. H. (1991). Determining the acceptability of behavior support plans. In M. Wang, H. Walberg, & M. Reynolds (Eds.), Handbook of special education (pp. 125–142). Oxford, England: Pergamon Press.

Sprague, J. R., & Horner, R. H. (1992). Covariation within functional response classes: Implications for treatment of severe problem behavior. Journal of Applied Behavior Analysis, 25, 735–745.

Sprague, J., & Walker, H. (2000). Early identification and intervention for youth with antisocial and violent behavior. Exceptional Children, 66, 367–379.

Spreat, S., & Connelly, L. (1996). Reliability analysis of the Motivation Assessment Scale. American Journal on Mental Retardation, 100, 528–532.

Staats, A. W., & Staats, C. K. (1963). Complex human behavior: A systematic extension of learning principles. New York: Holt, Rinehart and Winston.

Stack, L. Z., & Milan, M. A. (1993). Improving dietary practices of elderly individuals: The power of prompting, feedback, and social reinforcement. Journal of Applied Behavior Analysis, 26, 379–387.

Staddon, J. E. R. (1977). Schedule-induced behavior. In W. K. Honig & J. E. R. Staddon (Eds.), Handbook of operant behavior (pp. 125–152). Upper Saddle River, NJ: Prentice Hall.

Stage, S. A., & Quiroz, D. R. (1997). A meta-analysis of interventions to decrease disruptive classroom behavior in public education settings. School Psychology Review, 26, 333–368.

Starin, S., Hemingway, M., & Hartsfield, F. (1993). Credentialing behavior analysts and the Florida behavior analysis certification program. The Behavior Analyst, 16, 153–166.

Steege, M. W., Wacker, D. P., Cigrand, K. C., Berg, W. K., Novak, C. G., Reimers, T. M., Sasso, G. M., & DeRaad, A. (1990). Use of negative reinforcement in the treatment of self-injurious behavior. Journal of Applied Behavior Analysis, 23, 459–467.

Stephens, K. R., & Hutchison, W. R. (1992). Behavioral personal digital assistants: The seventh generation of computing. The Analysis of Verbal Behavior, 10, 149–156.

Steuart, W. (1993). Effectiveness of arousal and arousal plus overcorrection to reduce nocturnal bruxism. Journal of Behavior Therapy and Experimental Psychiatry, 24, 181–185.

Stevenson, H. C., & Fantuzzo, J. W. (1984). Application of the "generalization map" to a self-control intervention with school-aged children. Journal of Applied Behavior Analysis, 17, 203–212.

Stewart, C. A., & Singh, N. N. (1986). Overcorrection of spelling deficits in mentally retarded persons. Behavior Modification, 10, 355–365.

Stitzer, M. L., Bigelow, G. E., Liebson, I. A., & Hawthorne, J. W. (1982). Contingent reinforcement for benzodiazepine-free urines: Evaluation of a drug abuse treatment intervention. Journal of Applied Behavior Analysis, 15, 493–503.

Stokes, T. (2003). A genealogy of applied behavior analysis. In K. S. Budd & T. Stokes (Eds.), A small matter of proof: The legacy of Donald M. Baer (pp. 257–272). Reno, NV: Context Press.

Stokes, T. F., & Baer, D. M. (1976).Preschool peers as mutual generaliza-tion-facilitating agents. Behavior Ther-apy, 7, 599–610.

Stokes, T. F., & Baer, D. M. (1977). An implicit technology of generalization. Journal of Applied Behavior Analysis, 10, 349–367.

Stokes, T. F., & Osnes, P. G. (1982). Programming the generalization of children’s social behavior. In P. S. Strain, M. J. Guralnick, & H. M. Walker (Eds.), Children’s social behavior: Development, assessment, and modification (pp. 407–443). Orlando, FL: Academic Press.

Stokes, T. F., & Osnes, P. G. (1989). An operant pursuit of generalization. Behavior Therapy, 20, 337–355.

Stokes, T. F., Baer, D. M., & Jackson, R. L. (1974). Programming the generalization of a greeting response in four retarded children. Journal of Applied Behavior Analysis, 7, 599–610.

Stokes, T. F., Fowler, S. A., & Baer, D. M. (1978). Training preschool children to recruit natural communities of reinforcement. Journal of Applied Behavior Analysis, 11, 285–303.

Stolz, S. B. (1978). Ethical issues in behavior modification. San Francisco: Jossey-Bass.

Strain, P. S., & Joseph, G. E. (2004). A not so good job with “Good job.” Journal of Positive Behavior Interventions, 6(1), 55–59.

Strain, P. S., McConnell, S. R., Carta, J. J., Fowler, S. A., Neisworth, J. T., & Wolery, M. (1992). Behaviorism in early intervention. Topics in Early Childhood Special Education, 12, 121–142.

Strain, P. S., Shores, R. E., & Kerr, M. M. (1976). An experimental analysis of “spillover” effects on the social interaction of behaviorally handicapped preschool children. Journal of Applied Behavior Analysis, 9, 31–40.

Striefel, S. (1974). Behavior modification: Teaching a child to imitate. Austin, TX: Pro-Ed.

Stromer, R. (2000). Integrating basic and applied research and the utility of Lattal and Perone’s Handbook of Research Methods in Human Operant Behavior. Journal of Applied Behavior Analysis, 33, 119–136.

Stromer, R., McComas, J. J., & Rehfeldt, R. A. (2000). Designing interventions that include delayed reinforcement: Implications of recent laboratory research. Journal of Applied Behavior Analysis, 33, 359–371.

Sugai, G. M., & Tindal, G. A. (1993). Effective school consultation: An interactive approach. Pacific Grove, CA: Brooks/Cole.

Sulzer-Azaroff, B., & Mayer, G. R. (1977). Applying behavior-analysis procedures with children and youth. New York: Holt, Rinehart & Winston.

Sundberg, M. L. (1983). Language. In J. L. Matson & S. E. Breuning (Eds.), Assessing the mentally retarded (pp. 285–310). New York: Grune & Stratton.

Sundberg, M. L. (1991). 301 research topics from Skinner’s book Verbal Behavior. The Analysis of Verbal Behavior, 9, 81–96.

Sundberg, M. L. (1993). The application of establishing operations. The Behavior Analyst, 16, 211–214.

Sundberg, M. L. (1998). Realizing the potential of Skinner’s analysis of verbal behavior. The Analysis of Verbal Behavior, 15, 143–147.

Sundberg, M. L. (2004). A behavioral analysis of motivation and its relation to mand training. In L. W. Williams (Ed.), Developmental disabilities: Etiology, assessment, intervention, and integration. Reno, NV: Context Press.

Sundberg, M. L., & Michael, J. (2001). The value of Skinner’s analysis of verbal behavior for teaching children with autism. Behavior Modification, 25, 698–724.

Sundberg, M. L., & Partington, J. W. (1998). Teaching language to children with autism or other developmental disabilities. Pleasant Hill, CA: Behavior Analysts, Inc.

Sundberg, M. L., Endicott, K., & Eigenheer, P. (2000). Using intraverbal prompts to establish tacts for children with autism. The Analysis of Verbal Behavior, 17, 89–104.

Sundberg, M. L., Loeb, M., Hale, L., & Eigenheer, P. (2002). Contriving establishing operations to teach mands for information. The Analysis of Verbal Behavior, 18, 14–28.

Sundberg, M. L., Michael, J., Partington, J. W., & Sundberg, C. A. (1996). The role of automatic reinforcement in early language acquisition. The Analysis of Verbal Behavior, 13, 21–37.

Sundberg, M. L., San Juan, B., Dawdy, M., & Arguelles, M. (1990). The acquisition of tacts, mands, and intraverbals by individuals with traumatic brain injury. The Analysis of Verbal Behavior, 8, 83–99.

Surratt, P. R., Ulrich, R. E., & Hawkins, R. P. (1969). An elementary student as a behavioral engineer. Journal of Applied Behavior Analysis, 2, 85–92.

Sutherland, K. S., Wehby, J. H., & Yoder, P. J. (2002). An examination of the relation between teacher praise and students with emotional/behavioral disorders’ opportunities to respond to academic requests. Journal of Emotional and Behavioral Disorders, 10, 5–13.

Swanson, H. L., & Sachse-Lee, C. (2000). A meta-analysis of single-subject-design intervention research for students with LD. Journal of Learning Disabilities, 38, 114–136.

Sweeney, W. J., Salva, E., Cooper, J. O., & Talbert-Johnson, C. (1993). Using self-evaluation to improve difficult-to-read handwriting of secondary students. Journal of Behavioral Education, 3, 427–443.

Symons, F. J., Hoch, J., Dahl, N. A., & McComas, J. J. (2003). Sequential and matching analyses of self-injurious behavior: A case of overmatching in the natural environment. Journal of Applied Behavior Analysis, 36, 267–270.

Symons, F. J., McDonald, L. M., & Wehby, J. H. (1998). Functional assessment and teacher collected data. Education and Treatment of Children, 21(2), 135–159.

Szempruch, J., & Jacobson, J. W. (1993). Evaluating the facilitated communications of people with developmental disabilities. Research in Developmental Disabilities, 14, 253–264.

Tang, J., Kennedy, C. H., Koppekin, A., & Caruso, M. (2002). Functional analysis of stereotypical ear covering in a child with autism. Journal of Applied Behavior Analysis, 35, 95–98.

Tapp, J. T., & Walden, T. A. (2000). A system for collecting and analysis of observational data from videotape. In T. Thompson, D. Felce, & F. J. Symons (Eds.), Behavioral observation: Technology and applications in developmental disabilities (pp. 61–70). Baltimore: Paul H. Brookes.

Tapp, J. T., & Wehby, J. H. (2000). Observational software for laptop computers and optical bar code readers. In T. Thompson, D. Felce, & F. J. Symons (Eds.), Behavioral observation: Technology and applications in developmental disabilities (pp. 71–81). Baltimore: Paul H. Brookes.

Tapp, J. T., Wehby, J. H., & Ellis, D. M. (1995). A multiple option observation system for experimental studies: MOOSES. Behavior Research Methods, Instruments & Computers, 27, 25–31.

Tarbox, R. S. F., Wallace, M. D., & Williams, L. (2003). Assessment and treatment of elopement: A replication and extension. Journal of Applied Behavior Analysis, 36, 239–244.

Tarbox, R. S. F., Williams, W. L., & Friman, P. C. (2004). Extended diaper wearing: Effects on continence in and out of the diaper. Journal of Applied Behavior Analysis, 37, 101–105.

Tawney, J., & Gast, D. (1984). Single subject research in special education. Columbus, OH: Charles E. Merrill.

Taylor, L. K., & Alber, S. R. (2003). The effects of classwide peer tutoring on the spelling achievement of first graders with disabilities. The Behavior Analyst Today, 4, 181–189.

Taylor, R. L. (2006). Assessment of exceptional students: Educational and psychological procedures (7th ed.). Boston: Pearson/Allyn and Bacon.

Terrace, H. S. (1963a). Discrimination learning with and without “errors.” Journal of the Experimental Analysis of Behavior, 6, 1–27.

Terrace, H. S. (1963b). Errorless transfer of a discrimination across two continua. Journal of the Experimental Analysis of Behavior, 6, 223–232.

Terris, W., & Barnes, M. (1969). Learned resistance to punishment and subsequent responsiveness to the same and novel punishers. Psychonomic Science, 15, 49–50.

Test, D. W., & Heward, W. L. (1983). Teaching road signs and traffic laws to learning disabled students. Science Education, 64, 129–139.

Test, D. W., & Heward, W. L. (1984). Accuracy of momentary time sampling: A comparison of fixed- and variable-interval observation schedules. In W. L. Heward, T. E. Heron, D. S. Hill, & J. Trap-Porter (Eds.), Focus on behavior analysis in education (pp. 177–194). Columbus, OH: Charles E. Merrill.

Test, D. W., Spooner, F., Keul, P. K., & Grossi, T. (1990). Teaching adolescents with severe disability to use the public telephone. Behavior Modification, 14, 157–171.

Thompson, R. H., & Iwata, B. A. (2000). Response acquisition under direct and indirect contingencies of reinforcement. Journal of Applied Behavior Analysis, 33, 1–11.

Thompson, R. H., & Iwata, B. A. (2001). A descriptive analysis of social consequences following problem behavior. Journal of Applied Behavior Analysis, 34, 169–178.

Thompson, R. H., & Iwata, B. A. (2005). A review of reinforcement control procedures. Journal of Applied Behavior Analysis, 38, 257–278.

Thompson, R. H., Fisher, W. W., Piazza, C. C., & Kuhn, D. E. (1998). The evaluation and treatment of aggression maintained by attention and automatic reinforcement. Journal of Applied Behavior Analysis, 31, 103–116.

Thompson, R. H., Iwata, B. A., Conners, J., & Roscoe, E. M. (1999). Effects of reinforcement for alternative behavior during punishment of self-injury. Journal of Applied Behavior Analysis, 32, 317–328.

Thompson, T. J., Braam, S. J., & Fuqua, R. W. (1982). Training and generalization of laundry skills: A multiple probe evaluation with handicapped persons. Journal of Applied Behavior Analysis, 15, 177–182.

Thompson, T., Symons, F. J., & Felce, D. (2000). Principles of behavioral observation. In T. Thompson, F. J. Symons, & D. Felce (Eds.), Behavioral observation: Technology and applications in developmental disabilities (pp. 3–16). Baltimore: Paul H. Brookes.

Thomson, C., Holmberg, M., & Baer, D. M. (1974). A brief report on a comparison of time sampling procedures. Journal of Applied Behavior Analysis, 7, 623–626.

Thoresen, C. E., & Mahoney, M. J. (1974). Behavioral self-control. New York: Holt, Rinehart & Winston.

Timberlake, W., & Allison, J. (1974). Response deprivation: An empirical approach to instrumental performance. Psychological Review, 81, 146–164.

Tincani, M. (2004). Comparing the Picture Exchange Communication System and sign language training for children with autism. Focus on Autism and Other Developmental Disabilities, 19, 152–163.

Todd, A. W., Horner, R. H., & Sugai, G. (1999). Self-monitoring and self-recruited praise: Effects on problem behavior, academic engagement, and work completion in a typical classroom. Journal of Positive Behavior Interventions, 1, 66–76, 122.

Todd, J. T., & Morris, E. K. (1983). Misconception and miseducation: Presentations of radical behaviorism in psychology textbooks. The Behavior Analyst, 6, 153–160.

Todd, J. T., & Morris, E. K. (1992). Case histories in the great power of steady misrepresentation. American Psychologist, 47, 1441–1453.

Todd, J. T., & Morris, E. K. (1993). Change and be ready to change again. American Psychologist, 48, 1158–1159.

Todd, J. T., & Morris, E. K. (Eds.). (1994). Modern perspectives on John B. Watson and classical behaviorism. Westport, CT: Greenwood Press.

Touchette, P. E., MacDonald, R. F., & Langer, S. N. (1985). A scatter plot for identifying stimulus control of problem behavior. Journal of Applied Behavior Analysis, 18, 343–351.

Trammel, D. L., Schloss, P. J., & Alper, S. (1994). Using self-recording, evaluation, and graphing to increase completion of homework assignments. Journal of Learning Disabilities, 27, 75–81.

Trap, J. J., Milner-Davis, P., Joseph, S., & Cooper, J. O. (1978). The effects of feedback and consequences on transitional cursive letter formation. Journal of Applied Behavior Analysis, 11, 381–393.

Trask-Tyler, S. A., Grossi, T. A., & Heward, W. L. (1994). Teaching young adults with developmental disabilities and visual impairments to use tape-recorded recipes: Acquisition, generalization, and maintenance of cooking skills. Journal of Behavioral Education, 4, 283–311.

Tucker, D. J., & Berry, G. W. (1980). Teaching severely multihandicapped students to put on their own hearing aids. Journal of Applied Behavior Analysis, 13, 65–75.

Tufte, E. R. (1983). The visual display of quantitative information. Cheshire, CT: Graphics Press.

Tufte, E. R. (1990). Envisioning information. Cheshire, CT: Graphics Press.

Turkewitz, H., O’Leary, K. D., & Ironsmith, M. (1975). Generalization and maintenance of appropriate behavior through self-control. Journal of Consulting and Clinical Psychology, 43, 577–583.

Turnbull, H. R., III, & Turnbull, A. P. (1998). Free appropriate public education: The law and children with disabilities. Denver: Love.

Tustin, R. D. (1994). Preference for reinforcers under varying schedule arrangements: A behavioral economic analysis. Journal of Applied Behavior Analysis, 28, 61–71.

Twohig, M. P., & Woods, D. W. (2001). Evaluating the duration of the competing response in habit reversal: A parametric analysis. Journal of Applied Behavior Analysis, 34, 517–520.

Twyman, J., Johnson, H., Buie, J., & Nelson, C. M. (1994). The use of a warning procedure to signal a more intrusive timeout contingency. Behavioral Disorders, 19(4), 243–253.

U.S. Department of Education. (2003). Proven methods: Questions and answers on No Child Left Behind. Washington, DC: Author. Retrieved October 24, 2005, from www.ed.gov/nclb/methods/whatworks/doing.html

Ulman, J. D., & Sulzer-Azaroff, B. (1975). Multi-element baseline design in educational research. In E. Ramp & G. Semb (Eds.), Behavior analysis: Areas of research and application (pp. 371–391). Upper Saddle River, NJ: Prentice Hall.

Ulrich, R. E., & Azrin, N. H. (1962). Reflexive fighting in response to aversive stimulation. Journal of the Experimental Analysis of Behavior, 5, 511–520.

Ulrich, R. E., Stachnik, T., & Mabry, J. (Eds.). (1974). Control of human behavior (Vol. 3): Behavior modification in education. Glenview, IL: Scott, Foresman and Company.

Ulrich, R. E., Wolff, P. C., & Azrin, N. H. (1962). Shock as an elicitor of intra- and inter-species fighting behavior. Animal Behaviour, 12, 14–15.

Umbreit, J., Lane, K., & Dejud, C. (2004). Improving classroom behavior by modifying task difficulty: Effects of increasing the difficulty of too-easy tasks. Journal of Positive Behavior Interventions, 6, 13–20.

Valk, J. E. (2003). The effects of embedded instruction within the context of a small group on the acquisition of imitation skills of young children with disabilities. Unpublished doctoral dissertation, The Ohio State University.

Van Acker, R., Grant, S. H., & Henry, D. (1996). Teacher and student behavior as a function of risk for aggression. Education and Treatment of Children, 19, 316–334.

Van Camp, C. M., Lerman, D. C., Kelley, M. E., Contrucci, S. A., & Vorndran, C. M. (2000). Variable-time reinforcement schedules in the treatment of socially maintained problem behavior. Journal of Applied Behavior Analysis, 33, 545–557.

van den Pol, R. A., Iwata, B. A., Ivancic, M. T., Page, T. J., Neef, N. A., & Whitley, F. P. (1981). Teaching the handicapped to eat in public places: Acquisition, generalization and maintenance of restaurant skills. Journal of Applied Behavior Analysis, 14, 61–69.

Van Houten, R. (1979). Social validation: The evolution of standards of competency for target behaviors. Journal of Applied Behavior Analysis, 12, 581–591.

Van Houten, R. (1993). Use of wrist weights to reduce self-injury maintained by sensory reinforcement. Journal of Applied Behavior Analysis, 26, 197–203.

Van Houten, R. (1994). The right to effective behavioral treatment. In L. J. Hayes, G. J. Hayes, S. C. Moore, & P. M. Ghezzi (Eds.), Ethical issues in developmental disabilities (pp. 103–118). Reno, NV: Context Press.

Van Houten, R., Axelrod, S., Bailey, J. S., Favell, J. E., Foxx, R. M., Iwata, B. A., & Lovaas, O. I. (1988). The right to effective behavioral treatment. The Behavior Analyst, 11, 111–114.

Van Houten, R., & Doleys, D. M. (1983). Are social reprimands effective? In S. Axelrod & J. Apsche (Eds.), The effects of punishment on human behavior (pp. 45–70). New York: Academic Press.

Van Houten, R., & Malenfant, J. E. L. (2004). Effects of a driver enforcement program on yielding to pedestrians. Journal of Applied Behavior Analysis, 37, 351–363.

Van Houten, R., Malenfant, J. E., Austin, J., & Lebbon, A. (2005). The effects of a seatbelt-gearshift delay prompt on the seatbelt use of motorists who do not regularly wear seatbelts. Journal of Applied Behavior Analysis, 38, 195–203.

Van Houten, R., Malenfant, L., & Rolider, A. (1985). Increasing driver yielding and pedestrian signaling with prompting, feedback, and enforcement. Journal of Applied Behavior Analysis, 18, 103–110.

Van Houten, R., & Nau, P. A. (1981). A comparison of the effects of posted feedback and increased police surveillance on highway speeding. Journal of Applied Behavior Analysis, 14, 261–271.

Van Houten, R., & Nau, P. A. (1983). Feedback interventions and driving speed: A parametric and comparative analysis. Journal of Applied Behavior Analysis, 17, 253–281.

Van Houten, R., Nau, P. A., Mackenzie-Keating, S. E., Sameoto, D., & Colavecchia, B. (1982). An analysis of some variables influencing the effectiveness of reprimands. Journal of Applied Behavior Analysis, 15, 65–83.

Van Houten, R., Nau, P. A., & Marini, Z. (1980). An analysis of public posting in reducing speeding behavior on an urban highway. Journal of Applied Behavior Analysis, 13, 383–395.

Van Houten, R., & Retting, R. A. (2001). Increasing motorist compliance and caution at stop signs. Journal of Applied Behavior Analysis, 34, 185–193.

Van Houten, R., & Rolider, A. (1988). Recreating the scene: An effective way to provide delayed punishment for inappropriate motor behavior. Journal of Applied Behavior Analysis, 21, 187–192.

Van Houten, R., & Rolider, A. (1990). The role of reinforcement in reducing inappropriate behavior: Some myths and misconceptions. In A. C. Repp & N. N. Singh (Eds.), Perspectives on the use of nonaversive and aversive interventions for persons with developmental disabilities (pp. 119–127). Sycamore, IL: Sycamore.

Van Norman, R. K. (2005). The effects of functional communication training, choice making, and an adjusting work schedule on problem behavior maintained by negative reinforcement. Unpublished doctoral dissertation, The Ohio State University, Columbus.

Vargas, E. A. (1986). Intraverbal behavior. In P. N. Chase & L. J. Parrott (Eds.), Psychological aspects of language (pp. 128–151). Springfield, IL: Charles C. Thomas.

Vargas, J. S. (1978). A behavioral approach to the teaching of composition. The Behavior Analyst, 1, 16–24.

Vargas, J. S. (1984). What are your exercises teaching? An analysis of stimulus control in instructional materials. In W. L. Heward, T. E. Heron, D. S. Hill, & J. Trap-Porter (Eds.), Focus on behavior analysis in education (pp. 126–141). Columbus, OH: Merrill.

Vargas, J. S. (1990). B. F. Skinner—The last few days. Journal of Applied Behavior Analysis, 23, 409–410.

Vaughan, M. (1989). Rule-governed behavior in behavior analysis: A theoretical and experimental history. In S. C. Hayes (Ed.), Rule-governed behavior: Cognition, contingencies, and instructional control (pp. 97–118). New York: Plenum Press.

Vaughan, M. E., & Michael, J. L. (1982). Automatic reinforcement: An important but ignored concept. Behaviorism, 10, 217–227.

Vollmer, T. R. (1994). The concept of automatic reinforcement: Implications for behavioral research in developmental disabilities. Research in Developmental Disabilities, 15(3), 187–207.

Vollmer, T. R. (2002). Punishment happens: Some comments on Lerman and Vorndran’s review. Journal of Applied Behavior Analysis, 35, 469–473.

Vollmer, T. R. (2006, May). On the utility of automatic reinforcement in applied behavior analysis. Paper presented at the 32nd annual meeting of the Association for Behavior Analysis, Atlanta, GA.

Vollmer, T. R., & Hackenberg, T. D. (2001). Reinforcement contingencies and social reinforcement: Some reciprocal relations between basic and applied research. Journal of Applied Behavior Analysis, 34, 241–253.

Vollmer, T. R., & Iwata, B. A. (1991). Establishing operations and reinforcement effects. Journal of Applied Behavior Analysis, 24, 279–291.

Vollmer, T. R., & Iwata, B. A. (1992). Differential reinforcement as treatment for behavior disorders: Procedural and functional variations. Research in Developmental Disabilities, 13, 393–417.

Vollmer, T. R., Marcus, B. A., Ringdahl, J. E., & Roane, H. S. (1995). Progressing from brief assessments to extended experimental analyses in the evaluation of aberrant behavior. Journal of Applied Behavior Analysis, 28, 561–576.

Vollmer, T. R., Progar, P. R., Lalli, J. S., Van Camp, C. M., Sierp, B. J., Wright, C. S., Nastasi, J., & Eisenschink, K. J. (1998). Fixed-time schedules attenuate extinction-induced phenomena in the treatment of severe aberrant behavior. Journal of Applied Behavior Analysis, 31, 529–542.

Vollmer, T. R., Roane, H. S., Ringdahl, J. E., & Marcus, B. A. (1999). Evaluating treatment challenges with differential reinforcement of alternative behavior. Journal of Applied Behavior Analysis, 32, 9–23.

Wacker, D. P., & Berg, W. K. (1983). Effects of picture prompts on the acquisition of complex vocational tasks by mentally retarded adolescents. Journal of Applied Behavior Analysis, 16, 417–433.

Wacker, D. P., Berg, W. K., Wiggins, B., Muldoon, M., & Cavanaugh, J. (1985). Evaluation of reinforcer preferences for profoundly handicapped students. Journal of Applied Behavior Analysis, 18, 173–178.

Wacker, D., Steege, M., Northup, J., Reimers, T., Berg, W., & Sasso, G. (1990). Use of functional analysis and acceptability measures to assess and treat severe behavior problems: An outpatient clinic model. In A. C. Repp & N. N. Singh (Eds.), Perspectives on the use of nonaversive and aversive interventions for persons with developmental disabilities (pp. 349–359). Sycamore, IL: Sycamore.

Wagaman, J. R., Miltenberger, R. G., & Arndorfer, R. E. (1993). Analysis of a simplified treatment for stuttering. Journal of Applied Behavior Analysis, 26, 53–61.

Wagaman, J. R., Miltenberger, R. G., & Woods, D. W. (1995). Long-term follow-up of a behavioral treatment for stuttering. Journal of Applied Behavior Analysis, 28, 233–234.

Wahler, R. G., & Fox, J. J. (1980). Solitary toy play and time out: A family treatment package for children with aggressive and oppositional behavior. Journal of Applied Behavior Analysis, 13, 23–39.

Walker, H. M. (1983). Application of response cost in school settings: Outcomes, issues and recommendations. Exceptional Education Quarterly, 3, 46–55.

Walker, H. M. (1997). The acting out child: Coping with classroom disruption (2nd ed.). Longmont, CO: Sopris West.

Wallace, I. (1977). Self-control techniques of famous novelists. Journal of Applied Behavior Analysis, 10, 515–525.

Ward, P., & Carnes, M. (2002). Effects of posting self-set goals on collegiate football players’ skill execution during practice and games. Journal of Applied Behavior Analysis, 35, 1–12.

Warren, S. F. (1992). Facilitating basic vocabulary acquisition with milieu teaching procedures. Journal of Early Intervention, 16, 235–251.

Warren, S. F., Rogers-Warren, A., & Baer, D. M. (1976). The role of offer rates in controlling sharing by young children. Journal of Applied Behavior Analysis, 9, 491–497.

Watkins, C. L., Pack-Teixeira, L., & Howard, J. S. (1989). Teaching intraverbal behavior to severely retarded children. The Analysis of Verbal Behavior, 7, 69–81.

Watson, D. L., & Tharp, R. G. (2007). Self-directed behavior: Self-modification for personal adjustment (9th ed.). Belmont, CA: Wadsworth/Thomson Learning.

Watson, J. B. (1913). Psychology as the behaviorist views it. Psychological Review, 20, 158–177.

Watson, J. B. (1924). Behaviorism. New York: W. W. Norton.

Watson, P. J., & Workman, E. A. (1981). The nonconcurrent multiple baseline across-individuals design: An extension of the traditional multiple baseline design. Journal of Behavior Therapy and Experimental Psychiatry, 12, 257–259.

Watson, T. S. (1996). A prompt plus delayed contingency procedure for reducing bathroom graffiti. Journal of Applied Behavior Analysis, 29, 121–124.

Webber, J., & Scheuermann, B. (1991). Accentuate the positive . . . eliminate the negative. Teaching Exceptional Children, 24(1), 13–19.

Weber, L. H. (2002). The cumulative record as a management tool. Behavioral Technology Today, 2, 1–8.

Weeks, M., & Gaylord-Ross, R. (1981). Task difficulty and aberrant behavior in severely handicapped students. Journal of Applied Behavior Analysis, 14, 449–463.

Wehby, J. H., & Hollahan, M. S. (2000). Effects of high-probability requests on the latency to initiate academic tasks. Journal of Applied Behavior Analysis, 33, 259–262.

Weiher, R. G., & Harman, R. E. (1975). The use of omission training to reduce self-injurious behavior in a retarded child. Behavior Therapy, 6, 261–268.

Weiner, H. (1962). Some effects of response cost upon human operant behavior. Journal of the Experimental Analysis of Behavior, 5, 201–208.

Wenrich, W. W., Dawley, H. H., & General, D. A. (1976). Self-directed systematic desensitization: A guide for the student, client, and therapist. Kalamazoo, MI: Behaviordelia.

Werts, M. G., Caldwell, N. K., & Wolery, M. (1996). Peer modeling of response chains: Observational learning by students with disabilities. Journal of Applied Behavior Analysis, 29, 53–66.

West, R. P., & Smith, T. G. (2002, September 21). Managing the behavior of groups of students in public schools: Clocklights and group contingencies. Paper presented at The Ohio State University Third Focus on Behavior Analysis in Education Conference, Columbus, OH.

West, R. P., Young, K. R., & Spooner, F. (1990). Precision teaching: An introduction. Teaching Exceptional Children, 22, 4–9.

Wetherington, C. L. (1982). Is adjunctive behavior a third class of behavior? Neuroscience and Biobehavioral Reviews, 6, 329–350.

Whaley, D. L., & Malott, R. W. (1971). Elementary principles of behavior. Englewood Cliffs, NJ: Prentice Hall.

Whaley, D. L., & Surratt, S. L. (1968). Attitudes of science. Kalamazoo, MI: Behaviordelia.

Wheeler, D. L., Jacobson, J. W., Paglieri, R. A., & Schwartz, A. A. (1993). An experimental assessment of facilitated communication. Mental Retardation, 31, 49–60.

White, A., & Bailey, J. (1990). Reducing disruptive behaviors of elementary physical education students with sit and watch. Journal of Applied Behavior Analysis, 23, 353–359.

White, D. M. (1991). Use of guided notes to promote generalized notetaking behavior of high school students with learning disabilities. Unpublished master’s thesis, The Ohio State University, Columbus.

White, G. D. (1977). The effects of observer presence on the activity level of families. Journal of Applied Behavior Analysis, 10, 734.

White, M. A. (1975). Natural rates of teacher approval and disapproval in the classroom. Journal of Applied Behavior Analysis, 8, 367–372.

White, O. (2005). Trend lines. In G. Sugai & R. Horner (Eds.), Encyclopedia of behavior modification and cognitive behavior therapy, Volume 3: Educational applications. Pacific Grove, CA: Sage Publications.

White, O. R. (1971). The “split-middle”: A “quickie” method of trend estimation (Working Paper No. 1). Eugene: University of Oregon, Regional Center for Handicapped Children.

White, O. R., & Haring, N. G. (1980). Exceptional teaching (2nd ed.). Columbus, OH: Charles E. Merrill.

Wieseler, N. A., Hanson, R. H., Chamberlain, T. P., & Thompson, T. (1985). Functional taxonomy of stereotypic and self-injurious behavior. Mental Retardation, 23, 230–234.

Wilder, D. A., & Carr, J. E. (1998). Recent advances in the modification of establishing operations to reduce aberrant behavior. Behavioral Interventions, 13, 43–59.

Wilder, D. A., Masuda, A., O’Connor, C., & Baham, M. (2001). Brief functional analysis and treatment of bizarre vocalizations in an adult with schizophrenia. Journal of Applied Behavior Analysis, 34, 65–68.

Wilkenfeld, J., Nickel, M., Blakely, E., & Poling, A. (1992). Acquisition of lever press responding in rats with delayed reinforcement: A comparison of three procedures. Journal of the Experimental Analysis of Behavior, 51, 431–443.

Wilkinson, G. S. (1994). Wide Range Achievement Test—3. Austin, TX: Pro-Ed.

Wilkinson, L. A. (2003). Using behavioral consultation to reduce challenging behavior in the classroom. Preventing School Failure, 47(3), 100–105.

Williams, C. D. (1959). The elimination of tantrum behavior by extinction procedures. Journal of Abnormal and Social Psychology, 59, 269.

Williams, D. E., Kirkpatrick-Sanchez, S., & Iwata, B. A. (1993). A comparison of shock intensity in the treatment of long-standing and severe self-injurious behavior. Research in Developmental Disabilities, 14, 207–219.

Williams, G., Donley, C. R., & Keller, J. W. (2000). Teaching children with autism to ask questions about hidden objects. Journal of Applied Behavior Analysis, 33, 627–630.

Williams, J. A., Koegel, R. L., & Egel, A. L. (1981). Response-reinforcer relationships and improved learning in autistic children. Journal of Applied Behavior Analysis, 14, 53–60.

Williams, J. L. (1973). Operant learning: Procedures for changing behavior. Monterey, CA: Brooks/Cole.

Windsor, J., Piche, L. M., & Locke, P. A. (1994). Preference testing: A comparison of two presentation methods. Research in Developmental Disabilities, 15, 439–455.

Winett, R. A., & Winkler, R. C. (1972). Current behavior modification in the classroom: Be still, be quiet, be docile. Journal of Applied Behavior Analysis, 5, 499–504.

Winett, R. A., Moore, J. F., & Anderson, E. S. (1991). Extending the concept of social validity: Behavior analysis for disease prevention and health promotion. Journal of Applied Behavior Analysis, 24, 215–230.

Winett, R. A., Neale, M. S., & Grier, H. C. (1979). Effects of self-monitoring on residential electricity consumption. Journal of Applied Behavior Analysis, 12, 173–184.

Witt, J. C., Noell, G. H., LaFleur, L. H., & Mortenson, B. P. (1997). Teacher use of interventions in general education settings: Measurement and analysis of the independent variable. Journal of Applied Behavior Analysis, 30, 693–696.

Wittgenstein, L. (1953). Philosophical investigations. New York: Macmillan.

Wolery, M. (1994). Procedural fidelity: A reminder of its functions. Journal of Behavioral Education, 4, 381–386.

Wolery, M., & Gast, D. L. (1984). Effective and efficient procedures for the transfer of stimulus control. Topics in Early Childhood Special Education, 4, 52–77.


Wolery, M., & Schuster, J. W. (1997). Instructional methods with students who have significant disabilities. Journal of Special Education, 31, 61–79.

Wolery, M., Ault, M. J., Gast, D. L., Doyle, P. M., & Griffen, A. K. (1991). Teaching chained tasks in dyads: Acquisition of target and observational behaviors. The Journal of Special Education, 25(2), 198–220.

Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203–214.

Wolf, M. M., Risley, T. R., & Mees, H. L. (1964). Application of operant conditioning procedures to improve the behaviour problems of an autistic child. Behavior Research and Therapy, 1, 305–312.

Wolfe, L. H., Heron, T. E., & Goddard, Y. I. (2000). Effects of self-monitoring on the on-task behavior and written language performance of elementary students with learning disabilities. Journal of Behavioral Education, 10, 49–73.

Wolfensberger, W. (1972). The principle of normalization in human services. Toronto: National Institute on Mental Retardation.

Wolff, R. (1977). Systematic desensitization and negative practice to alter the aftereffects of a rape attempt. Journal of Behavior Therapy and Experimental Psychiatry, 8, 423–425.

Wolford, T., Alber, S. R., & Heward, W. L. (2001). Teaching middle school students with learning disabilities to recruit peer assistance during cooperative learning group activities. Learning Disabilities Research & Practice, 16, 161–173.

Wolpe, J. (1958). Psychotherapy by reciprocal inhibition. Stanford, CA: Stanford University Press.

Wolpe, J. (1973). The practice of behavior therapy (2nd ed.). New York: Pergamon Press.

Wong, S. E., Seroka, P. L., & Ogisi, J. (2000). Effects of a checklist on self-assessment of blood glucose level by a memory-impaired woman with Diabetes Mellitus. Journal of Applied Behavior Analysis, 33, 251–254.

Wood, F. H., & Braaten, S. (1983). Developing guidelines for the use of punishing interventions in the schools. Exceptional Education Quarterly, 3, 68–75.

Wood, S. J., Murdock, J. Y., Cronin, M. E., Dawson, N. M., & Kirby, P. C. (1998). Effects of self-monitoring on on-task behaviors of at-risk middle school students. Journal of Behavioral Education, 9, 263–279.

Woods, D. W., & Miltenberger, R. G. (1995). Habit reversal: A review of applications and variations. Journal of Behavior Therapy and Experimental Psychiatry, 26(2), 123–131.

Woods, D. W., Twohig, M. P., Flessner, C. A., & Roloff, T. J. (2003). Treatment of vocal tics in children with Tourette syndrome: Investigating the efficacy of habit reversal. Journal of Applied Behavior Analysis, 36, 109–112.

Worsdell, A. S., Iwata, B. A., Dozier, C. L., Johnson, A. D., Neidert, P. L., & Thomason, J. L. (2005). Analysis of response repetition as an error-correction strategy during sight-word reading. Journal of Applied Behavior Analysis, 38, 511–527.

Wright, C. S., & Vollmer, T. R. (2002). Evaluation of a treatment package to reduce rapid eating. Journal of Applied Behavior Analysis, 35, 89–93.

Wyatt v. Stickney, 344 F. Supp. 387, 344 F. Supp. 373 (M.D. Ala. 1972), 344 F. Supp. 1341, 325 F. Supp. 781 (M.D. Ala. 1971), aff’d sub nom. Wyatt v. Aderholt, 503 F.2d 1305 (5th Cir. 1974).

Yeaton, W. H., & Bailey, J. S. (1983). Utilization analysis of a pedestrian safety training program. Journal of Applied Behavior Analysis, 16, 203–216.

Yell, M. (1994). Timeout and students with behavior disorders: A legal analysis. Education and Treatment of Children, 17(3), 293–301.

Yell, M. (1998). The law and special education. Upper Saddle River, NJ: Merrill/Prentice Hall.

Yell, M. L., & Drasgow, E. (2000). Litigating a free appropriate public education: The Lovaas hearings and cases. Journal of Special Education, 33, 206–215.

Yoon, S., & Bennett, G. M. (2000). Effects of a stimulus–stimulus pairing procedure on conditioning vocal sounds as reinforcers. The Analysis of Verbal Behavior, 17, 75–88.

Young, J. M., Krantz, P. J., McClannahan, L. E., & Poulson, C. L. (1994). Generalized imitation and response-class formation in children with autism. Journal of Applied Behavior Analysis, 27, 685–697.

Young, R. K., West, R. P., Smith, D. J., & Morgan, D. P. (1991). Teaching self-management strategies to adolescents. Longmont, CO: Sopris West.

Zane, T. (2005). Fads in special education. In J. W. Jacobson, R. M. Foxx, & J. A. Mulick (Eds.), Controversial therapies in developmental disabilities: Fads, fashion, and science in professional practice (pp. 175–191). Hillsdale, NJ: Erlbaum.

Zanolli, K., & Daggett, J. (1998). The effects of reinforcement rate on the spontaneous social initiations of socially withdrawn preschoolers. Journal of Applied Behavior Analysis, 31, 117–125.

Zarcone, J. R., Rodgers, T. A., & Iwata, B. A. (1991). Reliability analysis of the Motivation Assessment Scale: A failure to replicate. Research in Developmental Disabilities, 12, 349–360.

Zhou, L., Goff, G. A., & Iwata, B. A. (2000). Effects of increased response effort on self-injury and object manipulation as competing responses. Journal of Applied Behavior Analysis, 33, 29–40.

Zhou, L., Iwata, B. A., & Shore, B. A. (2002). Reinforcing efficacy of food on performance during pre- and postmeal sessions. Journal of Applied Behavior Analysis, 35, 411–414.

Zhou, L., Iwata, B. A., Goff, G. A., & Shore, B. A. (2001). Longitudinal analysis of leisure-item preferences. Journal of Applied Behavior Analysis, 34, 179–184.

Zimmerman, J., & Ferster, C. B. (1962). Intermittent punishment of S∆ responding in matching-to-sample. Journal of the Experimental Analysis of Behavior, 6, 349–356.

Zuriff, G. E. (1985). Behaviorism: A conceptual reconstruction. New York: Columbia University Press.


Index

Page references followed by "f" indicate illustrated figures or photographs; followed by "t" indicates a table.

220 questions, 258

AA-B design, 1-2, 15, 20, 178, 191, 196-200, 205-206,

215, 218, 616A-B-A design, 1, 15, 196-197, 200ABAB design, 197Abbreviations, 74, 168Abilities, 29, 69, 213, 287, 296, 453, 537, 550, 584,

590, 617, 619, 650defined, 69

Abscissa, 149, 154-155Absences, 505Absolute value, 170Abuse, 342, 379, 478, 492, 580, 678-679, 716, 722

alcohol, 342, 679child, 379, 678, 716drug, 679, 722emotional, 478, 716of children, 678-679, 722reporting, 678-679types of, 342

Academic achievement, 380, 591, 670, 691, 697-699,709, 721

Academic development, 649Academic performance, 87, 217, 385, 559, 569, 580,

687, 691, 706, 716-717Academic skills, 69, 243, 315, 322, 632Academics, 537acceptance, 78, 140, 258, 318, 375-376, 378, 385,

389, 445, 472, 689ACCESS, 4-5, 10, 12, 15, 18, 77-79, 82, 86, 90, 100,

116, 118, 125, 148, 151, 170, 186, 201,213-214, 226, 240, 249, 253, 257, 259, 273,291-293, 297, 306, 310, 313, 318, 337, 353,356, 448, 484, 487, 489, 500, 502-504, 511,517-518, 525-527, 529-531, 533, 539, 556,559, 568, 571, 576, 581, 597, 603, 610-611,640, 644, 678, 681, 693, 710, 718

Accessibility, 33Accountability, 38, 73, 75, 85, 90, 681, 700

assessment for, 75of teachers, 681

Accreditation, 29, 671, 686, 712Accuracy, 1, 3, 19-20, 81, 93, 97, 101, 112, 115, 119,

122-125, 128-133, 140-144, 150-151, 156,183, 192-193, 209-211, 230-231, 243, 254,256-257, 259, 261, 267-268, 306, 436, 440,442, 519, 533, 562, 570, 590, 601, 606-607,620, 625, 643-644, 650-651, 662, 670,672-673, 696, 703-704, 709-710, 712, 714,724

Achieve, 5, 19, 23, 37-38, 63, 77, 81, 83, 93-95, 97,102, 120, 140, 161, 179, 181, 191, 194, 238,242, 246, 270, 272, 379, 437, 447, 460, 466,477, 495, 497, 553, 567, 569, 577, 580,586-587, 589, 598, 618-620, 627, 632, 634,636, 649, 657-658, 661-663, 670, 681, 683

Achievement, 69, 73, 89-90, 104, 125, 209, 217, 380,455, 463, 568-569, 580, 591, 670, 691,697-699, 709-710, 714, 716, 721, 724, 727

academic, 69, 217, 380, 569, 580, 591, 670, 691,697-699, 709-710, 716, 721, 724, 727

and homework, 463in mathematics, 73influences on, 714tests, 69, 73, 89, 209

Acid, 37Acquisition, 5, 10, 12, 32, 42, 63, 78, 93, 97-98, 104,

109, 123, 157, 183, 190, 211, 213, 215, 229,231-232, 253, 286-287, 306, 317-318, 320,

322-323, 412, 417, 419, 425, 427, 429, 432,498, 536-537, 540-541, 546, 550, 552-554,626-627, 632, 636, 653, 659, 661-662, 680,685, 688-692, 695, 698, 701-703, 705, 708,713, 717, 720, 723-728

language, 32, 42, 78, 213, 215, 287, 498, 536-537,540-541, 546, 550, 552-554, 626,688-691, 703, 705, 708, 723-725, 728

vocabulary, 78, 726ACT, 26-27, 32, 37, 61, 71, 93-94, 290, 312, 342, 345,

386, 456, 469, 559, 562, 573, 575, 585, 591,605-606, 631, 650, 668, 679-683, 692, 697,699-700, 703-705, 709, 718, 720

Acting out, 57, 70-71, 572, 726actions, 25, 32, 46, 51, 59, 74, 115, 286, 315, 346,

356, 372, 429, 539-541, 552, 554-555, 572,585, 664, 676, 678, 680

Activation, 49, 447Active responding, 686Activities, 1-2, 5, 18, 25, 33, 53, 73, 76, 78, 80, 96,

108, 113-114, 128, 140-141, 203, 227-228,250, 259, 268, 291-293, 295-299, 313,325-326, 338, 353, 360-361, 368, 376, 378,380, 382, 431, 437-438, 443, 482, 491-492,500, 507, 512, 514, 516-522, 524-529, 531,537, 541, 555, 564-565, 568, 570-571,574-575, 581, 597-598, 602, 604, 608,611-612, 642, 649, 651, 654-657, 664, 674,680, 690, 693-695, 697, 702, 705, 711, 728

developmental, 76, 78, 295, 353, 382, 431,437-438, 491, 524, 529, 555, 602,656-657, 694-695, 697, 702, 705, 711,728

follow-up, 228, 325, 657, 697instructional, 5, 78, 80, 108, 250, 378, 443, 492,

568, 642, 649, 654, 690, 693, 697, 728learning, 18, 73, 78, 80, 250, 313, 325-326, 361,

431, 537, 541, 568, 570, 608, 649, 651,654, 656, 693-694, 702, 705, 711, 728

ongoing, 5, 108, 250, 259, 376, 378, 380, 482, 524,654, 664

planning, 228, 250, 259, 268sequenced, 1, 73space for, 680varying, 18, 113, 642

Activity level, 727Activity reinforcers, 291Activity Schedules for Children with Autism, 710Actors, 65, 640Adaptation, 53, 83, 237, 254, 286, 299, 369, 409, 688Adaptations, 52, 636Adaptive behavior, 72, 76, 80, 90, 238, 388, 504, 511,

694, 706, 714measurement of, 238

Adaptive behaviors, 79-80, 427, 482, 499, 700Addiction, 64, 594, 649Addition, 24, 31-34, 46, 51, 56, 59, 63, 65, 69-71, 78,

80, 94, 97, 103, 119, 132, 137, 141-142, 148,150, 156-157, 160, 168, 170, 185-186, 188,195, 197, 205-206, 213, 221, 231-232, 242,249-250, 263, 281, 288, 304-305, 307,309-310, 333, 357, 360, 364, 378, 380-381,386, 392, 394-395, 397, 405, 412, 418,422-424, 443, 448, 460, 462, 474, 479, 482,484, 490, 492, 499, 504, 507, 513, 518-520,523, 526, 528, 531, 533, 537-538, 540, 549,551-552, 556, 559, 593, 603, 605, 607-608,612, 616-617, 625, 627, 632, 639, 646, 652,659, 668, 674, 681-682, 703

implied, 46, 56, 186properties of, 46, 148, 309, 537, 540, 552, 556

Addition and subtraction, 422-424, 605, 627, 703Adjectives, 32, 133, 537, 540, 552, 554, 701Adjustment, 9, 74, 76, 90, 331, 604, 708, 726Adjustments, 293, 333, 367, 381, 450, 572Administration, 83, 90, 208, 217, 265, 575, 721

of assessment, 83

Administrative tasks, 598Administrators, 65, 85, 90, 181, 206, 215, 217, 224,

233, 238, 261, 381, 670, 678documentation, 678educational, 206, 217, 670school, 217, 233, 261, 670, 678

Adolescents, 110, 263, 356, 439, 443, 445, 671, 689,702, 704, 710, 712, 715, 719, 722, 724, 726,728

Adults, 34, 79, 87, 99, 108, 153, 233, 258-259, 266,284, 287, 293, 300-302, 317, 336-337, 347,351-354, 356, 360, 418, 437-438, 451-452,501, 525, 527, 540, 555, 564, 566, 575, 590,613, 625, 631-632, 636, 641, 654, 671, 680,685, 688, 691-694, 714, 716-717, 719, 724

Adverbs, 537, 540, 552, 554, 701Advertisements, 23-24, 95Advertising, 669advice, 27, 70, 160, 283, 293, 598, 617, 643Advocacy, 371

effective, 371Affect, 2, 36, 46, 54, 66, 72, 75, 118-119, 128, 183,

216, 262, 269, 278, 285, 299, 303, 341, 364,368, 388-389, 397, 416, 425, 450, 475, 477,480, 499, 520, 549, 585-586, 594, 607, 612,620, 627, 681, 683

Affirmation, 1, 178, 190-192, 195Age, 50, 53, 77, 79-80, 90, 108, 160, 253, 265, 271,

295, 307, 437, 463, 472, 485, 550, 568, 588,593, 611, 632, 636, 654, 666, 687, 719, 721

mental, 79-80, 108, 636, 687, 719, 721motivation and, 79

Agency, 345, 363, 370, 373, 379, 571, 665, 681Agendas, 342-343, 550Agents, 575, 619, 697, 722Aggression, 14, 64, 158, 259, 272, 287, 303, 316, 320,

334, 340, 342, 354-355, 359-360, 365, 367,370, 388, 395, 410, 429, 471-472, 475-480,482-484, 486, 489, 492, 501, 504, 506, 509,511, 513, 517-518, 524-526, 531-534, 537,551, 599, 685, 697, 702, 707, 714, 716,724-725

forms of, 287, 320, 355, 388, 478-479, 482, 533aggressive behavior, 64, 254, 272, 320, 346, 355, 359,

395, 400, 403, 410, 489, 708, 715, 720Agreement, 3, 10-11, 16, 19, 32, 41, 85, 117, 119, 121,

122, 129, 131-144, 256-257, 268, 367,519-520, 602, 606, 657, 678, 691, 700, 715,717

agreements, 119, 135, 138, 141, 559Aides, 71, 261AIDS, 5, 24, 42, 148, 341, 440-441, 574, 652, 679,

694, 696, 724airplanes, 53, 63Alcohol, 24, 342, 355, 679

function of, 24, 355Alcoholism, 649Alert, 131, 164, 303, 367, 491Algebra, 64, 307, 417, 567, 659Allergies, 88Alternative communication, 151Alternatives, 186, 337, 371, 515, 520, 666, 678, 680,

707, 714Ambiguity, 363American Association on Mental Retardation, 676,

687, 701American Educational Research Association, 383American Guidance Service, 694, 710American Psychological Association, 28, 370, 471,

667-669, 671, 685-686, 689American Sign Language, 214Analysis, 1-5, 7-10, 13, 18, 20, 22-43, 44-49, 55,

57-59, 62-66, 68-69, 75, 80, 83, 85, 89-90,92-93, 96-97, 100-101, 104, 110, 115-116,118-121, 122-125, 127-128, 131, 133, 135,140-143, 146, 148-159, 162-164, 168-170,174-177, 178-188, 193-194, 196-202,

729

205-208, 212-213, 215-218, 220-221,224-226, 228-229, 231-244, 245-274,276-277, 280, 282-283, 285-289, 293-294,301-302, 304-305, 308, 311, 317-322, 324,327-328, 334-335, 337-338, 340-343,344-346, 348, 352-354, 357-359, 363, 365,369-371, 373, 374, 377, 382-384, 390-391,395, 398, 400-405, 407, 408, 411, 415, 426,434-435, 437-446, 449-450, 452-453, 454,457, 459, 464-465, 467, 468-470, 472-475,478, 481, 483-484, 486-490, 493, 495,498-499, 501-502, 507, 510, 514-516,518-519, 521-523, 525-526, 528-530,532-534, 536-539, 543-544, 547-551, 554,556-557, 558, 561, 563, 579, 581, 583-586,589, 594, 599-600, 607, 612, 615-617,622-626, 628, 630, 632-633, 636-638, 645,653, 655, 657, 662, 664-675, 677-678, 680,682-684, 685-728

story, 150, 152, 212, 525, 687, 689, 720textual, 13, 18, 536, 539, 543-544, 548, 556-557,

719Anecdotal reports, 320, 362, 457Anger, 42, 395, 485, 697, 720

management, 42Angles, 643animals, 42, 50, 110, 279, 355, 455, 460, 713Animation, 465Anonymity, 673Anorexia, 722Anorexia nervosa, 722antecedent variables, 48, 393, 405, 512, 533, 542, 553Antecedent-based interventions, 498Anthropology, 709Antisocial behavior, 356, 537anxiety, 17, 609, 614, 620, 697

management programs, 609, 620APA Ethical Principles of Psychologists, 681

and Code of Conduct, 681Applicants, 413Application, 11, 16, 27-28, 31, 34, 37, 40-41, 43,

57-58, 63, 82, 85-86, 93, 188, 192-193, 200,204, 211, 215, 217, 221, 233, 235, 237,249-250, 255-256, 271, 273, 283, 321, 333,335, 340, 345, 347, 351, 366, 368, 370-371,373, 376, 382, 389, 421, 439, 466, 469,476-477, 479, 491, 493, 502, 506, 524, 550,559, 577-578, 580, 586, 607, 613, 619-620,669, 682-683, 686-687, 695, 699-700,702-704, 707-708, 710, 718, 720-723,725-726, 728

Applications, 34, 36-37, 52, 133, 170, 224, 242, 258,282, 305, 317, 322, 341-343, 354, 363, 371,373, 379, 381, 389, 416, 455, 465, 467,469-470, 476, 479-480, 482-483, 488, 503,515, 538, 550, 557, 559, 561, 575, 587, 598,603, 619, 636, 646, 688, 695-696, 698, 701,704, 707, 709, 714, 716, 723-724, 727-728

Applied behavior analysis, 1-3, 5, 9-10, 22-43, 44-45,47, 55, 59, 63, 65-66, 68-69, 85, 89-90,92-93, 96, 101, 104, 110, 115, 118, 120,122-123, 125, 127-128, 133, 135, 140-143,146, 148-157, 159, 164, 169-170, 174-176,178-184, 187-188, 193-194, 196-197,199-200, 202, 205-208, 212-213, 215, 217,220-221, 224-226, 228, 231, 234, 236,239-241, 245-274, 276-277, 282, 285,293-294, 301-302, 305, 311, 317-322, 324,328, 334-335, 337-338, 340-343, 344, 352,354, 358-359, 363, 365, 370-371, 373, 374,377, 383-384, 390-391, 400-402, 408, 411,426, 434, 440-441, 444, 446, 454, 464,468-470, 472-473, 475, 481, 484, 486-490,493, 495, 498-499, 501-502, 507, 510,515-516, 536-537, 558, 563, 579, 583,585-586, 589, 599-600, 615, 622-624, 626,628, 637-638, 645, 653, 655, 657, 664-666,668, 671, 674-675, 682-684, 685-728

data collection, 65, 110, 115, 155-157, 257, 719employment, 691preference assessments, 154, 301, 691, 693, 701,

708, 714, 719reinforcement, 1-3, 5, 9-10, 28, 34, 40, 44, 59, 66,

69, 90, 155-157, 159, 183, 187, 200, 202,205-206, 212-213, 221, 226, 239-240,248, 250-251, 253-254, 257, 259-260,262-263, 270, 276-277, 282, 285,293-294, 301-302, 305, 311, 317-322,324, 328, 334-335, 337-338, 340-343,

352, 354, 358-359, 363, 371, 373, 374,377, 383, 391, 400-402, 411, 446, 454,464, 469-470, 472-473, 475, 481, 484,486-490, 493, 495, 498-499, 501-502,507, 515-516, 536, 585-586, 589, 622,624, 626, 628, 645, 653, 655, 666, 674,685-728

stimulus control, 2, 44, 55, 66, 408, 411, 499, 586,689, 694, 705, 708-709, 711, 717, 720,724, 726-727

target behavior, 3, 5, 9-10, 24, 26, 68-69, 85,89-90, 101, 110, 118, 120, 123, 125,127-128, 133, 135, 140, 143, 149-151,164, 176, 180, 182, 184, 187, 193-194,197, 205-206, 208, 221, 234, 239-240,246, 253-255, 258, 263, 267-268,270-271, 273-274, 301, 305, 359, 470,487, 493, 516, 585-586, 589, 599, 615,623-624, 626, 628, 645, 653

Applied Behavior Analysis (ABA), 2, 22, 27, 34, 42-43Applied behavioral analysis, 457, 696, 716-718Appreciation, 65, 538, 652Approaches, 32, 34, 37, 70, 79, 89, 91, 131, 140, 208,

217, 258, 295, 298-299, 331, 368-369, 373,385, 389, 402, 435, 511-512, 539, 559, 566,595, 607, 632, 634, 665, 691, 696, 698, 712

brief, 299, 402, 511, 698Appropriate education, 83Appropriateness, 93, 119, 205-206, 216, 218, 233-234,

237-238, 242, 244, 258, 270, 274Area, 3, 10, 22, 25, 44-45, 68, 72, 75, 86, 90, 92,

113-114, 122, 146, 149, 164, 176, 178, 181,191-193, 196, 217, 220, 224, 229, 245, 274,276, 311, 324, 330, 337-338, 342-343, 344,359, 374-375, 378-379, 390, 396, 400, 404,408, 414, 426, 434, 449, 454, 458, 468, 481,498-499, 510, 530, 536, 546, 550, 558, 583,590, 601, 622, 650, 660, 664-665, 671, 673,681-683

Arguments, 58, 183Arizona, 34Art, 40, 42, 227, 296, 462, 465, 716, 720

defining, 40music, 296

Art activities, 227Arthritis, 418Articles, 27, 31, 36, 227, 256, 271, 499, 537Articulation, 531, 533, 537, 553Artifacts, 115, 121, 126, 128, 143, 240Artist, 29Arts, 414, 478, 561, 577, 608, 649-650, 656Arts specialists, 478Assertiveness, 589Assessing, 18, 30, 72, 75, 79, 90-91, 102, 117,

122-144, 151, 180, 183-185, 213, 219, 238,250, 255-259, 261, 263, 274, 278, 299, 301,411, 429, 433, 435, 438, 441, 452-453, 587,617, 625, 630, 635, 656, 659, 662, 672-673,692-694, 706, 710, 714, 718, 723

Assessment, 3, 5-6, 8-11, 14, 16-17, 32-33, 40, 58,68-75, 77, 79, 81-83, 89-90, 94, 97, 99, 101,120, 129, 131-134, 137, 139, 142, 144, 149,152, 155, 159, 200, 211, 213-215, 226, 229,232, 245, 259, 263, 266-267, 271, 274, 276,295-301, 307, 310, 320, 333, 336-337, 340,342, 353, 363-364, 368-369, 382, 411, 418,430, 437, 440-441, 447-448, 452-453,470-472, 484, 488, 503, 507, 510-534,550-551, 556, 562, 572-573, 577, 598-599,601, 604, 608, 619, 635, 651, 658-660,665-666, 669-672, 675, 677, 679, 681, 683,685-686, 688-690, 692, 694-701, 704-709,711, 713-720, 722-724, 727-728

Assessment:, 69, 132, 298, 688, 695, 701, 704, 708alternate, 713alternative, 5-6, 8, 11, 159, 310, 320, 340, 353,

368-369, 448, 453, 484, 507, 511-513,516, 527, 530, 533-534, 677, 685, 696,699, 704, 714, 716, 724

authentic, 263authentic assessment, 263behavioral, 3, 8-11, 14, 17, 32, 58, 68-71, 73-74,

77, 79, 81-82, 89-90, 94, 99, 101, 129,131-134, 137, 139, 142, 144, 149, 152,155, 159, 215, 274, 307, 320, 336-337,364, 437, 441, 448, 452, 470-471, 510,514-516, 519-521, 523-524, 529, 550,604, 619, 658, 669, 671-672, 675, 681,685-686, 688-690, 692, 694-701,

704-709, 711, 713-720, 722-724, 727-728case example, 534, 708, 718checklists, 3, 9, 69-70, 72-73, 89, 519, 533, 651community, 3, 40, 70, 73, 99, 215, 507, 659,

665-666, 685, 689, 692, 694, 698-699,707, 709, 711, 714, 716

components of, 159, 211, 437, 441, 452, 556, 577,651

concerns about, 32, 681continuous, 5, 73-74, 99, 226, 307, 336, 503, 507,

516-519, 522, 526, 533contract, 3, 16, 562criterion-referenced, 69, 73, 670curriculum-based, 73, 670cycle, 159, 635, 711day-to-day, 75, 299, 665decision making, 94, 562, 666, 677, 683definitions, 89-90, 129, 132, 134, 142, 448, 470,

522, 700demand for, 471descriptive, 5, 58, 73, 75, 81, 90, 307, 472, 510,

514, 516, 518-519, 521, 533-534, 573,686, 689, 695, 706-707, 711, 719, 724

direct, 5-6, 9-10, 14, 69-70, 73, 75, 81-82, 89-90,94, 101, 131, 142, 155, 245, 263, 266,274, 295, 300, 307, 310, 333, 342, 418,516, 519-521, 529, 659, 683, 688, 695,704, 706, 709, 722, 724

direct observation, 5, 70, 73, 90, 333, 516,519-520, 704

early childhood, 686, 697, 723, 727ecological, 6, 68, 70, 75, 90, 368, 701essays, 133, 695family, 69-71, 90, 133, 296, 598, 660, 666, 670,

677, 683, 714-715formal, 5-6, 8-9, 17, 70, 430, 556, 666, 671formative, 94, 120, 259framework for, 301functional, 5, 8-9, 14, 17, 32-33, 40, 58, 69-72, 74,

77, 81, 120, 152, 159, 200, 215, 226,263, 266-267, 271, 274, 320, 353, 363,382, 453, 470-472, 484, 503, 507,510-534, 551, 556, 562, 573, 601, 604,608, 669, 672, 681, 686, 688-690, 692,694-696, 699, 704-709, 714-717,719-720, 722-723, 727

functional behavioral, 716group, 5, 8-10, 17, 73, 81, 263, 267, 299, 336-337,

488, 516, 531-533, 562, 572-573, 577,608, 619, 635, 651, 670, 688-689,698-699, 701, 704-708, 716, 718,727-728

guidelines for, 310, 363, 484, 573, 671-672, 686,688-689, 713, 728

health, 82, 470, 665, 681, 700, 713, 717, 727HELPING, 81-82, 651, 666, 677, 692, 701history, 6, 9, 14, 17, 58, 75, 90, 363, 411, 471, 520,

524, 550, 599, 635, 666, 695, 704,715-716, 718

in grades, 296, 713informal, 70instrument, 33, 131, 259, 274, 707integrated, 296, 608intelligence tests, 73interpretation of, 101, 129, 267, 696, 715inventories, 73, 573lifestyle, 68, 619math, 73-74, 81, 97, 101, 131, 267, 297, 340, 382,

524-525, 562, 577, 604, 608, 635, 697,706-707, 711

mathematics, 73, 97, 701methods, 3, 6, 14, 32, 40, 58, 69-70, 75, 89-90, 94,

120, 129, 134, 139, 144, 152, 245, 259,263, 266, 274, 295-296, 298-301, 310,363, 437, 441, 452, 507, 510, 514, 516,519-520, 533, 556, 562, 651, 660,665-666, 670, 685, 692, 696-697, 704,708, 714-715, 717, 723-724, 727-728

methods of, 40, 152, 298, 514, 516, 520, 651, 697monitoring, 16, 69, 71, 75, 89, 94, 245, 524, 526,

528, 562, 598-599, 601, 604, 608, 619,658, 683, 685, 689, 692, 699-700, 705,707-708, 713, 719, 724, 727-728

monitoring and, 94, 601, 608, 689, 724need for, 133, 271, 658-659, 677needs assessment surveys, 71, 90norm-referenced, 73, 670objective, 6, 32-33, 70, 83, 89, 129, 133, 519-520,

659, 670

730

objectives and, 670observation, 3, 5-6, 8, 10, 14, 16, 68, 70, 73-75, 90,

99, 101, 120, 129, 131, 133-134, 144,259, 295, 297, 300, 333, 364, 488, 516,519-520, 529, 532-533, 598-599, 601,659, 695-696, 700, 704, 711, 719,723-724

of motor skills, 704of problem behavior, 5, 8, 152, 320, 363, 471,

513-515, 520-524, 526, 528-529, 532,534, 700, 704, 711, 717, 724

of social skills, 694of spelling, 722of writing, 598of young children, 688, 719planning and, 89, 245, 259, 263, 266-267, 271, 274postassessment, 430preference, 17, 155, 213, 245, 276, 295-301, 310,

337, 364, 382, 484, 530, 562, 665, 681,689-690, 694, 698-699, 701, 708,714-715, 718-719, 727

principles, 3, 33, 40, 58, 266, 276, 368, 411, 524,658, 666, 669, 672, 681, 683, 685-686,688, 695, 704-705, 709, 711, 713, 717,719, 722, 724, 727

problem, 5-6, 8-9, 69-71, 73, 75, 77, 81-82, 89-90,97, 99, 131, 152, 259, 266, 320, 340,342, 363-364, 369, 411, 470-472, 484,488, 503, 507, 511-516, 518-534, 551,562, 577, 608, 681, 686, 695-697, 700,704, 706-707, 711, 714, 717-718, 722,724

procedures, 3, 9, 14, 16-17, 33, 40, 58, 71, 73, 75,79, 83, 89-90, 94, 99, 120, 129, 131-132,134, 142, 211, 213-215, 259, 266-267,271, 274, 276, 295, 297, 299, 307, 310,320, 333, 363-364, 368-369, 382, 411,418, 430, 437, 447-448, 452-453,470-471, 488, 503, 507, 513, 516, 519,521, 524, 530, 534, 551, 556, 572-573,601, 635, 665-666, 669-670, 672, 675,677, 683, 686, 688-689, 692, 696-699,704, 706, 708, 711, 714-715, 717-718,720, 723-724, 727-728

protecting, 677purpose of, 69-70, 75, 213, 382, 437, 452, 521,

534, 666purposes, 8, 69, 101, 131, 133-134, 142, 511, 513rating scales, 9, 72, 519-521, 533, 688reasons for, 132-133reliability, 14, 17, 129, 131-133, 142, 144, 266, 271,

274, 520, 688-689, 692, 700, 704, 708,714, 720, 722, 728

reliability of, 17, 129, 132-133, 142, 266, 274, 520,708, 720

risk, 368-369, 516, 562, 573, 677, 689, 704, 728science education, 724scoring, 73, 129, 132-133self-assessment, 16, 418, 601, 608, 651service, 671, 683, 689, 694social studies, 267, 297, 685software, 723special education, 73-75, 83, 267, 320, 526, 573,

604, 608, 677, 686, 697-698, 701,708-709, 711, 715-716, 719, 722, 724,727-728

standardized assessments, 551standardized testing, 551standardized tests, 69, 73steps in, 8, 229, 437, 440-441student achievement, 670summative, 94, 120, 134, 670supportive, 77, 82, 90, 706task analysis, 8, 68, 229, 259, 437, 440-441,

452-453, 556, 665technology, 32, 40, 75, 94, 142, 263, 271, 274, 659,

685, 689, 695-696, 698-699, 704,706-707, 714-715, 719, 722-724

technology and, 695-696, 704, 723-724threat, 82, 215, 484time sampling, 10-11, 120, 137, 516, 599, 716, 719,

724transition, 333, 665validity of, 17, 142, 215, 259, 263, 266, 271, 274,

699vocational, 320, 337, 452, 670, 688, 699, 719work samples, 651

Assessment methods, 69-70, 75, 89-90, 295, 300-301,363, 519-520, 533, 714

experimental, 301, 363, 533, 714functional assessment, 69, 519-520, 533, 714interviews, 69-70, 89, 519-520, 533

Assessments, 24, 69, 73, 75, 131-134, 139-142, 144,154, 202, 211, 213, 254, 258, 262, 268, 274,288, 295-296, 299-301, 306, 310, 316,336-337, 353, 364, 372, 385, 469, 476, 479,484, 499, 503, 507-508, 516, 518-522,524-525, 527, 531, 533, 550-551, 562, 616,666, 679, 691, 693, 701, 708, 714, 717,719-720, 726

classroom, 73, 134, 254, 258, 268, 364, 385, 479,484, 508, 516, 524-525, 527, 531, 691,693, 701, 708, 714, 719, 726

comprehensive, 69, 551of attention, 522, 525, 531, 708of language, 550of sleep, 520of students, 693, 701outcome, 262, 469, 516, 524quality, 131-134, 139-142, 144, 274, 306, 310, 337,

364, 372, 476, 551, 720Assets, 3, 69Assignments, 110, 116, 134, 183-184, 278, 280, 291,

294, 385, 417, 561, 576, 599, 604, 606-607,623, 625, 650, 694, 701, 724

extra, 184, 623Assistance, 70-71, 102, 259, 316, 320-321, 418-420,

431, 437, 442, 452, 493, 505, 508, 514, 518,526, 562, 589, 591, 604, 608, 625, 629, 636,639, 651, 653, 656-657, 659-660, 691, 697,728

cards, 639, 653, 657, 691, 697resource room, 259, 653

Assistive devices, 507, 694Assistive technology, 593Association, 28-29, 32, 35, 48, 288, 368-370, 383,

409, 447, 471, 667-671, 674, 676, 678,685-687, 689-692, 695-699, 701-702, 708,710, 712-713, 718, 721, 726

assumptions, 22-23, 25-26, 33, 42, 94, 148, 174,178-195, 235, 243, 249, 569, 624

conclusions, 148, 174, 186, 624error, 235

atmosphere, 597, 640Attending, 34, 70-71, 74, 203, 206, 262, 346, 417-418,

429-430, 444, 474, 483, 525, 571, 673-674,690

Attending skills, 418, 429-430Attention, 8, 13, 24, 34, 55-56, 60, 69-71, 73-74, 82,

84, 90, 101, 103, 113, 117, 128, 150-154,156, 168-169, 183-184, 187, 199, 202-207,212, 215, 227, 254, 259, 262, 264, 266,277-278, 280, 284-290, 293-294, 296, 301,303, 307-308, 310, 316, 318, 320, 323,356-357, 361-362, 367-368, 376, 382-383,388-389, 396, 398-399, 417, 427, 431-433,459-460, 464, 469-472, 477-479, 483,489-490, 494, 501, 507, 511-512, 514-519,521-529, 531-533, 538-541, 552, 562, 564,575-576, 578, 580, 596, 600, 604, 606,608-609, 613, 623, 636, 649-653, 655, 670,674, 685, 687, 689, 692, 698-699, 704-705,708-709, 712-714, 716-718, 724

and learning, 713, 716from teachers, 187negative, 13, 56, 69-71, 82, 84, 90, 117, 128, 262,

278, 316, 318, 320, 323, 356, 362,367-368, 382-383, 389, 469-472, 477,479, 483, 501, 511-512, 515-516, 533,538, 552, 576, 608-609, 704-705, 709,718

positive, 13, 56, 69, 82, 103, 117, 202, 254, 262,266, 277-278, 280, 284-290, 293-294,296, 301, 303, 307-308, 310, 316, 318,320, 323, 356, 361-362, 368, 376, 382,388-389, 459-460, 470, 472, 477, 479,483, 490, 501, 511-512, 515, 523, 533,564, 576, 578, 580, 596, 608-609,649-652, 685, 687, 689, 698, 705, 708,716, 718, 724

problems of, 202selective, 522, 685student, 34, 55, 73-74, 82, 84, 101, 103, 113, 117,

128, 156, 169, 184, 199, 203, 207, 212,227, 254, 259, 262, 277, 280, 293, 296,301, 318, 323, 356-357, 367, 376, 382,388-389, 417, 427, 431-433, 460, 469,477-478, 483, 489, 494, 507, 511, 516,

518-519, 521, 524-525, 562, 576, 580,600, 604, 606, 608, 623, 636, 649-653,655, 670, 685, 698, 705, 712-714

sustained, 82, 264, 303, 687, 689theories, 712

Attention-deficit/hyperactivity disorder (ADHD), 266,296, 562

Attenuation, 521, 707Attitudes, 23-25, 27, 39-40, 42, 179, 727

teacher, 23-24, 40, 727Attributions, 185Audience, 2, 10, 35, 79, 180, 186, 249, 409, 536, 539,

543, 548-549, 557, 617Audio, 110, 112, 116, 119, 130, 132, 654, 690, 711,

718digital, 116, 119

Audio recordings, 116, 718Audiotape, 104, 116, 119, 130, 132, 134, 366, 443Audiotapes, 117-118, 385, 566Augmentative communication, 552Ault, 73, 442, 516, 689, 728Austin, 42, 451-452, 456, 686-688, 695, 699, 701,

704, 706, 714, 717, 720, 723, 725, 727Authentic assessment, 263AUTHOR, 35, 148, 471, 635, 650, 669, 685-686, 688,

696, 709, 713, 720, 725Authority, 23, 25, 70, 89, 234, 675-676

competent, 89legal, 675-676

Authors, 20, 27, 34, 36, 45, 58, 60, 76, 110, 139, 148, 183, 189, 197, 203, 208, 224, 241, 256, 266, 268, 271-272, 278, 282-283, 289, 308, 319-321, 347-349, 358, 368, 413, 450, 476, 513, 561, 578, 596, 603, 613, 633-634, 648, 654, 678, 691, 720
Autism, 34, 40, 79, 94, 141, 151-152, 213-215, 225, 227-228, 288, 291, 307, 337, 340, 365, 418, 420-421, 427, 429, 445, 471, 477, 486-487, 489, 502, 529, 537, 546, 550-553, 555, 570, 590, 602, 623, 628, 632, 636, 644, 646, 650, 654, 667, 682, 685-686, 691, 694, 697-698, 701, 705-707, 709-713, 716-717, 719-720, 723-724, 727-728
Availability, 32, 119, 253, 281, 309, 318, 326, 332-333, 350, 357, 389, 393, 397, 401, 403-407, 410, 497, 499-500, 522, 559, 656
Average, 10-11, 19-20, 52, 82, 112, 156, 169-171, 174, 176-177, 181, 200, 203-204, 247, 258, 264, 273, 277, 293, 305-306, 327, 329-330, 332-333, 340, 345-346, 384, 394, 488-489, 493-495, 503, 520, 561, 578, 580, 591, 598-599, 601, 603, 606, 693
Averages, 578, 649
Aversion therapy, 710
Aversive conditioning, 713
Aversive consequences, 345, 616
Aversive stimulus, 2, 8, 14, 44, 56-57, 59, 63, 66, 315, 338, 347, 388, 400, 471, 479, 540
Avoiding, 15, 240, 346, 388, 588, 672-673, 678
Awards, 328
Awareness, 2, 7, 9, 12, 56, 78, 82, 130, 143, 333, 614, 624, 674, 717
  self, 9, 614, 674, 717
Awareness training, 9, 614

B
B-A-B design, 1-2, 15, 20, 196-200, 205-206, 215, 218
Babbling, 52, 287, 546
Back, 34, 55, 74, 87, 124, 160, 197, 216, 219, 229, 277, 293, 345, 347-348, 360, 376, 379, 384, 406, 420, 428, 432-433, 437, 461-462, 472, 485, 527-529, 564, 585, 597, 599, 603, 611, 616, 629, 671, 681, 720
Background, 45, 64, 128, 180, 253, 271, 458, 598, 666
Background knowledge, 180
Backward chaining, 2-3, 59, 434-436, 442-446, 448, 450, 452-453, 699, 722
Balance, 14, 47, 49, 78, 217, 241, 296, 299, 368, 574, 666-667
Baltimore, 685, 695-697, 702, 704, 706, 711, 718, 723-724
Bar graphs, 148, 152-154, 176
Baseline data, 17, 19, 187-188, 190, 195, 200, 212, 221, 223, 229-231, 233-235, 294, 321, 376, 384, 387, 443, 456, 474, 491, 494, 497, 503, 530, 561, 568, 570, 615
  as, 17, 19, 187-188, 190, 195, 200, 212, 221, 223, 229, 231, 233-235, 376, 384, 387, 443, 456, 474, 491, 494, 497, 503, 530, 561, 568, 570, 615
Baseline data collection, 190, 195
Basic, 1, 3-4, 7, 11-12, 16, 18, 23-27, 30-31, 34, 36-37, 40-43, 44-66, 73, 75, 85, 89, 93-95, 98-100, 102-103, 110, 120, 123, 137, 149, 160-161, 169, 174-175, 178-195, 215, 221, 224, 229, 243, 246, 258, 272, 278, 289, 291, 296, 298, 307, 312, 317-318, 322, 326, 329, 333-334, 336, 339-343, 345, 348, 350-351, 355, 364, 366-367, 370-373, 375, 378, 389, 391, 414-416, 425, 429, 465, 469, 473-474, 476, 482, 487-488, 496, 499, 508, 514, 537, 550-551, 553, 556, 570-571, 582, 587, 623, 630-631, 640, 644, 654, 661, 666, 669, 674, 683, 688, 692, 696-697, 702, 707-708, 715, 720, 723, 726
Basic research, 27, 36, 40-41, 43, 317, 322, 326, 329, 340, 342-343, 348, 351, 367, 371, 473-474, 476, 692

Basic skills, 73, 702
Beginning reading, 704
Beginning teachers, 294, 699
Behavior, 1-20, 22-43, 44-66, 68-91, 92-104, 108-121, 122-131, 133-144, 146-177, 178-195, 196-213, 215-219, 220-229, 231, 233-244, 245-274, 276-283, 285-298, 300-310, 311-323, 324-326, 328-343, 344-373, 374-389, 390-407, 408-414, 416-421, 424-425, 426-432, 434-453, 454-467, 468-480, 481-497, 498-509, 510-534, 536-557, 558-564, 566-570, 572-582, 583-620, 622-663, 664-684, 685-728
  adaptive, 38, 52, 56, 72, 76-77, 79-80, 89-90, 238, 286, 308-309, 315, 362, 388, 427, 437, 482, 499, 504, 511, 524, 534, 681, 694, 700, 706, 713-714, 717
  aggressive, 46, 64, 71, 83, 207, 254, 272, 320, 346, 355, 359, 362, 372, 387-388, 395, 400, 402-403, 410, 457, 478-479, 489, 500-501, 503, 513, 613, 688, 693, 697, 706, 708, 714-715, 720, 726
  awareness of, 333, 624
  challenging, 65, 83, 248, 259-262, 286, 368, 379, 499, 507, 511, 524, 623, 683, 686, 692, 722, 727
  communication and, 76, 539
  dangerous, 207, 238, 315, 322, 369, 373, 421, 459, 486, 494, 499, 588, 594, 641, 665
  desired, 9, 12, 16, 69, 80-81, 83, 89-91, 102, 104, 127, 143, 161, 176, 179, 182, 184, 187-188, 204, 233, 242, 256, 322-323, 330, 361, 388-389, 396, 418, 437, 461, 466, 482, 485, 495, 497, 511, 513, 552, 554, 566-567, 573, 581, 586, 589, 595, 597-599, 601-602, 605-606, 608, 612, 615, 619-620, 625-627, 631-634, 636, 639, 642, 648-650, 654, 660-662, 670, 677
  disruptive, 57, 70-71, 79, 93, 112-114, 187, 227, 260, 267, 277-278, 293-294, 315, 321-322, 345, 356-357, 359-361, 366, 376-380, 382-384, 387, 397-398, 420, 477, 479, 488-489, 492-493, 499, 505, 511, 519, 561-562, 568, 575-579, 606, 631, 651, 668, 686, 688-689, 691, 693-694, 697-698, 701, 704-706, 708, 713-714, 716, 719, 722, 727
  environment and, 6, 33, 39, 45-48, 63, 75, 179, 184, 189, 297, 313, 587, 640, 657, 660
  modification, 28, 32, 82, 94, 120, 203, 215, 242, 383, 439, 445-446, 453, 640, 685, 687-692, 694, 696-702, 704-710, 712-714, 716, 718, 720-727
  observation of, 5-6, 25, 29, 42, 187, 195, 376, 397, 516, 519-520, 533, 552, 659
  off-task, 99, 111, 127, 227, 267, 293, 333, 335, 485, 526-528, 561-562, 576, 606, 608, 616, 689, 692, 696, 699, 719
  repetitive, 116, 151, 238, 250, 287, 320, 363, 472, 494, 641
  replacement, 8, 63, 80, 259, 320, 323, 360, 391, 478, 482, 513, 524, 533-534, 552, 694
  routines and, 40
  self-esteem and, 589
  simple, 2, 5, 13, 16, 26, 39, 42, 47, 52, 54, 63, 80, 86, 96, 140, 142, 147, 149-150, 152, 154, 170, 176, 186, 199-200, 211, 215, 233-235, 248-249, 254-256, 265, 271, 273, 278-279, 283, 293-295, 306, 343, 360, 393, 398, 403, 409, 411-412, 418, 429, 437, 465, 469, 531, 540, 546, 554-555, 560, 564, 587, 591, 601, 603, 613, 616-617, 643, 648-650, 652, 660, 681, 690, 710, 717, 719
  social, 2-3, 9, 13, 16, 18, 23-24, 26, 29, 32-34, 36, 38-40, 42, 47-48, 52, 54, 59-60, 68-69, 72, 75-77, 79-80, 82-83, 89-91, 94-95, 117, 129, 141, 150, 156, 166, 176, 181, 184, 197, 202-206, 218, 226-227, 233, 237, 240, 246-247, 250, 253-254, 256-259, 261-264, 267, 270, 272, 274, 277, 279, 281, 285-288, 290, 293, 295-297, 301, 307, 309, 313, 315-316, 321-322, 325, 329, 333-334, 337, 350, 368-370, 376-378, 387, 397, 399, 402, 406-407, 416, 418, 421, 427, 431, 448, 455, 465, 470-472, 477, 482-483, 490, 499-501, 506-509, 511-512, 514-515, 517-518, 521, 527, 537, 539-540, 548, 550-553, 556, 561, 569, 574, 576-578, 580, 588, 591-593, 607, 609, 611, 614-617, 625-626, 628, 632, 646, 649, 651-652, 660, 665-668, 670-671, 680-681, 683, 685, 687-689, 693-694, 696-700, 704-706, 711-715, 717, 719-720, 722-728
  target, 3-7, 9-16, 18-20, 24, 26, 38, 54, 62, 68-91, 93-94, 99-103, 108-113, 116, 118-121, 123-130, 133-140, 143-144, 147, 149-151, 158, 160-162, 164, 176, 180, 182, 184-185, 187-188, 190, 193-195, 197, 200, 203-206, 208-209, 216, 218, 221, 223, 227, 229, 233-234, 238-240, 242-244, 246, 253-255, 258-259, 263, 265, 267-268, 270-271, 273-274, 277, 279, 282, 287-288, 290, 292, 296-298, 300-301, 303-308, 310, 316, 322, 326, 330, 347, 349-350, 352, 355-356, 359, 366, 368, 372, 375, 379, 381, 386, 445, 448, 455, 461, 463, 466, 470, 476, 478, 487, 490-493, 496-497, 504, 509, 516, 518-519, 521-522, 553, 555, 561, 566-568, 570, 572-574, 577-582, 585-586, 588-589, 595-599, 601, 603, 605-606, 608-610, 612, 615-617, 619-620, 623-635, 639-646, 648-654, 656, 659-663, 665, 674, 681, 687, 689, 696, 700, 725, 728
  triggers, 512
  violent, 365, 494, 497, 722

Behavior change, 1, 3, 5, 8-11, 13, 15-16, 18, 23, 37-38, 40-41, 43, 44, 56, 58-59, 64-66, 68-69, 71, 73, 76, 78, 80-83, 89, 91, 94, 98, 102, 119-120, 124, 127-128, 140, 142, 144, 147-148, 150-151, 158-159, 164-166, 169-170, 173-177, 178-182, 186, 188, 191-195, 198, 203, 205-206, 208-209, 211, 216-217, 225, 229, 236, 238, 242, 244, 247, 250-251, 254-255, 258-259, 261-264, 268-274, 276, 282, 294, 304, 311, 316, 322-323, 324-325, 344-345, 347, 356, 361, 368-370, 374, 381, 384, 408, 410, 432, 434, 437, 448, 454, 459, 462, 468, 470, 477, 481, 492, 498-500, 507, 536-537, 558, 566, 568, 575, 580-581, 583, 586-587, 589-590, 592-593, 599, 606-607, 612, 617-619, 622-663, 664-665, 667-668, 672, 680-681, 690, 697, 699-702, 718
Behavior changes, 3, 20, 38, 43, 54, 63-64, 79, 94-95, 123, 159, 169, 174, 183, 185, 193-194, 200, 221, 223, 235-236, 238-239, 242, 251, 256-257, 259, 261, 269, 272, 274, 278, 280, 484, 589, 601, 615, 617, 620, 623-624, 630-633, 643, 648, 654, 658-663
Behavior contracts, 559
Behavior disorders, 259, 291, 505, 561, 590, 607, 687-688, 707, 714, 722, 726, 728
Behavior inhibition, 665
Behavior management, 28, 83, 200, 378, 580, 673, 686, 704
Behavior modification, 28, 32, 203, 439, 445, 640, 685, 687-688, 690-692, 694, 696-698, 700-702, 704, 706-710, 712-714, 716, 718, 720-725, 727
Behavior problems, 88, 334, 409, 479, 485, 491, 505, 531, 550, 659, 680-681, 690, 694-695, 698, 705, 707, 711, 726
Behavior, social, 72

Behavior support plan, 721
Behavior support plans, 256, 691, 722
Behavior therapy, 17, 370, 561, 614, 620, 686-687, 689, 693, 696, 700-701, 703-704, 706, 709-710, 713-714, 717-718, 722-723, 726-728
Behavioral approach, 39, 726
Behavioral assessment, 3, 68-70, 73, 82, 89, 510, 521, 669, 672, 688-689, 694-695, 700, 704, 706, 708, 716-718
Behavioral challenges, 39, 93, 247, 694
Behavioral consultants, 616
Behavioral contingencies, 44, 52
Behavioral contracts, 558, 722
Behavioral development, 29
Behavioral interview, 70-71, 82, 520
Behavioral observations, 521
Behavioral pharmacology, 550, 692
Behavioral principles, 52, 181, 308, 398, 466, 524, 559, 624
Behavioral problems, 80, 570
Behavioral rehearsal, 233
Behavioral sequence, 365, 437, 441, 463
Behavioral support, 718
Behaviorally disordered, 711-712
Behaviorism, 3, 11, 14, 22, 27-29, 31-35, 39-43, 57, 538, 543, 547, 584-585, 688, 691, 693, 698, 701-702, 707, 711-712, 716-717, 721, 723-724, 726, 728

Behaviors, 1-8, 10-16, 18-20, 23, 30, 34-38, 43, 46-57, 59-61, 63-66, 68-91, 93-95, 97-100, 102, 104, 108-110, 112-113, 115-119, 121, 123, 125, 127-131, 135-136, 138-140, 143-144, 150-152, 156, 158, 164-165, 167, 170, 176, 180, 183-186, 188, 190, 193-195, 198-201, 203-204, 206-208, 215-218, 220-225, 227, 229-230, 232-240, 242-244, 246, 250, 252-254, 256, 258, 260, 262-263, 265, 267-271, 277-279, 282, 284, 286-287, 289-293, 297-298, 300, 304-309, 315-318, 320-323, 324-325, 328, 332-337, 339-340, 342-343, 345, 347-348, 350, 353, 355-357, 359-362, 364-367, 369-370, 372, 376-384, 386, 388-389, 394, 404, 407, 409-410, 414, 416-418, 427-433, 436-438, 440, 442-446, 449, 452-453, 455-456, 458-463, 465, 467, 469-474, 476-480, 482-486, 488-494, 496-497, 499-502, 504-509, 511-513, 516, 518-522, 524-525, 527, 532-533, 537, 540, 543, 546-547, 551-553, 557, 561, 564, 566, 568-582, 584-587, 589-590, 592-593, 595-599, 602-606, 610-617, 619-620, 623-627, 629-634, 639, 641, 644-646, 648-653, 657, 659-663, 666-668, 674, 680-683, 685-689, 692-693, 697-698, 700, 703, 705-708, 710, 713-714, 716-718, 722, 725, 727-728
  at-risk, 164, 728
  bad, 125, 253, 315, 345, 369, 461, 587, 597, 603, 619, 710
  communicative, 8, 484, 506-509
  coping, 8
  describing, 7, 13, 36-37, 69, 73, 93, 98, 236, 250, 282, 293, 307-308, 361, 443, 519, 648
  desirable, 5, 10, 38, 57, 72, 79-80, 87, 89-90, 158, 188, 203, 206, 215, 233, 238, 243, 279, 286, 304, 320, 325, 337, 353, 355-356, 367, 378-379, 382, 389, 455, 478, 483, 496, 499, 504, 511-513, 520, 569-570, 573-574, 590, 595, 606, 612, 624, 629, 631-633, 639, 660, 667
  enabling, 23, 94, 364, 589
  nonverbal, 63, 112, 258, 279, 357, 427, 540, 543, 551-553, 557, 644, 687
  on-task, 38, 80, 99, 110, 113, 127, 130, 320, 479, 485, 561, 590, 605-606, 613, 689, 693, 700, 705, 708, 728
  self-management, 16, 75, 569, 575, 581, 584-587, 589-590, 592-593, 595-599, 602-606, 610-617, 619-620, 630, 634, 652, 657, 659, 663, 687, 692, 703, 705-706, 708, 710, 713-714, 716
  share, 2-3, 7-8, 46, 54, 60, 65, 88, 235, 243, 258, 263, 271, 336, 410, 455-456, 478, 547, 557, 590, 616, 627, 649, 660, 662
  surface, 49, 81, 263, 473, 576, 579, 666, 703
  verbal, 2, 5-6, 8, 10-11, 13, 15-16, 18, 20, 23, 36, 54, 63, 66, 70, 79-81, 90, 104, 183, 204, 208, 230, 252, 256, 258, 277, 279, 289, 293, 307-308, 316, 333, 350, 353, 356-357, 359, 361, 364, 366, 376, 388, 404, 407, 414, 418, 427, 429-433, 442-443, 445, 455, 460, 469, 477-479, 483-485, 492, 496, 507, 524-525, 537, 540, 543, 546-547, 551-553, 557, 561, 566, 572-574, 602, 612-613, 616, 620, 644, 652, 687-689, 692, 703, 707-708, 714, 717-718, 722, 728

Beliefs, 23, 26, 676
  ability, 676
  control, 23, 26
Belmont Report, 671, 713
Benefits, 24, 76, 78, 83, 94-95, 133-134, 148, 152, 158, 176, 179, 187, 194, 215, 248-249, 443, 540, 577, 584, 587, 589, 615, 619, 633, 662, 676-678
  of teaching, 584, 662
Best practice, 369, 574, 715
Best practices, 615
Best treatment phase, 213, 218
Beverages, 291
Bias, 6, 11-12, 122, 124, 130, 141, 143, 164, 253-255, 704
  system, 12, 124, 130, 143, 253-254
  test, 254
Bibliography, 31, 685-728
Bills, 598, 625
Biography, 30
Biological sciences, 24, 30
Bipolar disorder, 359, 531
Bishop, 376, 689
Biting, 9, 198, 287, 342, 346, 448, 479, 486, 531, 584, 598-599, 686
Blame, 661
Bleeding, 118, 263
Blending, 657
Blindness, 240
Blocks, 80, 86, 102, 204, 284, 308, 493, 519, 628-629, 655
Board of education, 292
Bolt, 104
Bonuses, 567
Bookmark, 596
Books, 710
  assignment, 647
  picture, 461
  recorded, 31, 385, 698
Borrowing, 102
Boundaries, 88, 104, 325, 683, 720
Boyce, 164, 689
Boys, 86, 128, 200, 259, 308-309, 328, 333, 337-338, 359, 386, 502, 612, 628, 644, 654, 689, 693, 715-717
Brain, 537, 550, 693-694, 697, 714, 723
  research, 693-694, 697, 714, 723
Brain injury, 550, 693-694, 723
Breakfast, 421, 556
Bruises, 60, 118, 353, 524
Buffer, 387
Bullying, 82
Burns, 60
Busing, 601, 642

C
Caffeine, 102, 342, 687
Cage, 50, 286
Calculators, 108
Calendars, 330, 341
California, 439, 541, 716
Calorie, 608, 618
Campaigns, 23
Canada, 704, 718
Canon, 368
Capacity, 10, 217, 315, 393-394, 401, 427, 617, 675-677, 684
Capitalization, 561
Cards, 56, 98, 114, 183, 256, 267, 291, 307, 318, 330, 336, 421, 487, 499, 502, 506, 568, 570, 592, 606, 639, 649, 653, 657, 691, 693, 697, 706, 710, 713, 718
Career, 673
  information, 673
Caregiver, 40, 86, 286-287, 368, 418, 432, 508, 521, 680, 696, 698
Caregivers, 9, 95, 287, 297, 366, 418, 428, 432, 491, 508, 513, 519, 522, 533, 546-547, 678, 680-681, 683
  infants, 287
Caregiving, 413
Caring, 368, 670
Carrier, 86, 224
Cartoons, 296, 376, 464
Case examples, 524, 534
Case studies, 609, 666
Case study, 569, 609, 691, 697-698, 700-701, 704-705, 709, 718, 720
CAST, 95, 182, 465, 584
Categories, 249, 258, 265, 289, 293, 296, 336, 537, 630, 662, 711
Categorizing, 61, 662
Catheterization, 713
Causal relations, 24
Cause and effect, 32, 280, 309
Ceiling, 102, 128, 398, 404
Ceilings, 242
Centers, 380
Central tendency, 154, 171, 174
Cerebral palsy, 64, 225, 601, 681
Certification, 22, 29, 44-45, 58, 68, 92, 122, 146, 178, 196, 220, 245, 276, 311, 324, 344, 347, 374, 390, 408, 426, 434, 454, 468, 481, 498, 510, 536, 558, 583, 622, 664-665, 668, 671-674, 688, 712, 720, 722
  competence and, 672, 674
Chains, 2-3, 8, 66, 227, 435-436, 442-444, 447, 450, 452-453, 597, 653, 703, 709, 727
Challenging behavior, 259, 261-262, 368, 499, 507, 524, 686, 727
Challenging behaviors, 260, 286, 511, 692, 722
Change, 1-20, 23-24, 26-27, 31, 33, 36-43, 44-48, 50-51, 53-54, 56-61, 64-66, 68-71, 73, 75-76, 78, 80-84, 88-91, 93-95, 98, 100-102, 115, 119-120, 123-124, 127-128, 130-131, 140, 142, 144, 147-151, 158-171, 173-177, 178-195, 197-198, 203, 205-206, 208-209, 211, 216-217, 221-222, 225, 227, 229, 234-244, 247, 250-251, 253-256, 258-264, 268-274, 276, 279, 281-285, 289, 294-296, 303-304, 308-310, 311-313, 316, 322-323, 324-327, 330, 333, 336, 344-347, 349-350, 355-356, 361, 364, 366-370, 372, 374-375, 381-382, 384, 388, 392-394, 397, 399, 403-404, 406, 408, 410, 420-422, 432, 434-437, 439, 448, 454, 459, 462, 468, 470, 476-478, 481, 491-492, 496, 498-500, 502, 504, 507-508, 512, 524, 527, 531-533, 536-537, 552, 558, 566-568, 573, 575, 580-582, 583, 585-590, 592-593, 597-599, 603, 606-607, 609, 612, 615-620, 622-663, 664-665, 667-668, 670, 672, 675, 677, 680-681, 683, 687-688, 690, 697-702, 715, 718, 724

  acceptance of, 140, 258
  attitude, 13, 26, 80, 649
  constraints on, 180
  continuing, 37, 89, 94, 190, 234, 403, 660, 683
  decision making for, 677
  essential, 3, 37-39, 45, 54, 61, 123, 128, 179, 182, 237, 279, 296, 436, 477, 649
  facilitating, 333, 641-642
  in schools, 568
  motivation for, 448
  planned, 10, 13, 18-19, 48, 75, 127, 182, 185, 188, 203, 229, 235, 243, 251, 253, 255, 273-274, 289, 295, 333, 374, 406, 524, 589, 612, 626-627, 631, 661, 667, 677
  problems with, 124, 345, 603, 627, 635, 646
  rate of, 3, 5-7, 10, 13, 16-17, 23, 43, 54, 60, 81, 84, 98, 100, 120, 150, 158, 176, 184, 237, 239, 242, 303-304, 316, 326-327, 333, 346, 355, 381, 436, 448, 470, 492, 496, 508, 606, 618, 636, 643, 646, 650, 652, 718
  reciprocal, 653
  resistance to, 3, 15, 186, 468, 476, 624, 646, 688, 724
  science of, 3, 27, 31, 33, 40-43, 47, 181-182, 229, 264, 283
  stages of, 131, 254, 308, 325, 507, 648, 700
  sustainable, 667
  theory of, 585, 636
Change agent, 586, 589-590
Change agents, 575, 619, 697
Changes, 1-6, 9-12, 15-16, 20, 23, 30, 36, 38, 41, 43, 46-51, 53-54, 56, 58, 60-61, 63-66, 79, 81, 91, 94-95, 97-98, 101, 116, 118, 123, 126, 128-129, 133, 140, 142-143, 147, 149-152, 154-155, 157-162, 166-169, 174-177, 179-181, 183-189, 191-195, 200, 202, 216, 219, 221-223, 227, 229, 233, 235-244, 246, 251-253, 256-257, 259, 261-262, 268-269, 271-272, 274, 278, 280-281, 294, 309, 312-313, 333, 343, 346, 348-349, 367, 391, 394, 399, 401, 409-410, 420, 435-436, 459-460, 464, 466, 484, 496, 511, 519, 531, 550, 586, 589, 601, 605, 615-617, 620, 623-624, 630-633, 643, 648, 654, 658-663, 676, 680, 685, 702, 704, 706, 713
  economic, 23
Changing criterion designs, 184-185, 220-244, 249
Channels, 108, 706
Character, 587, 661
Characterization, 61
Charting, 17, 159-163, 176, 657, 698
Charts, 98, 132, 146, 148, 158-162, 169, 172-173, 176, 656, 687, 692, 720
  comparison, 148, 159, 173, 176, 692, 720
  data, 98, 132, 146, 148, 158-162, 169, 172-173, 176, 687, 720
  equal-interval, 146, 158-159, 173
  feedback, 148, 158, 176

Chats, 361
Cheating, 572
Checkers, 570
Checklist, 3, 68, 72-73, 90, 167, 377, 379, 418, 520, 559, 603, 685, 704, 709, 715, 728
Checklists, 3, 9, 69-70, 72-73, 89, 519, 533, 616, 641, 651
  samples, 651
Chicago, 685, 688, 690, 699, 701-702, 710
Child Behavior Checklist (CBCL), 72
Child care, 39, 135, 710
Child development, 28, 293, 550, 685, 687-688, 704
Child needs, 555
Children, 24, 28, 34, 36-37, 39-40, 72, 75-76, 79, 83, 85-86, 94, 99, 102, 104, 108-109, 113-114, 116, 125-126, 131, 133, 141, 154, 158, 183, 203-205, 209, 211, 213-215, 227-229, 233-234, 236, 250, 252, 256, 258, 266, 268, 279, 287-290, 292-293, 296-297, 308, 317-318, 326-327, 350, 353, 356-357, 359, 364, 366, 376, 378, 380, 413-414, 418, 421, 427-429, 431-432, 438, 443, 456-458, 463, 485, 492, 505-506, 540-541, 546-547, 550-555, 560-566, 569-570, 576, 579, 581, 590-594, 596, 602, 604-608, 611, 613, 617, 620, 628-629, 631-632, 635-636, 641, 644-646, 649-650, 653, 655, 657, 659, 671, 673-674, 678-679, 682, 685-728
  art of, 716
  focus on, 214, 380, 593, 671, 687, 690, 692, 698, 700-701, 703-705, 709, 715-716, 724, 726-727
  rights of, 688
  self-evaluation, 602, 606, 608, 698, 704-705, 717, 719, 723
  with ADHD, 688
Children with disabilities, 109, 719, 725
Chip, 302, 307, 530-531
Chips, 18, 290, 296, 302, 359, 530-531, 570-571, 574, 597, 635
Choice, 11, 26, 82, 93, 108, 198-199, 206, 234, 257, 259-260, 262, 295-298, 300, 303, 324, 336-339, 363, 368, 375, 381, 414, 419, 446, 463, 491, 494, 499, 525, 528, 530, 552, 602, 608, 611, 618, 633, 643, 647, 657, 680-681, 684, 694, 696, 700, 702, 705, 713-716, 718, 725
  restricted, 525, 714
Choking, 200, 479
Chomsky, Noam, 538
Circles, 151, 154, 337, 444, 626
Citizenship, 604
Clarifying, 40, 79, 520
Clarity, 37, 98, 268, 499
Class discussions, 649
Classification, 289-290, 540, 543-545, 552, 595, 689, 707
Classroom, 9, 23, 34, 39-40, 48, 55, 57, 73, 78, 80, 87, 94, 108, 117-118, 134, 156, 181-185, 225-226, 228-229, 250, 254, 256, 258-261, 264, 266, 268, 278, 294, 297, 312, 330, 333, 338, 345-346, 356-357, 360, 362, 364, 375, 377-378, 382-383, 385, 388, 409, 436, 442, 444, 456-458, 462, 477-479, 483-485, 488, 493-494, 508, 511, 514, 516, 524-529, 531, 561, 564, 568, 570-571, 574-579, 581, 590-592, 594, 599-600, 602, 604-608, 610, 612-613, 617, 623, 625-628, 632, 635, 637, 641-642, 646, 649-653, 655-657, 685-693, 695-696, 698-699, 701-711, 713-714, 716, 718-719, 721-722, 724-727
  arrangements, 48, 725
  conference, 687, 691, 704, 710, 718, 727
  displays, 156, 185, 568
  environment in, 436, 594
  organizing, 23, 592
  visitors, 345

Classroom behavior, 87, 254, 259, 377, 477, 479, 485, 511, 599-600, 607, 687, 689-690, 711, 714, 722, 725
  management system, 689
Classroom control, 294, 699
Classroom environment, 228
  time, 228
Classroom management, 34, 704, 714, 718
  positive behavioral support, 718
  schedules, 704
Classrooms, 24, 118, 236-237, 259, 261, 266-268, 376, 443, 459, 561, 574-575, 580, 589, 591, 604, 636, 649-651, 686, 692-693, 697, 701, 704, 706, 708, 712, 717
  behavior, 24, 118, 236-237, 259, 261, 266-268, 376, 443, 459, 561, 574-575, 580, 589, 591, 604, 636, 649-651, 686, 692-693, 697, 701, 704, 706, 708, 712, 717
  behavior in, 24, 118, 259, 443, 686, 692-693, 697, 704, 708, 717
  inclusive, 376, 697, 701
  regular, 118, 259, 261, 574, 591, 650, 701, 717
  special, 267, 376, 574, 591, 604, 649-650, 686, 693, 697, 701, 704, 708, 712
Clauses, 537
Cleaning, 87, 293, 307-308, 339, 382, 457, 651
Cleanup, 361
Clear, 19, 25-26, 30-31, 35, 58, 61, 85-86, 88, 91, 114-115, 133, 142, 144, 160, 162, 164, 171, 174, 181, 189-190, 199, 205, 209, 226, 231, 239, 249-250, 252, 255, 258, 260, 268, 272-273, 302-303, 361, 369, 371, 400, 402, 411, 432, 446, 466, 515, 520-522, 530, 533, 538-539, 550, 567, 573, 576, 601, 607, 623, 644-645, 667, 674, 677
Clicker, 4, 454-455, 460-461, 466, 716
Client records, 672
Clients, 39, 41, 60, 63, 70-71, 76, 78, 81, 85, 90, 94-95, 120, 238, 241, 257-258, 261, 270, 299, 367, 369-370, 381, 566, 568, 589, 599, 603, 605, 611, 613, 615-616, 620, 664, 667, 669, 672-673, 678, 680-681, 683, 694
  behavior of, 60, 81, 90, 94, 120, 261, 568, 599, 605
  context of, 63, 70, 76, 95
  feedback for, 589, 694
  responsibilities, 678
Climate, 52
Clinical assessment, 598, 619
Clinical judgment, 168, 333
Clinicians, 85, 227, 238, 244, 550, 589, 595
Clips, 108, 129, 261, 596
Closure, 314, 691
Clothing, 108, 312, 360, 568, 571

  religious, 571
Clusters, 164, 442, 579
Coaches, 39, 224, 368, 640
Code of ethics, 667, 669, 696, 713
Codes, 7, 16, 83, 95, 110, 134, 259, 369-370, 516, 664, 667-668, 671, 684
Coercion, 315, 594, 677
Cognition, 695, 700, 709, 713, 726
Cognitions, 32
Cognitive abilities, 590
Cognitive approach, 537
Cognitive behavior therapy, 701, 709, 727
Cognitive psychology, 33, 537
Cohesiveness, 577
Coin, 182, 209, 230, 419, 439, 565, 603, 639, 659
Collaborative learning, 635, 647
Collecting, 128-130, 143, 187, 190, 221, 231, 235, 321, 330, 381, 427, 447-448, 561, 568, 570, 576, 598, 619, 704, 707, 723
College students, 307, 331
Colleges, 671
Color, 7, 17, 48, 61, 95, 98, 102-103, 169, 300, 355, 411-412, 415-417, 419, 421, 451, 545, 549, 642, 702
Colors, 98, 103, 412, 487, 642, 719
Com, 22, 45, 68, 92, 112, 122, 146, 169, 178, 196, 220, 245, 276, 311, 324, 331, 344, 374, 390, 408, 426, 434, 454, 468, 481, 498, 510, 528, 536, 558, 583, 605, 665, 671, 688, 716
Comics, 568
Commitment, 32, 36, 38, 413, 615-616, 620
Committees, 674
Communication, 1, 8, 37, 39, 65, 76, 79, 83, 94, 100, 151-152, 159, 179, 214, 227, 237, 250, 252, 257, 260, 262, 266, 290, 321, 342, 367, 427, 447, 483-484, 498-499, 501, 506-509, 520, 531, 533, 536, 539-540, 552, 562, 623, 678, 682, 689-690, 693-696, 698-699, 703, 706, 709, 711, 717, 719-720, 724-725, 727
  behavior and, 8, 37, 76, 94, 100, 179, 237, 250, 262, 266, 483, 499, 507-508, 520, 539, 689, 706, 709
  behavior as, 342, 483, 539, 696, 725
  boards, 506, 719
  disorders, 250, 520, 693, 699, 703, 717, 725
  facilitated, 94, 682, 698, 727
  good, 37, 39, 266, 520, 623, 678, 696, 719
  impairments, 694, 724
  language development, 689, 699
  manual, 214, 447, 711, 720
  parents, 39, 65, 79, 83, 257, 531, 533, 552, 562, 623, 682, 693, 699, 706, 717, 720
  selection of, 76, 79, 83, 694
  skill development, 83, 711
  skills for, 706
  skills training, 699, 703, 719
  styles, 237
  total, 8, 100, 250, 252, 257, 447, 562, 699, 711, 719
  verbal and nonverbal, 427
Communication skills, 76, 427, 447, 499, 520, 623, 682, 698

  instruction of, 698
Communications, 103, 717, 723
Community, 3, 26, 31, 40, 70, 73, 99, 116, 215, 230, 234, 237, 249-250, 258, 443, 507, 537, 602, 623, 627, 636-639, 641, 649-650, 659, 665-666, 674, 680, 685, 689, 692, 694, 698-699, 702, 707, 709, 711, 714, 716
  agencies, 674
  groups, 636, 694, 709
  surveys, 714
Community settings, 70, 73, 234, 507, 650, 666, 680, 694
Community skills, 689
Community-based instruction, 641, 685
Comparison, 7, 11, 14, 17, 26, 42, 52, 87, 102, 114, 131, 148, 151, 157, 159, 165, 167, 173-176, 182, 184, 190-192, 202, 208, 210, 213, 215, 217, 236-237, 246-248, 252-254, 263-266, 274, 318, 321, 326, 341, 414-417, 483, 501, 590, 685, 690-692, 696, 701-702, 704, 712, 714, 717-721, 724-725, 727
Comparisons, 7, 31, 41, 43, 93, 130, 151-153, 186-187, 191, 202, 237, 246, 249-250, 253, 259, 264, 416, 446, 465, 670, 722
Competence, 2, 8, 76, 90, 102, 133-134, 144, 256, 274, 648, 664, 669, 671-674, 684
  clinical, 671, 674
  credentialing, 671
  maintaining, 672, 674
  mathematical, 102
Competencies, 72, 83
Competency, 725
Competing, 3-4, 9, 26, 64, 69, 201, 217, 296, 305, 316, 322, 332, 417, 464, 502, 598, 614, 720, 725, 728
Competition, 65, 79, 279-280, 417, 579, 648
Competitive employment, 691
Complaints, 382-383
Complexity, 45, 62-66, 81, 97, 128-129, 139, 185, 255, 270, 442, 450, 453, 455, 551, 635, 656, 676, 704
  studies, 139, 185, 453
Compliance, 9, 86, 110, 129, 137, 236, 249, 255, 257, 266, 320-323, 378-379, 382, 385, 457, 472, 500, 502, 504-506, 509, 542, 561, 568, 606, 641-642, 680, 686, 690, 694-695, 699, 705, 709, 714, 721, 725

Compliance skills, 378
Component analyses, 4, 272
Components, 4, 13, 20, 40, 64, 66, 81, 89, 159, 179, 182, 186, 194, 205, 209, 211, 229-232, 269, 280, 309, 339, 341, 355, 357, 360, 366, 370, 393, 401-402, 429, 433, 435, 437-438, 441, 447-449, 452, 455, 460, 463, 465, 482, 489, 496, 524, 526-527, 534, 541, 556, 559, 567-568, 577, 580-581, 586, 625, 629, 651, 656-657, 663, 716, 721
Composition, 20, 451, 550, 648, 726
Comprehension, 109, 132, 181, 542, 605
Comprehension skills, 181
Computation, 81, 237
  instruction, 237
Computer programs, 593
Computer software, 119, 168-169
Computer systems, 704
Computer technology, 467
Computers, 119, 121, 123, 329, 333, 335, 465-466, 695, 720, 723-724
  operating systems, 119
  software for, 119, 121, 723
Computing, 11, 135-136, 722
Concept, 4, 9, 16, 45, 54-57, 60-61, 79, 81, 93, 102, 124, 254, 257, 280, 283, 289, 291, 309, 345, 368, 391, 408, 412-414, 424, 455, 475, 480, 568, 586, 593, 607, 619, 629, 639, 676, 687, 691, 705, 711, 718, 726-727
Concept development, 705
Concept formation, 4, 408, 412-413, 424
Concept learning, 691
Concepts, 22, 29-30, 44-66, 102-103, 124, 178-179, 194, 276, 278, 283, 311, 344, 371, 374, 390, 408, 413, 426, 465, 468, 536, 622-623, 639, 661, 676, 689, 711, 713
  guides, 179, 194
  scientific, 29-30, 45-46, 63, 65, 124, 179, 194, 371, 713
  spontaneous, 30, 468
  teaching of, 689
Conceptual model, 39
Conceptual system, 33
Conceptual understanding, 58, 271-272, 499
Conceptualization, 465, 585
Conceptualization of self, 585
Conclusions, 12, 93, 126, 130, 136, 143, 147-148, 170, 174, 186, 202, 210, 247, 255, 272, 354, 381, 515, 522, 624
Conditional probabilities, 4, 24, 119, 518
Conditioned stimulus, 4, 9, 12, 15, 44, 47, 50-51, 66, 400, 409, 469
Conferences, 599-600, 674
Confidence, 24, 39, 125, 133, 142, 150, 170, 174, 177, 189-190, 193, 209, 222-223, 235, 254-255, 264, 271, 273, 371, 461, 666
Confidence level, 39
Confidentiality, 4, 75, 664-665, 669, 672, 677-679, 684
  children and, 679
Conflict, 4, 41, 242, 664-665, 683-684
Conflict of interest, 4, 664-665, 683-684
Conflicts, 83, 90, 227, 520, 537, 673, 683, 703
Conflicts of interest, 683
Confounding variables, 8, 24, 42, 180, 185, 189, 194, 251-252, 254, 257, 269, 273
Confrontation, 572
Confusion, 57, 189, 280, 313, 381, 449, 499, 572, 587, 624
Conjectures, 235
Conjunctions, 227, 537
Connotation, 26, 62, 282, 345, 587
Connotations, 33, 35, 345
Consciousness, 27, 29, 33, 240
Consent, 10, 86, 370, 664-665, 672-673, 675-679, 684, 701, 703, 714
Consequence, 2-4, 14-16, 18, 44, 47-48, 53-59, 61-62, 66, 72, 203-204, 206, 209, 278, 280-284, 286-287, 297, 300, 304, 309, 312, 315-316, 330, 337, 340, 345-348, 351, 353, 356, 358-359, 361, 364, 366, 372, 377, 381-382, 388, 393, 396, 398-399, 401, 403, 405-407, 431, 470, 472, 474, 477, 480, 484, 492, 496, 499-500, 513, 516-518, 522-525, 527, 541-542, 544, 561, 575, 580, 582, 588, 594, 605, 608-609, 612, 616, 618, 645, 656, 692

Consequences, 1-4, 8-9, 12, 15-16, 24, 30-32, 40, 44, 48, 51-56, 58, 60-63, 65-66, 69, 73-74, 78, 88, 90, 95, 103, 178, 185, 210, 226, 253, 279-284, 286-287, 300, 306, 308-309, 316, 325, 335, 340, 345-346, 353, 356-357, 366, 369, 372, 386, 388, 393, 398, 406, 412, 416, 420, 425, 431, 469-470, 472-473, 476-477, 480, 484, 486, 492, 494, 499-500, 514, 516, 518, 520-522, 524, 539, 544, 546, 548-549, 552-556, 561, 569, 579, 586, 588, 590-591, 595, 597, 600, 604, 606-609, 611-618, 620, 628-629, 641, 644, 646, 655-657, 660, 663, 670, 675-676, 680, 688, 695-696, 701, 721, 724
  reinforcement and, 4, 40, 54, 56, 58, 61-62, 66, 226, 282, 284, 300, 316, 325, 353, 406, 412, 477, 494, 539, 544, 556, 607-608, 611, 620, 644, 688
Conservation, 42, 256, 660
Consideration, 70, 83, 100, 109, 127, 186, 206, 217, 238, 272, 306, 316, 362, 369, 379, 381, 387, 453, 506, 514, 566, 571, 624, 633, 665, 667-668
Consistency, 14, 125, 129, 133-134, 136, 142, 189, 193, 255-256, 273, 343, 477, 499, 659-660, 680, 714
Constant time delay (CTD), 442
Constant time delay procedure, 420
Constructs, 30, 32, 43, 235, 585, 619, 709
Consultants, 370, 616
Consultation, 29, 437, 561, 664, 672-673, 723, 727
  organizational, 673
  peer, 727
  relationships, 29, 664, 672, 727
Contact, 6, 12-13, 15, 24, 33, 54, 64, 69-70, 79-80, 85-86, 94, 99, 115, 124, 130, 147-148, 176, 185, 187, 197, 203, 206, 229, 235-236, 247, 263, 266, 273, 289-290, 293, 298, 304, 307, 353, 355-360, 364, 376, 394, 420, 428-430, 432, 437, 446, 453, 455, 472, 478, 480, 491, 494, 503, 540, 546, 574, 597, 607, 611, 631, 634, 636, 643-644, 650, 652, 660, 662, 678, 683, 690, 699
Content, 22, 44-45, 68, 92, 122, 125, 146, 178, 180, 196, 217, 220, 245, 261, 264, 267, 276, 311, 324, 344, 374, 378, 390, 408, 426, 434, 454, 468, 481, 498, 510, 536, 556, 558, 583, 592, 594-595, 622, 632, 641-643, 664-665, 671, 685, 720

  expectations, 267, 671
  knowledge, 45, 125, 180, 264, 267, 595, 632, 641, 671
  meaningful, 125, 180, 264, 267, 632
Context, 27, 30, 33, 35, 39-40, 47-48, 61, 63, 70, 73, 76, 88, 95-96, 100, 109, 123, 131, 143, 157, 166, 183, 188, 193, 211, 244, 248-251, 265-266, 304, 313-314, 316, 366, 369, 377, 381, 395, 399, 403, 409-410, 416-417, 419, 427-428, 432, 443, 483, 514, 518, 526, 541, 544, 564, 575, 635, 642, 665-666, 670, 676, 687, 690-691, 697-698, 700, 708-709, 714-715, 722-723, 725
  vocabulary and, 35
Contextual variables, 266
Contingency, 1-6, 8-10, 12, 15-16, 18, 28, 30-31, 44-45, 53, 55-56, 61-64, 66, 111, 187, 203-204, 206, 211, 276, 278-282, 284-286, 291, 293, 296, 300-301, 303-310, 311-316, 318-319, 322, 325-327, 329, 333, 335-336, 339, 342, 345, 347-349, 351, 353, 355, 376-377, 385-387, 405, 407, 415-416, 431, 458, 465, 488, 491-492, 494, 497, 499-500, 508, 510, 522-523, 527, 541, 558-582, 586, 588-590, 594, 604-605, 608-609, 611-612, 615, 618, 622, 630-632, 643-650, 654-656, 662-663, 685, 687, 689-690, 693-694, 698, 702, 707-714, 718-719, 721, 725-726
Contingency contracting, 558-582, 618, 632, 694, 702, 711-712, 714
Contingency contracts, 296, 559-560, 563-565, 567
Continuing education, 674
Continuous measurement, 5, 7, 39, 43, 92, 113, 115, 122, 126-127, 181, 239
Continuous recording, 127, 157, 516-519
Continuous reinforcement, 4-5, 307-308, 324-325, 336, 343, 475, 480, 497, 503, 648, 653
Contracts, 296, 382, 558-560, 562-567, 581, 592, 722
  contingency, 296, 558-560, 562-567, 581
Control, 1-2, 4, 6-9, 11-13, 15-18, 23-24, 26-27, 29, 37, 39-43, 44, 47-48, 51, 55, 58-64, 66, 93, 104, 120, 128, 155, 170, 174, 177, 178-180, 182-187, 189-191, 193-194, 197, 201-207, 209-213, 215, 217-218, 221-223, 225, 229, 234-244, 245-246, 248-249, 251-257, 263-265, 268-270, 272-273, 278-282, 290, 293-294, 303-305, 307, 310, 336-337, 339, 341-342, 347-348, 350, 356, 371-372, 393, 398-399, 405, 408-425, 428, 440-441, 445, 449-450, 465, 482, 484, 491, 499-500, 504, 508, 514, 520, 522, 529, 531, 536, 538-540, 543, 546-557, 559, 561, 566, 568, 570, 574, 583-588, 590-592, 594, 596-598, 603, 607, 609, 611-614, 616-617, 619-620, 627, 629-630, 632, 639-642, 648, 652, 654, 660-663, 677, 680, 682, 685-690, 694-696, 698-700, 702-703, 705, 707-709, 711-714, 716-722, 724-727
  impulse control, 16
  self-control, 16, 336-337, 583, 585-587, 591-592, 594, 607, 612-613, 617, 619, 654, 688, 690, 694, 698, 713-714, 716-718, 720, 722, 724, 726
Control group, 246, 254
Control groups, 43, 246
Controlling stimulus, 411, 542
Controversial treatments, 94
Conventions, 135, 160-166, 537, 546
Convergence, 499, 508
Conversation skills, 636
Conversations, 119, 533, 539, 613, 632, 636
  beginning, 613
Cooking, 30, 263, 601, 634, 636, 724
Cooperation, 206, 381, 648, 660
Cooperative learning, 127, 656, 728

components, 656Cooperative play, 99, 110, 131, 143, 203, 281, 301Coordination, 648Coping, 8, 726

behaviors, 8Copyright, 1, 22, 31, 44, 48-49, 68, 92, 104, 122, 141,

146, 149, 151-164, 173, 175, 178, 196, 199,201-203, 205, 207, 212-214, 220, 225-226,228, 231, 234, 241, 245, 251-252, 261,276-277, 285, 288, 292, 294, 301-302, 305,311, 317-321, 324, 328, 334, 337-338, 344,346, 352, 354, 358-359, 363, 365, 374, 377,379, 383-384, 390, 408, 411, 426, 434,438-441, 443-447, 451-452, 454, 456-457,462, 464, 468, 471-473, 475, 481, 484-490,493, 495, 498, 501-502, 507, 510, 536, 558,560-561, 563-565, 569, 578-580, 583, 594,596, 599-600, 602, 605, 610, 614, 622, 626,628, 637-638, 640, 645, 647, 653, 655, 658,664, 669-670, 673, 680, 685, 721

Correlation, 23, 42, 135, 179, 192, 194, 269, 350, 356, 400, 512, 516, 518-519, 606

Correlational studies, 23-24
Costs, 633, 665
Cough, 53
Council for Exceptional Children, 438, 443, 485, 605
Counseling, 293, 524
  psychology, 293
Counting, 5, 8, 11, 53, 96-98, 100, 108-109, 115, 118, 126, 132, 135, 160-161, 170, 186, 286, 487, 604, 608, 706

Countoons, 604-605, 692
Courses, 666
Courtesy, 38
Courts, 83, 369, 381, 676
Creating, 30, 142, 169, 249, 373, 506, 587, 594-595, 598, 604, 606, 609, 612, 648-649, 690
Creativity, 39, 86, 695, 702, 709
  fluency, 695
  increasing, 695

Credentialing, 671, 722
Credentials, 675
Credibility, 237
Credit, 326, 573, 673
Criterion, 3-6, 8-10, 16, 19, 36, 38, 69, 73, 78, 92, 97-98, 101-104, 120, 128-129, 140, 142, 144, 184-185, 220-244, 249, 255, 258, 272, 291, 305-306, 310, 321, 334-337, 340, 343, 364, 380, 418, 420, 430-431, 436, 440-443, 452-453, 455-456, 458, 462-467, 475, 492-497, 501-502, 504-505, 527, 559, 562, 570, 573-574, 576-577, 579-581, 589-590, 602, 604, 611-612, 616, 618, 620, 631, 638-639, 646, 657, 670, 700

Criticism, 52, 204, 361, 538, 629, 652
Cubes, 461
Cue, 15, 256-257, 277, 333, 339, 418-419, 425, 429, 440-442, 444, 449, 453, 461, 596, 598, 604, 625, 653, 656-657
Cues, 112, 134, 339, 341, 417, 419, 429, 455, 508, 592-593, 595-596, 604, 616, 620, 651-652, 657
  visual, 419, 595, 604
Cultural, 40, 52, 63, 253, 266, 286, 666, 683
Cultural practices, 52, 63, 286, 666
Culture, 48, 60, 594, 666, 668, 670, 682-683, 688, 698
  high, 48, 688
  influence of, 666
  mainstream, 682
  popular, 682

Cumulative records, 5, 148, 155-158, 176, 316-317
Cup drinking, 290, 705
Curiosity, 25, 27, 50
Curriculum, 34, 73, 94, 125, 180, 216, 267, 330, 368, 414, 417, 531, 556, 590-591, 605, 619, 635-636, 649, 655, 659, 667, 670-671, 690, 702
  adaptations, 636
  aligned, 667
  basic skills, 73, 702
  challenges to, 590
  goals and objectives, 667
  implementing, 94, 667
  kinds of, 94
  No Child Left Behind Act of 2001, 94
  preschool, 34
  social skills, 649
  trends, 216, 267
Curriculum design, 34, 414, 636
Cursive writing, 76

D
Daily instruction, 269
Daily living, 36, 72, 636, 716
Daily living skills, 72, 636, 716
Daily routines, 40
Daily schedule, 520
Dangerous behavior, 207, 315, 322
Data, 1-3, 5-7, 9-12, 15-20, 26, 29, 31-32, 37, 39, 41, 43, 65, 69-71, 73, 75, 79, 85, 90, 94, 96-104, 109-119, 121, 123-144, 146-177, 180-195, 197-202, 204, 208-210, 212-213, 215-218, 221-225, 227, 229-237, 239-243, 246-257, 259, 262-274, 277, 279-280, 287, 293-294, 296-298, 300, 305, 307-308, 316, 320-321, 333, 337, 340, 342, 358, 362, 364, 367, 370-371, 373, 376-377, 379, 381-382, 384, 387-388, 414, 430-433, 440-443, 445-448, 453, 456, 463, 471, 474, 476-477, 488-491, 494, 497, 503, 507, 510, 513, 515-520, 523, 528, 530, 532, 538, 561, 563, 567-570, 572, 574, 576, 578, 580-581, 584, 598, 601-602, 606-607, 615-617, 619-620, 627, 645, 650, 653, 657-659, 672-673, 681-683, 687, 689-690, 695, 700-701, 704, 706, 713, 715, 717, 719-720, 723

Data:, 135-136, 169, 615
  validity, 7, 10, 15-17, 19, 79, 85, 123-127, 142-143, 180, 182, 184, 193-194, 215, 242, 246, 250-253, 256-257, 259, 262-267, 270-271, 273-274, 370, 387, 448, 520, 616-617, 687, 689, 700, 720

Data analysis, 119, 121, 269
Data collection, 65, 103, 109-110, 115-117, 119, 121, 129, 134, 155-157, 190, 195, 257, 367, 388, 442, 517, 672, 719
  applied behavior analysis, 65, 110, 115, 155-157, 257, 719
  during baseline, 156-157
Data paths, 113, 150-153, 158, 168-169, 171, 174-177, 194, 209-210, 213, 216, 218, 222, 227, 267, 269

Data points, 2, 5, 15-16, 19, 98, 149-150, 153-154, 157, 162, 164-165, 167-174, 176-177, 189-193, 195, 197-198, 204, 209-210, 221-223, 230, 233-234, 237, 243, 293, 337, 367, 377, 445, 489, 523, 602, 653, 658

Databases, 93, 593
Dating, 26, 368
Day care, 472-473, 479, 694
Daydreaming, 598, 643
Death, 31, 53, 588, 618
Debt, 345, 572
DEC, 160
deception, 673
Decision making, 94, 118, 127, 140, 146, 562, 666-668, 676-677, 683-684, 702, 710
  ethical, 666-668, 676-677, 683-684
  school-based, 710
Decision rules, 466, 496, 684
Decision-making, 162, 368, 379, 670, 677


  curriculum and, 670
Decoding, 246
Deduction, 585
Definition, 2, 8, 15, 17-18, 22-43, 45-47, 51, 53, 58, 61, 68-69, 76, 85-89, 91, 93, 95, 98, 110, 112, 118, 120, 129, 132-133, 144, 161, 163, 180, 185, 188, 206, 246-247, 255, 267-268, 273, 278, 309, 312, 322, 327, 329, 331-332, 334-335, 345, 368, 372, 375, 381-382, 386, 389, 391, 405, 413-414, 425, 427-428, 432, 435, 452, 455, 462, 466, 469-470, 479, 490, 501, 537-539, 559, 568, 571, 575, 585-586, 599, 624, 660, 666, 672, 680

Definitions, 12, 60, 85-90, 118, 128-130, 132, 134, 142, 255, 306, 334, 393, 448, 470, 501, 522, 539-540, 543, 585-586, 623-624, 661, 680, 700

demonstrations, 31, 36, 234, 272, 293, 409, 594, 674
density, 156-157, 355, 713
Dependence, 471, 507
  psychological, 471
Dependency, 62, 215, 290, 676
Depression, 277, 589
Depth, 95, 126, 312, 386
Description, 7, 15-16, 19, 23-25, 27, 31, 36-37, 42-43, 47, 52, 69, 73, 87-89, 93, 102, 120, 132, 135, 140, 160, 168, 175, 177, 179, 181, 194, 202, 239, 258-259, 272, 274, 280, 282, 307, 309, 312, 315, 381, 386, 401, 435, 482, 496, 503, 520, 537, 556, 559, 581, 588, 716

Descriptions, 3, 27, 33, 50, 72, 81, 88, 90, 119, 129, 148, 162, 187, 255-256, 272, 274, 283, 299, 410, 505, 520, 537, 539, 603, 681

Descriptive data, 75, 510, 516, 518
Descriptive research, 194
Descriptors, 134
Desensitization, 17, 583, 612, 614, 620, 727-728
Design, 1-3, 5, 7-8, 11-12, 15-16, 20, 34, 37, 39-41, 58, 64, 80, 88, 94, 101, 164, 168-169, 174, 178-180, 182-187, 190-191, 193-195, 196-219, 220-244, 245-246, 248-254, 257, 263-264, 267-274, 282, 292, 310, 316, 318, 320-321, 353, 362, 414, 455, 457, 489, 499, 523, 528, 530, 533, 562, 573, 581, 590, 601, 613, 615-616, 629-630, 632-633, 636, 644, 646, 688, 690, 692, 695, 698, 700-702, 704, 710-711, 714-715, 725-726

Designs, 2, 13, 15-16, 20, 37, 43, 178, 182-186, 191, 194-195, 196-219, 220-244, 246-250, 253, 263-264, 269, 273-274, 293, 304, 364, 574, 586, 591, 690, 696, 699-701, 704-705, 712, 719
Determinism, 5, 16, 22, 25, 27, 42, 179, 181, 194, 511, 594

Development, 9, 12, 19, 23-24, 27-29, 31, 34, 37, 39-40, 42, 54, 57, 78-80, 82-85, 90, 93, 97, 101, 187, 236, 240, 263, 267, 270-272, 286, 293, 309, 315, 322, 325, 334, 342-343, 354, 370-371, 380, 409, 411-418, 424-425, 427-428, 436, 439, 465, 482-483, 500, 505, 513, 516, 520, 538, 540, 546, 550, 553, 561-563, 567, 591, 594, 616, 633, 649, 664, 667, 672, 674, 681, 685, 687-689, 692, 696-697, 699, 703-705, 707-708, 711, 713-714, 716, 722-723
  screening, 703
Developmental delay, 385, 524
Developmental disabilities, 34, 76, 79, 88, 94, 99, 102, 151, 156, 198, 214, 229, 263, 284, 287, 295, 300, 317, 353-354, 365-366, 382, 385, 429, 431, 437-438, 442, 456, 460, 487, 489, 491, 493-494, 499, 501, 505, 508, 550-551, 555, 580, 590, 601-602, 631, 653, 656-657, 665, 667, 671, 676-677, 686-689, 691-692, 694-706, 708-712, 714-715, 717-728

Devices, 24, 32, 73, 108-109, 112, 120, 128-129, 134, 148, 256, 507, 549, 603-604, 653-654, 694
Dewey, John, 591
Diabetes, 240, 695, 728
Diabetes mellitus, 728
Diagnosis, 280, 520, 531, 676, 717, 719
Diagrams, 62, 348, 710
Dialogue, 55, 717
Differences, 1, 41, 58, 63-64, 66, 99, 114, 131, 137, 151, 157, 167, 174, 208-209, 212, 215-217, 230, 246, 253, 300-303, 450, 500, 524, 624, 627-628, 645, 666, 682, 705, 717

Differential reinforcement, 2-6, 8, 10, 15-17, 20, 59, 100, 103, 250-251, 304, 310, 324, 334-336, 343, 353, 362, 366-367, 369, 373, 379-380, 385, 410, 412-413, 416, 424-425, 454-456, 458-459, 466, 477, 481-497, 500-501, 503, 506, 509, 524, 540, 549, 551-552, 569, 581, 655, 666, 674, 690, 693-694, 707-708, 715-718, 726
Differential reinforcement of alternative behavior (DRA), 5, 8, 304, 310, 366, 385, 481-483, 506, 509
Differential reinforcement of incompatible behavior (DRI), 5, 366, 481-483

Differentiation, 15-16, 87, 454-455, 466
Digestive system, 50
Digitized speech, 507
Dignity, 28, 665, 669, 676, 678, 680, 684, 721
Diligence, 27, 598
Dimensions, 10, 17, 22-23, 36, 38, 43, 46-48, 61, 65, 69, 79, 92-93, 95, 103-104, 119-121, 122, 149-152, 156, 176, 255, 261, 268, 316, 341, 411-412, 419, 436, 450, 456-457, 463, 465, 605, 616, 635, 641, 643, 670, 683, 687, 718-719

Diminishing returns, 699
Direct assistance, 419
Direct instruction, 75, 442, 593, 634, 659, 662, 667, 688
Direct observation, 5, 29, 70, 73, 90, 333, 516, 519-520, 641, 652, 662, 704
Direct observations, 69, 73, 82, 89, 521, 529, 682
Direct replication, 6, 15, 186, 245, 265-266, 274
Direct supervision, 643
Directions, 72, 75, 80, 84, 100, 262, 272, 333, 378, 398, 417, 485, 540, 593, 614, 641-643, 690, 695, 704-705, 708, 715

Directors, 29, 640, 668
Disabilities, 12, 34, 39-40, 76, 79, 82-83, 88, 94, 99, 102, 108-110, 114, 125, 151, 154, 156, 183, 198, 201, 211-212, 214, 229, 250, 258-263, 284, 287, 295-297, 300, 316-317, 353-354, 359, 365-366, 382, 385, 418, 429, 431, 436-439, 442-448, 455-457, 460, 472, 486-487, 489, 491, 493-494, 499, 501, 505, 507-508, 537, 550-551, 555, 568-569, 572, 576, 580, 590, 599, 601-603, 605, 613, 625-626, 631, 636-637, 640-641, 648, 650-651, 653-654, 656-660, 665, 667, 671, 676-677, 685-706, 708-712, 714-715, 717-728
  ADHD, 296, 688, 696
  developmental delays, 505, 650, 691
  intellectual, 654, 711, 718, 721
  visual impairment, 472
Disability, 12, 39, 83, 280, 437, 608, 676, 680, 689, 699, 701, 706, 724

Disabled students, 685, 701, 724
Discipline, 23, 34-37, 45, 85, 135, 179, 185, 272, 282, 361, 370, 501, 561, 623, 661, 665, 681, 706
Disclosure, 10, 675
Discourse, 541
discrimination, 4-6, 17, 57, 78, 102-103, 162, 208, 217, 345, 393, 408, 410-413, 416, 419, 421, 424, 450, 462, 499, 540, 544, 622, 636, 641, 694, 712, 718, 720, 722, 724

Discussion, 27, 34, 40, 54-55, 69, 96, 105, 115-116, 130, 164, 179, 232, 236, 246-248, 253, 264, 272, 274, 312-313, 356, 367, 376, 395, 442, 447, 512, 524, 538-539, 543, 551, 566, 584, 593-594, 607, 613, 624, 667, 675, 690
  guided, 40, 594
Discussions, 34, 40, 57, 85, 253, 258, 280, 312, 369, 381, 649, 677
  conceptual, 40, 280
  issue, 253, 258, 369, 677

Displacement, 2, 8, 45-46, 65, 86, 97
Disruptions, 99, 104, 321, 356, 360, 387, 469, 471, 478, 483, 499, 516, 576, 697, 719
Distortions, 267, 549
Distractibility, 623
Distracting, 450, 461
Distraction, 152, 463-464
Distractions, 417, 432, 598
Distress, 447-448
Distribution, 15, 162, 164, 176, 291, 299, 571
Distributions, 115, 316-317, 519, 704
Diversity, 65, 79, 204-205, 308, 629, 698
Division, 28, 96-97, 160, 170, 190, 231, 562, 602, 630, 710
Division on Developmental Disabilities, 602
Divisors, 98, 101, 160

Doctrine, 368, 373
Documentation, 675, 678
Domain, 30, 40
Doubling, 98, 159-160, 173
Doubt, 13, 16, 22, 25-27, 37, 42, 140, 174, 179, 199, 237, 256, 283, 356, 371, 589, 683
Down syndrome, 531
Drawing, 5, 93, 98, 116, 132, 164, 167-168, 171-172, 174, 177, 227
Dressing, 451, 464, 504, 506, 596, 618, 656-657
Dressing skills, 451
Drill and practice, 282
Drowning, 23-24
Drug abuse, 722
Drugs, 24, 342, 355, 369
  abuse, 342
Dube, 116, 413, 421, 685, 711
Durability, 15, 624
Duration, 5-7, 10-11, 13, 17-20, 37, 46, 54, 72, 74, 88, 92, 94-104, 106, 109-115, 117, 119-120, 122-123, 126, 128, 132, 136-137, 144, 152-153, 155-156, 178, 184, 186, 193, 199, 207, 243, 258-259, 268, 278, 297-298, 306-307, 316, 326, 330, 332-335, 337, 343, 350-351, 355, 360, 364, 369, 380-381, 389, 409, 419, 424, 429, 455-459, 463, 466, 471, 474-475, 488-489, 491, 493, 495, 503-504, 506, 529, 568, 574, 605, 643, 662, 691, 693, 725

Duration recording, 99-100, 113-115
Duty to warn, 679
Dyads, 728
Dynamics, 70, 465, 712

E
Ear, 49, 409, 439-441, 521, 628, 723
Ear infections, 521
Early childhood, 686, 693, 697, 723, 727
Early childhood special education, 686, 693, 697, 727
Early intervention, 706, 710, 723, 726
Earth, 691
Eating skills, 660
Ecological assessment, 6, 68, 75, 90, 701
Economics, 437, 685
Economy, 10, 14, 18, 49, 53, 59, 130, 254, 290, 488, 558-582, 632, 686, 690-691, 697-698, 701, 716

Education, 1, 22, 28-29, 31, 34-35, 39, 42, 44, 57, 68, 73-75, 78, 80, 82-83, 92, 94-95, 101, 114, 116, 122, 146, 156, 178, 181, 183, 196, 220, 245-246, 259, 261, 264, 267, 276-277, 280, 292, 311, 320-321, 324, 344, 356, 374, 376-377, 390, 408-410, 413, 422-424, 426, 434, 436, 438-439, 443-444, 454, 457-458, 463, 468, 481-482, 492, 498, 505, 510, 526, 536, 550, 558, 569-571, 573-574, 576, 583-584, 589-591, 594, 602, 604, 608, 619, 622-623, 626-627, 632, 636-637, 642, 647, 649-651, 653, 656, 658, 664-665, 668-671, 674, 677, 682, 684, 685-694, 696-713, 715-728
  at home, 116, 492, 574, 591, 627, 632, 670
  evidence-based practices, 267
  exemplary, 267
  for teachers, 75, 95, 594, 651, 685, 701, 709
  funding for, 34
  global, 591
  perspectives on, 34, 623, 686, 708, 712, 717, 724-726
  prevention and, 39, 727
  records, 1, 73, 156, 602, 684
  right to, 95, 668-670, 677, 684, 686, 688, 725
  supports, 576, 707, 709

Education programs, 594
Education Week, 706
Educational administration, 721
Educational goals, 217
Educational research, 383, 705, 725
  applied, 383, 705, 725
Educational technology, 346, 699
  effective use of, 346, 699
Educators, 75, 79, 83, 85, 95, 210, 282, 381, 591, 695
Effective instruction, 69, 318

  generalization, 69
  maintenance and, 69
Effective teaching, 69, 695
Effectiveness, 1, 3, 5, 7, 9, 11, 14-15, 17-19, 26-27, 37-38, 59, 85, 88-89, 94-95, 120, 179, 183, 200, 204, 212-213, 217-218, 224, 227, 236, 238, 244, 246-247, 249-250, 253, 266, 270, 281-285, 289-291, 300-302, 304, 306, 309-310, 316, 333, 345-346, 349-352, 354-356, 361, 363-364, 366-369, 371-372, 377, 381, 387, 391-401, 403, 405-407, 410, 416, 435, 463, 469, 476-477, 479, 484-485, 494, 496, 500, 502-504, 506-508, 511, 513, 524, 551, 568, 577, 581, 586, 588, 590, 595, 601-602, 606-607, 611, 613, 615, 642, 650, 657, 668-669, 674, 682, 694, 698-699, 701, 708-709, 712-713, 716-718, 720, 722, 725

Efficiency, 2, 41, 78-79, 94, 102, 119-120, 154, 215, 417, 449, 455, 460, 465-466, 478, 480, 506, 590
Effort, 25, 33-34, 38, 60, 64, 75, 80, 101, 119, 135, 148, 164, 169, 174, 185, 187, 194-195, 197, 206, 236, 255-256, 258, 265, 267, 270-271, 273-274, 293, 301, 306, 342, 349, 356, 359, 361, 387, 396, 398, 401, 406, 473, 476, 480, 484, 496, 500, 516, 561-562, 572, 575, 591, 597, 615, 623-624, 627, 633, 642-643, 648-649, 652, 660, 663, 697, 703, 716, 728

Ego, 9, 32
Eighth grade, 718
eKidTools, 592
Elaborating, 549
Electric shock treatment, 708
Electrical shock, 665
Elementary school, 110, 183, 296, 330, 577, 581, 604, 611, 687, 695, 697, 704, 707, 719-720
Elementary students, 113, 156, 211, 277, 377, 603, 605, 650, 653, 689, 692, 720, 728
E-mail, 62, 545, 681
Embarrassment, 240
Emotion, 471, 548, 697
Emotional and behavioral disorders, 613, 704, 723
Emotions, 395, 547, 698

  anger, 395
Empirical data, 520
Empiricism, 6, 16, 22, 25, 27, 42, 93, 120, 179, 710
Employers, 36, 95, 120
Employment, 540, 691
Empowerment, 680
  goal, 680
Encouragement, 23, 268, 325, 327, 580, 594, 612, 616
Endowment, 49, 584
Energy, 15, 17, 46-47, 65, 180, 256, 295, 409-410, 574, 595
  light, 65, 409
  sound, 409

Energy conservation, 256
Engagement, 15, 72, 93, 99, 108, 113, 240, 267, 291, 297-298, 310, 333-334, 336-337, 364, 502, 511, 610, 654-655, 680, 699, 705, 710, 724
English, 131, 284, 409, 462, 539-540, 542-543, 578, 688-689, 693
  Old, 462
  Standard, 131, 540

English language, 543
Enrichment, 73, 718
Enthusiasm, 204, 308, 341, 461, 629
Environment, 1-2, 4, 6-12, 14-15, 30-33, 35, 38-39, 44-48, 51-54, 56, 58-61, 63, 65-66, 69-70, 73, 75-77, 79, 83-87, 90, 93, 115-116, 118-119, 121, 129-130, 134, 140, 150, 153, 179-181, 184-185, 187, 189-191, 194, 202-203, 206, 216, 221-222, 228-229, 233, 235, 237, 251, 259, 263-264, 270-271, 273-274, 289, 297, 309, 313, 315, 326, 337, 347, 360-361, 366, 368-369, 372, 375-378, 380-381, 389, 394, 396, 398, 404-405, 407, 409, 417, 427, 432, 436, 439, 449-450, 453, 462, 469, 477-478, 484, 491, 501, 504, 508-509, 511, 514, 516, 519, 522, 533, 539-540, 546-547, 551-552, 556, 562, 566, 575, 584, 586-587, 589-590, 594-598, 617, 619, 624-627, 631, 636, 639-641, 648, 650, 652, 656-657, 659-661, 663, 668-669, 675, 680, 684, 695-696, 716, 723
  arranging, 405, 562, 680
  home, 75, 116, 185, 297, 313, 315, 381, 508, 533, 546, 594, 596, 627, 631, 659-660
  least restrictive, 368-369, 375, 680
  outdoor, 376

Environmental control, 409
Environmental factors, 52, 194, 722
Equal intervals, 10, 149, 159, 165, 176, 335, 493, 497
Equal opportunity, 56, 127, 244

Equating, 57
Equations, 63, 444, 465, 567
Equipment, 83, 119, 129, 168, 230, 326, 335, 353, 403-404, 594, 670, 681
Error, 3, 11-12, 19, 57, 80, 95, 124-125, 128-132, 137, 143, 168, 212-213, 235, 245, 269-270, 274, 318-320, 323, 360-361, 415-416, 418-420, 425, 438, 442, 477, 513, 589, 610, 613, 650, 674, 688, 700, 716, 718, 721, 728

Error correction, 319, 323, 416, 589, 650, 688
Errors, 64, 101, 128, 131-132, 143, 212-213, 230, 269-270, 283, 305, 318, 360-361, 419, 443, 562, 603, 637, 639-641, 651, 662, 670, 696, 721, 724
  correcting, 128, 131
  grammar, 361

Escape, 2, 6-8, 14, 24, 34, 40, 56, 64, 69, 80, 186, 198-199, 257, 259, 261, 311-312, 314-318, 320-323, 340, 354-355, 363-364, 367, 370, 372, 378, 382, 394, 401, 406, 410, 468, 471-472, 474-475, 477-479, 482-484, 487, 500-502, 504-506, 509, 511-516, 518, 521-529, 532-533, 553, 584, 598, 603, 609, 616, 674, 686, 693, 695, 698, 705-706, 714-716, 718, 722

Estimation, 170, 633, 727
Ethical and legal issues, 571
Ethical codes, 7, 95, 369, 664, 667, 671
Ethical considerations, 65, 68, 70, 178, 196, 220, 241, 245, 322-323, 345, 367, 373, 389, 664-684
Ethical guidelines, 671
Ethical issues, 215, 312, 367, 581, 665-666, 669, 675, 684, 700, 723, 725
  privacy, 669, 684

Ethical principles, 668-669, 681, 684, 685-686, 713
Ethical responsibility, 373, 673-674
Ethics, 7, 27, 234, 368, 370, 479, 664-669, 671, 682-683, 685-686, 692, 696, 700, 706, 710, 713, 717, 721
  APA, 669, 685-686
  codes, 7, 370, 664, 667-668, 671
  codes of, 7, 370, 664, 667-668
  ethical dilemmas, 665
  integrity, 669
  principle, 368, 370, 669
  professional boundaries, 683
  value, 7, 27, 700

Ethnic, 636
Ethnic groups, 636
Ethnicity, 253, 636
Ethologists, 110
Etiology, 723
Evaluation, 16, 69, 82, 85, 88, 94, 96, 113, 120, 133, 153, 178, 196, 202, 220, 231, 238, 244, 245, 250, 259, 271-273, 333, 343, 352, 363-364, 370, 373, 381, 389, 437, 447, 453, 474, 478, 521, 534, 561-562, 567, 574, 583, 601-602, 606, 608, 612, 616, 618-619, 669-670, 672, 681, 686, 689-690, 693, 695-705, 707-708, 712-719, 722-724, 726, 728

Evaluations, 264, 364, 606-607, 679
Event recording, 7, 18-19, 92, 99, 108-109, 113, 115, 120, 135-137, 140, 144
Events, 1-2, 5, 9-11, 14, 18, 22-27, 30-33, 36-37, 40-43, 45-48, 51, 54, 59-62, 65, 70-73, 75, 78, 86, 90, 93-96, 100, 108, 110, 119-120, 125, 128, 133-135, 143-144, 150, 167, 169-170, 179-181, 185, 187, 189, 193-195, 202, 209, 215, 240, 253, 280-281, 283, 285, 291, 293, 295, 312, 315-316, 320, 322-323, 325, 337, 347, 349-350, 353, 367, 372, 377, 391, 393, 395, 397-401, 406, 410, 416-417, 424, 499-500, 508, 511-514, 516, 518-522, 532-533, 536, 539, 542, 544, 547-549, 552, 554, 557, 571, 584, 586, 589, 594-595, 597, 603, 611, 614-615, 619-620, 623, 627, 630, 641, 669, 674, 683, 691, 709, 712
  intersection, 10, 86
  stimulus, 1-2, 5, 9-11, 14, 18, 27, 30, 40, 42, 45-48, 51, 54, 59-62, 65, 78, 90, 96, 100, 110, 120, 280-281, 283, 285, 295, 312, 315-316, 322, 337, 347, 349-350, 353, 367, 372, 377, 391, 393, 395, 397-401, 406, 410, 416-417, 424, 499-500, 508, 512, 516, 520, 542, 544, 547-549, 552, 554, 557, 586, 595, 597, 614, 619, 627, 630, 641, 691, 709

  subsequent, 5, 23, 45, 60, 70, 100, 120, 170, 189, 195, 291, 512, 516, 518, 520, 586, 595, 615, 619
Evidence, 2, 17, 24, 27, 29-30, 40, 64, 68, 94-95, 103, 120, 126, 142-143, 181-182, 188, 193, 207, 209-210, 213, 221, 233, 252, 255, 258, 262, 267, 270-271, 288, 307, 355, 367, 382, 397, 400, 428, 442, 446, 511, 515, 561, 568-569, 574, 591, 609, 612, 637, 646, 665, 671, 680, 682, 690, 702, 713

Evidence based, 94, 120, 682
Evidence-based interventions, 94
Evidence-based practice, 94, 702
Evidence-based practices, 267
Evolution, 13, 16, 19, 49, 53-54, 59, 65, 289, 400, 584, 689, 725
Exceptional, 173, 305, 438, 443, 485, 546, 584, 594, 605, 685, 690, 692-694, 696-697, 701-702, 705, 707-710, 712, 714, 716-717, 719-722, 724, 726-728

Exceptions, 50, 164, 168, 170, 342, 479, 525, 559, 566
Exclusion, 7, 13, 374-375, 377-378, 380-381, 389, 719
Exemplars, 103, 412-413, 428, 635, 702, 713
Exercise, 42, 69, 81, 86, 128, 180, 326-328, 356, 359-360, 397, 418, 543, 575, 580, 585-586, 589, 595, 608, 616-618, 692-694, 705, 708

Exercises, 305, 457, 545, 726
Exhibits, 17, 187, 666, 681
Expectancy, 179, 676
Expectations, 6, 12, 25, 79, 128, 130, 143, 164, 254-255, 267, 273, 567, 569, 576, 593, 601, 611, 671

  classroom behavior, 254
Experience, 16, 36, 39, 60, 64-65, 76, 139, 190, 202, 206-207, 232, 256, 260, 274, 297, 345, 355, 362, 364, 369, 395, 409, 428, 431, 437, 455, 460, 465, 474-475, 562, 567, 591, 612, 618, 640-641, 661, 666, 671, 673, 680-681, 684, 694
experiences, 9, 48, 60, 64, 83, 128, 305, 345, 350, 400, 448, 463, 587, 619, 634, 641, 649, 666, 674-675, 683

Experimental control, 4, 7, 11-12, 26, 37, 41, 43, 174, 178-180, 182, 184-185, 189, 191, 194, 197, 202-207, 209-210, 213, 217-218, 221-223, 234-244, 248-249, 251-252, 270, 272-273, 278, 482
  demonstration of, 179-180, 182, 197, 205-207, 209, 218, 223, 235-238, 240-244, 248-249, 273
Experimental design, 1-3, 7-8, 11-12, 15, 94, 174, 178-180, 182, 186, 191, 193-195, 202, 208, 212, 218, 221, 224, 237, 246, 249-253, 257, 264, 267, 269-270, 272-273, 353, 533, 590

Experimental group, 246-247, 254
Experimental research, 155, 257
Experimentation, 16, 23, 25, 27, 40, 42-43, 45, 93, 148, 179, 182, 186, 197, 216-217, 248, 264-266, 270, 537, 710
Experiments, 8, 14-15, 20, 25-26, 31, 39-43, 50, 59, 63, 96, 165, 170, 174, 179-180, 182-183, 185-187, 191, 193-194, 197, 201-202, 209, 215, 221, 224, 235, 237, 239-240, 246, 249-250, 253-255, 263-268, 270-271, 273-274, 277, 293, 342, 356, 409, 492, 590, 601, 616, 682, 720

Expert, 262, 437, 449, 676, 695
Experts, 259, 261, 274, 437, 452, 463
Explanation, 26, 29, 31-33, 52, 135, 160, 162, 221, 280, 286, 304, 351, 361, 366, 470, 584, 645, 712

Explicit teaching, 256
Exploratory play, 680
Expressive language, 414, 539, 550
Expressive Vocabulary Test, 551
Expulsion, 7, 667-668
Expulsions, 445
Extensions, 200, 207, 381, 464-465, 544, 546-547, 713
Extinction, 4-8, 14-17, 44, 50-51, 54, 57, 59, 66, 186, 232, 266, 289, 303-305, 310, 325, 331, 336, 340, 343, 346, 348, 353, 357-358, 366-367, 369, 373, 379, 381-382, 392-393, 396, 401-407, 411-412, 455-456, 459, 463, 466, 468-480, 482-483, 485-486, 490, 497, 499-502, 504, 507-509, 512, 522, 524, 538, 553, 611, 624, 628, 644-646, 675, 688, 693, 696, 698-699, 702-704, 706-707, 710, 715, 717-719, 722, 726-727

Extra credit, 673


Extraneous variables, 7, 26, 185, 195, 236-237, 253
Eye contact, 266, 289-290, 356, 376, 429-430, 455, 636, 652, 699
eyes, 49, 201, 314, 472

F
Fables, 698
Faces, 111, 605, 666, 704, 708
Facets, 623
  conditions, 623
facial expression, 402, 641-642
Facilitated communication, 94, 682, 698, 727
Facilitating, 116, 333, 503, 555, 641-642, 722, 726
Facilities, 598
Factoring, 63
Factors, 1, 3, 8, 23-26, 32-33, 39-40, 42, 52, 63, 65, 68-69, 71, 75, 83, 97, 123, 128, 140, 143, 170, 177, 182, 191, 193-195, 208, 239, 241-242, 247, 251-253, 265, 269-271, 273, 316, 324, 345, 350, 362, 366, 380, 409, 416, 425, 435, 445, 450, 453, 460, 487, 494, 499, 520, 560, 587, 613, 619, 633, 635, 642-643, 656, 659, 665, 676, 680, 708, 722
Facts, 23, 29, 42, 63, 81, 97, 147-148, 150, 179, 194, 327, 330, 333, 339, 632, 635, 655, 689, 709, 711

Fading, 7, 40, 141, 153, 227-229, 408, 419, 421-425, 445-446, 459, 466, 479, 486-487, 492, 551-554, 674, 686, 689, 699, 704, 706, 712, 715-716, 718-720
Failure, 38, 94, 168, 235-237, 240, 255, 265, 315, 317, 371, 373, 387, 403, 499, 537, 551, 555, 567, 588, 593, 616, 623, 628, 697, 704, 720, 727-728

  repeated, 240, 265, 387
Fairness, 267
Falls, 168, 177, 492, 594
Families, 76, 95, 563, 679, 693, 706, 727
  needs, 76
  step, 563
Family, 63, 69-71, 90, 133, 201, 236, 258, 296, 325, 339, 380, 384, 386, 499, 561, 563-565, 597-598, 623, 631, 660, 666, 670, 676-677, 683-684, 710, 714-715, 726

Family dynamics, 70
Family members, 71, 90, 258, 296, 325, 597, 660, 676-677, 683
Family Process, 715
Fast-food restaurants, 259, 625, 641
Fathers, 36
Fax, 160, 162, 681
fear, 17, 395, 459, 547, 614, 620, 665
Federal statutes, 370
Feedback, 104, 128-130, 133, 143, 148, 158, 176, 183, 205, 211, 256, 258, 268, 282-283, 307-308, 330, 361, 370, 381, 383, 420, 443, 496-497, 570, 579-580, 589, 591, 601, 608, 616, 626, 646, 648-649, 651-652, 672, 690-691, 694, 696, 702, 707, 709-711, 715, 722, 724-725

  and practice, 129, 143, 256, 282, 649, 702, 710
  evaluative, 601
  facilitative, 710
  general, 183, 256, 307-308, 361, 443, 570, 648-649, 651, 691, 702, 709, 711, 722
  immediate, 148, 176, 308, 420, 496, 591, 616, 646, 690-691
  mediated, 256, 709
  seeking, 589

Feeding, 50, 250, 256, 287, 317-318, 323, 362, 421, 618, 668, 692, 713, 715
  tube, 317
feelings, 14, 27, 33, 65, 87, 400, 546, 584, 589, 595, 603, 612, 706
Fees, 669, 672, 678
Females, 104, 164, 414, 632
Fiber, 15, 46
Fiction, 7, 22, 32, 587, 607, 617
Fidelity, 14, 19, 37, 83, 245, 255-257, 273-274, 486, 660, 669, 702, 727
Field testing, 582
Fifth grade, 330, 681
Fighting, 293, 355, 386, 561, 572, 686, 725
File, 678
Files, 598, 678
Findings, 7, 15, 17, 23-27, 34, 36, 40, 42, 45, 63, 69, 148, 170, 180, 184, 191, 193, 200, 246, 255, 263-267, 270-271, 274, 277, 285, 291, 316, 342, 350-351, 354, 385, 562, 617, 673-674, 682, 697
  presentation of, 15, 24, 63, 316

fire, 47, 247, 545, 552
Fish, 40, 565, 584
Fixed schedules, 475, 480, 648
Flash cards, 98, 307, 318
Flexibility, 229, 241, 246, 249, 270, 273, 375, 559, 580
flight, 115, 444, 711, 716
Floors, 116, 242, 601
Flow, 606, 620, 641
Fluency, 63, 88, 97, 126, 128, 643, 656, 689, 693, 695, 703, 706, 711, 720
  calculating, 97
  oral reading, 126, 128, 695
Focus, 6, 15, 23, 38, 40-41, 69-70, 80, 89, 100, 123-125, 150, 167, 169, 184, 197, 204, 214, 229, 246, 264, 273, 282, 315-316, 380-381, 436, 460, 494, 513, 522, 554, 567, 573, 589, 593, 613, 624, 659, 671, 687, 690, 692, 698-701, 703-705, 709, 715-716, 724, 726-727
Folders, 571, 625
  work, 625

Food, 1, 3, 7, 14, 19, 30-32, 49-50, 53, 57-60, 66, 81-82, 102, 183, 226, 250, 259, 263, 277, 281, 283-284, 286-287, 289-291, 302, 312, 315, 317-318, 347, 349-350, 360, 385, 391-399, 401, 403-407, 409, 412, 421, 429, 435, 442, 444-445, 447-449, 459, 471-472, 492, 494, 501, 540, 546, 548, 552, 568, 570-571, 596-597, 605, 614, 618, 625-626, 637, 639-642, 652-653, 657, 685, 693, 695, 698, 704, 709, 714-715, 718-719, 728
  cooling, 394
Food preparation, 442, 698
Forgetting, 470, 633
Formative assessment, 94, 120, 259
Forms, 3-4, 7, 9, 11-13, 16, 29, 32, 43, 52-53, 56, 58, 61, 63, 66, 72, 85-87, 101, 110, 129, 133, 148, 153, 204-205, 221, 224, 227, 243, 283, 287-288, 291, 293, 307, 315, 317-318, 320, 323, 350, 355-356, 361, 377, 380, 382, 388, 396, 402, 405-406, 409, 413, 418, 427, 431, 455, 469-470, 478-479, 482, 506, 512, 520, 523, 533, 540-541, 548, 551-552, 557, 571, 585, 595, 603-604, 608, 623, 628-629, 632-633, 636, 648, 653-654, 661-662, 678, 698

Formula, 135-137, 140, 163, 171, 305, 701
Formulas, 249, 305
Forward, 3, 8, 18, 39, 286, 308, 359, 434-436, 438, 442-443, 446, 448, 452-453, 631, 711
Forward chaining, 3, 8, 18, 434-436, 438, 442-443, 446, 452-453, 711
Foundations, 34, 419, 538, 687, 700
Fowler, 75, 97, 590, 599, 646, 650, 654, 687, 696, 705, 723
Fractions, 625
Frames, 604, 710
Free play, 281, 515, 523, 526, 528-529, 532
Free time, 26, 83, 282, 290, 295-298, 312-313, 384-385, 387-388, 488, 492, 559, 561, 568, 573, 576, 580-581, 590, 608, 685, 704

Free writing, 100
Freedom, 28, 413, 680, 696, 721
Frequency, 1-3, 5-8, 10-15, 17, 19, 31, 46, 48-49, 51, 53-59, 61-63, 65-66, 70, 72-73, 81, 90, 92-93, 96, 101, 104-105, 109, 112-113, 115, 117, 120, 127, 130, 132, 138-141, 149, 151, 156, 159-163, 173, 176-177, 184, 187, 198, 202, 204, 206, 213, 226, 229, 236, 243, 250, 253, 258, 278-281, 283-285, 287-289, 291, 294, 298, 303-304, 306, 309-310, 312-313, 316-317, 338, 342, 345-352, 356-361, 366-367, 369, 372, 375-376, 381-382, 388-389, 391-401, 405-406, 409-410, 424, 427, 455-457, 459, 462-463, 466, 469-470, 473-475, 477, 480, 484, 486, 491, 494, 496, 499, 516, 522, 532, 541, 546, 551, 553-554, 575, 578, 585-586, 588-589, 595, 597, 600, 606, 608-611, 613-614, 620, 624, 629, 643, 650, 652, 657, 670, 686, 692, 701
  of sound, 104
Frequency distributions, 316-317
Frequency polygon, 149
Frequency recording, 516
Frequently asked questions, 571
Freudian, 549
Fromm, Erich, 286

Frustration, 594
Fun, 229, 461, 555, 612, 649
Functional analysis, 8, 29, 46, 48, 153, 194, 198, 202, 207, 226, 287, 320, 357, 382-383, 472-474, 478, 483-484, 486, 489, 499, 501-502, 510, 514-516, 518, 521-523, 525-526, 528-529, 532-534, 674, 693-694, 703, 705-707, 710, 714-715, 717, 723, 726-727

Functional assessment, 9, 69, 72, 74, 510-511, 514, 516, 519-521, 524, 526, 529, 533, 604, 608, 672, 686, 692, 694-695, 705, 708-709, 714-715, 723
Functional Assessment Checklist for Teachers and Staff (FACTS), 709

Functional assessments, 507
Functional behavior analysis, 215, 353
Functional behavior assessment, 5, 8, 69, 226, 266, 363, 470-471, 503, 507, 510-534, 688
Functional communication training, 1, 8, 151-152, 159, 257, 260, 262, 266, 367, 484, 498, 501, 506-509, 562, 690, 693-696, 699, 706, 720, 725
Functional communication training (FCT), 8, 266, 367, 484, 498, 506, 509

Functional relationships, 62
functional skills, 669, 680
Functioning, 34, 69, 72, 82-84, 90, 204, 296, 348, 362, 366, 394-395, 402, 407, 410, 414, 420, 543, 568, 571-573, 650, 659
Functions, 2-4, 8-9, 12-13, 19, 46, 48, 52, 56, 59-61, 65-66, 69, 89, 93, 120, 152-153, 179, 194, 221-222, 249, 287, 289, 296, 303, 306-307, 310, 313, 349-350, 356-359, 372, 394-395, 397-400, 403, 409-410, 424, 452, 499-500, 508, 511-513, 516, 520-522, 524, 526, 529, 531-533, 537, 539, 543, 549, 557, 591, 596, 674, 676, 700, 703, 705, 711, 727
Funding, 34, 224, 594
Furniture, 577, 639, 643
Fusion, 630

G
Galvanic skin response (GSR), 49
Gambler, 80
  compulsive, 80
Games, 78, 81, 98, 224-225, 296-298, 313, 459, 571-572, 598, 688, 726
Gastrostomy tube, 471
Gaze, 298
Gender, 253, 636
General education, 78, 156, 183, 259, 261, 267, 443, 526, 570, 574, 576, 627, 632, 636-637, 649-651, 653, 686, 692-693, 706, 708, 712, 720, 727
general education classroom, 78, 156, 183, 570, 574, 576, 632, 650-651, 653, 686, 692

Generalization, 4-5, 8-10, 12, 14-18, 34, 59, 69, 93, 117, 123, 127, 153-154, 156, 216, 218, 230-231, 233, 238, 244, 261, 263, 270-271, 370, 396, 408, 410-412, 414, 424, 442-443, 445-446, 448, 484, 507-508, 524, 544, 546, 548, 557, 573, 589-590, 594, 613-614, 619, 622-646, 648-663, 682, 685-692, 694-699, 702, 704-706, 708-709, 714, 717, 722-725
  enhancing, 691, 694
  math instruction, 691
  of behavior, 4-5, 8, 12, 14-18, 34, 59, 93, 117, 123, 127, 153-154, 156, 216, 218, 231, 233, 238, 244, 261, 263, 270-271, 370, 396, 410-411, 442, 446, 484, 507-508, 524, 589, 594, 619, 622-646, 648-663, 685-688, 690-692, 695, 698-699, 702, 704-706, 708-709, 714, 717, 722, 724-725

Generalizations, 656
Generativity, 695
Genes, 52
Genres, 306
Geography, 649
Gifted students, 581
Girls, 308, 335, 494-495, 574, 629, 646, 650, 655, 720
Gloves, 470
Glucose, 240-241, 418, 685, 695, 728
goal setting, 101, 562-563, 601, 608, 619, 690, 710-711, 719

224-225, 257-258, 262, 264, 274, 336, 381,417, 432, 562, 564, 567, 579, 581, 584, 587,589-591, 594, 601, 616, 619, 664, 667, 670,

738

680, 684, 693, 709, 726chart, 224, 601, 616clarifying, 40lesson, 417, 590-591

Goals and objectives, 667
Goal-setting, 562, 707
Gold standard, 126
Golden Rule, 668
Good Behavior Game, 210, 266, 386, 577-580, 688, 712, 719
Government, 23, 31, 42, 264
  relations, 23, 31, 42
Grades, 23, 25-26, 81, 281, 283, 296, 328, 567, 590, 652, 702, 713
Grading, 73, 591, 646, 648, 670
  level, 73, 670
  multiple, 646

Graduated guidance, 230, 419-420, 425, 442Graffiti, 116, 227, 249, 713, 726Grammar, 20, 361, 537, 546, 610, 655, 688, 690, 710,

720Graph, 2, 5, 9-10, 15-16, 31, 93, 141-142, 146-158,

164-171, 174, 176-177, 188-189, 191, 200,242, 247, 267, 277, 331-332, 367, 373,514-515, 530, 591, 601-602, 609, 616, 618,657, 721

Graph paper, 168
Grapheme, 630
Graphic displays, 98, 142, 146-177, 185, 221, 249, 674
Graphics, 152, 169, 724
Graphs, 5, 146-154, 157-159, 164-172, 174-177, 188, 197, 210, 221, 236, 247, 259, 302, 367, 616, 690, 704, 715
Grasping, 50, 53
Greek, 585
Grid, The, 329-330
Grief, 25
Grooming, 574
Group cohesiveness, 577
Group instruction, 236, 267, 339, 477-478, 516, 670, 701
Group size, 217
Group therapy, 376
Grouping, 613
Groups, 11, 43, 164, 181-182, 184, 221, 227, 246-248, 253-254, 256, 263-266, 274, 336-337, 385-386, 411, 564, 566, 574-575, 579-580, 636, 642, 647, 652, 656, 694, 709, 721, 727
Groups:, 246
Growth, 36, 187, 253
Guessing, 235
Guidance, 183, 187, 195, 228-230, 307, 318, 361, 418-420, 425, 429, 431-433, 442-444, 448, 460, 466, 615, 675, 694, 710
guided notes, 180, 261, 718, 727
guided practice, 594, 626, 635
Guidelines, 7, 73, 76, 94, 96, 139, 165-166, 190, 235, 241, 243, 246, 278, 299, 305, 310, 345, 361, 363, 366, 369-370, 372, 375, 387, 431, 433, 435, 455, 460, 463, 467, 476, 482, 484, 490, 494, 501, 559, 563, 566-567, 571, 573-574, 580, 582, 584, 593, 603, 611, 615, 620, 648-649, 667-668, 671-673, 675, 683-684, 686, 688-689, 713, 728
Guidelines for Responsible Conduct, 668, 671-673, 684, 688
Guides, 36, 179, 194, 235, 418-419, 441, 683
Guilford Press, 687, 690, 700
Guilt, 64, 561, 595, 598, 603, 720

H
Habituation, 9, 44, 50, 364, 372
Hand washing, 512, 522-524
Handicapped children, 158, 596, 701, 718, 727
Handling, 72, 80
Handwriting, 103-104, 254, 283, 421, 457, 706, 723
  cursive, 103-104, 457
  manuscript, 104, 457
  problems, 457
Happiness, 125, 547, 698
Hardware, 119, 121, 543
Harlow, 429, 700
Harvard University, 690, 716-717
Head Start, 703
Health, 39, 42, 82, 95, 258, 470, 486, 588, 618, 630-631, 665, 673, 676, 678, 680-681, 684, 700, 703, 713, 717, 727
  exercise, 42, 618
Health and well-being, 673
Health care, 676, 703
Health insurance, 681, 700
Health Insurance Portability and Accountability Act (HIPAA), 681, 700
Health problems, 618
Hearing aids, 440-441, 724
Hearing disorders, 703
Hearing loss, 64
heart, 33, 49, 95, 195, 240, 257, 399, 588, 603, 617, 728
Height, 95, 165, 167, 176, 457, 464
Helping, 41, 80-82, 238, 293, 402, 593, 628, 634, 649, 651-652, 657, 661, 666, 677, 680, 692, 701
helping others, 666
Hemingway, Ernest, 603
Hierarchy, 420, 568, 581, 614, 620, 667
Higher education, 709
Highly qualified, 94
High-probability request sequences, 505, 705
Hippocrates, 368
Hippocratic Oath, 368
Histogram, 2, 152
History, 2, 4, 6, 9, 12-14, 17-19, 23, 27, 42, 44, 51-52, 54, 57-58, 60-61, 64, 66, 75, 90, 104, 140, 183-184, 201, 225, 279, 281-282, 284, 287, 289, 309, 315, 327, 335, 348-349, 363, 371-372, 380, 389, 393-394, 396-397, 399-404, 406-407, 410-412, 417, 424, 428-429, 432, 471, 476, 482, 500, 504, 508-509, 520, 524, 537-538, 540, 546-547, 550, 557, 584, 587, 599-600, 623, 635, 641, 644, 649-650, 666, 674, 695, 704, 715-716, 718, 721, 726

Holes, 330, 570-571, 700
Home, 40-41, 75, 78, 116, 166, 185, 240, 287, 293, 296-297, 313, 315, 320, 350, 364, 381-383, 403, 418, 437, 442-443, 472-473, 479, 489, 492, 502, 508, 531-533, 545-546, 559, 564, 571, 574, 577, 581, 585, 591-592, 594, 596, 614, 627, 629, 631-632, 635, 654, 659-660, 665, 670, 673, 679, 681, 683, 686, 694, 698, 708, 711, 714, 718
Homework, 48, 116, 183, 217, 291-293, 314-315, 330, 337, 463, 499, 562-563, 565-566, 572, 590, 592, 599, 604, 627, 685, 701, 712, 714, 719, 724
Homework assignments, 116, 183, 599, 701, 724
homework completion, 314-315, 562, 714
Homework performance, 562-563, 712
Honesty, 26-27, 413
Hope, 42-43, 94, 182, 189, 633, 643, 654
Horse, 264, 455, 460, 541, 565
Human development, 713
Human relations, 669
Human rights, 385
Human services, 71, 75, 80, 85, 368, 373, 697, 728
Humanistic education, 591
Humiliation, 676
Humor, 549
Hyperactivity, 101, 207, 254, 266, 296, 482, 524, 526, 562, 689, 705, 713, 719
Hypoglycemia, 240
Hypotheses, 23, 70, 235, 337, 470, 511, 514, 516, 519-525, 527-529, 532-534, 707
Hypothesis statements, 522, 525
Hypothesis testing, 235

I
Id, 9, 32
IDEAL, 126, 180, 191, 237, 304, 320, 357, 369, 591, 651, 722
Ideas, 38, 95, 461, 682
identity, 14, 414, 458
ignorance, 189, 248
Illinois, 36
Illiteracy, 64, 537
Illness, 470, 676
Illustration, 104, 172, 198, 353, 386, 422-424, 435, 446, 449-451, 463, 474
Illustrations, 224, 250, 421
Imitation, 8-9, 13, 79, 108-109, 213-215, 418, 426-433, 444, 455, 479, 536, 541, 544, 548, 555, 557, 644, 687, 697, 702, 712, 725, 728
Immediacy, 54, 101, 279, 309, 337, 350, 387, 389, 427-428, 432, 608
Impairment, 64, 472, 492
Implementation, 69, 118, 130, 238-239, 241, 244, 256, 274, 282, 370, 379, 387-388, 448, 466, 485-486, 516, 562, 564, 570, 573, 579, 582, 589, 592-593, 601, 615, 672, 675, 691, 703, 719
  changing criterion designs, 238-239, 241, 244
  reversal design, 238, 244
Importance, 26, 33-34, 37-38, 40, 43, 45-46, 54, 60, 76, 78-79, 82-85, 88-89, 103, 124, 126, 130, 197, 210, 213, 215-216, 235, 238, 246, 249, 258-259, 268, 270-274, 279, 286, 293, 309, 313, 316, 369, 373, 375, 396, 403, 435, 464, 490, 507, 608, 615, 625, 631, 633, 636, 640, 646, 649, 659-660, 683, 702
Improvisation, 709
Impulse control, 16
Inappropriate verbalizations, 570, 711
Incentives, 703
Incidental teaching, 448, 552, 680, 691, 700, 710
Inclusion, 33, 217, 242, 269, 274, 559, 575
Inclusive classrooms, 701
Income, 81, 571, 665
Independent functioning, 72, 82-84, 90
Independent living, 436
independent practice, 81, 594
independent variable, 1-2, 5, 7-11, 13, 15-16, 18-20, 24, 26, 42, 146, 149-151, 153, 163, 167, 169, 174, 176-177, 178, 180-195, 197-198, 200-201, 206, 211, 218-219, 221-225, 227, 229, 232-238, 240, 242-244, 246-257, 264-265, 267, 269-274, 277, 280, 553, 587, 624, 727

Independent work, 526-527, 643, 719
Indexes, 117
Indiana, 30, 36
Indications, 142, 459
Indiscriminable contingencies, 10, 634, 644-648, 662, 697
Individual differences, 63-64, 66
Individualized Education Program, 570, 623, 642, 665
Individualized education program (IEP), 570, 665
  target behaviors, 570
  team members, 665
Induction, 624
Inductive reasoning, 195
Infants, 29, 287, 289, 429, 439, 687, 715
  crying, 289
  environment, 289, 439
  hearing, 287
  stimulation, 287, 289, 715
Inferential statistics, 235
Inflection, 462
Inflections, 104
Influence, 4, 10, 12, 23, 32, 38, 40, 42-43, 45-46, 48, 52, 63, 75, 93, 114, 119, 133-134, 144, 153, 180, 182, 186, 189, 193, 199, 215, 236-237, 252-253, 257, 273, 279-280, 283, 286, 315-316, 320, 327, 331-332, 341-342, 345, 350, 391, 417, 476, 480, 494, 500, 502, 575, 585, 588-589, 600, 613, 619, 624, 626, 651, 661, 664, 666, 677, 680, 683, 703, 705, 707-708, 711-712, 717-718
Information, 3-4, 8-9, 32, 45, 70-73, 75, 82-83, 85, 89-90, 96-97, 100, 102, 112, 125, 127, 130, 133, 137, 147, 150, 160, 163-165, 173, 177, 179, 181, 195, 200, 205-206, 212, 238, 240, 246, 250, 259, 265-266, 268, 282-283, 296, 329, 369, 381, 401, 407, 441, 511-512, 514, 516, 518-522, 524-527, 529-534, 537, 540, 543, 551-552, 555, 571, 592-593, 604-606, 615, 627, 633, 659, 671, 673, 675-679, 682, 684, 723-724
  policies, 369, 381, 571, 677-678
Information processing, 32
Informed consent, 10, 370, 664-665, 673, 675-678, 684, 703
Inner speech, 593
Innovation, 33
Inquiry, 3, 23-24, 39-40, 89, 264, 371, 466, 665, 673

  naturalistic, 40
In-service training, 206
Instruction, 2, 10-12, 16, 18, 34, 63, 69, 73, 75, 78, 80, 87, 102, 108-109, 117, 127, 148, 187, 190, 195, 211, 217-218, 228, 230-232, 236-237, 243, 267, 269, 283, 307-308, 318, 321, 323, 327, 336, 339-340, 342, 345, 378, 396, 407, 414-415, 417-420, 422, 425, 429, 441-442, 447-448, 469, 477-478, 487, 492-493, 502, 504, 506, 513, 516-518, 531, 541, 568, 577, 583, 587, 591, 593, 596, 599-600, 608, 612-613, 620, 623, 625-627, 629, 631-632, 634-642, 644, 648, 651-653, 657-663, 667, 670, 685-686, 688-691, 695, 697-698, 701-703, 705-706, 708, 711, 713-716, 718-722, 725
  accountable, 689
  adequate, 10, 318, 378, 541, 667
  data-based, 447, 701
  delivery of, 308, 327, 487, 504
  differentiation of, 87
  in preschool classrooms, 686
  indirect, 78, 80, 283, 307, 642, 706
  individualized, 623, 642, 648, 670, 708
  learning strategies, 593
  sequencing, 429, 627, 636
  sheltered, 531
  strategy, 2, 16, 18, 69, 187, 190, 195, 232, 323, 447, 613, 635-636, 640, 648, 652, 662-663, 688, 690, 697-698, 706, 718
  targeted, 18, 78, 117, 127, 187, 195, 236, 307, 442, 516, 568, 577, 596, 620, 632, 660, 662-663
  unit, 18, 102, 415, 442, 698
Instructional activities, 642
Instructional design, 636
Instructional environment, 625, 640
Instructional examples, 630, 635, 648
Instructional games, 81
Instructional objectives, 670
Instructional procedures, 211, 320, 413, 670
Instructional programs, 78, 642
Instructional strategies, 659, 682
Instructive feedback, 690, 702
Instrumentation, 24
Insulin, 240, 695
Insurance, 629, 678, 681, 700
Integration, 79, 272, 723
Integrity, 14, 18, 117, 121, 206, 245, 255-257, 267, 272-274, 455, 669, 672, 698, 715, 719
Intellectual disabilities, 654, 711, 718
  moderate, 711
  severe, 654

intelligence, 7, 73, 466, 537, 550, 716
Intelligence tests, 73
Intensity, 10, 12, 48, 56-58, 72, 83, 103-104, 107, 120, 306, 338, 347, 349-351, 355, 364, 372, 419-420, 456, 471, 520, 532, 699, 727
Intensity/magnitude, 350
Interaction, 2, 13, 40, 45-47, 53, 61, 65, 82, 110, 141, 179, 181, 203, 206, 227, 284, 313, 322, 376-377, 402, 407, 511, 517-518, 537-540, 550, 553, 556, 562, 642, 650, 666, 683-684, 692, 697, 706, 716, 723
Interactions, 4, 12, 36, 60, 70, 75-76, 78, 90, 100, 125, 132, 141, 166, 227, 254, 289, 295, 334, 338, 356, 380, 401-402, 407, 427, 455, 466, 483, 511-513, 539-540, 551, 561, 568, 576-577, 580, 584, 586, 625, 636-637, 673, 681-682, 684, 689-690, 697-698, 703, 705-706, 714-715, 719
Interference, 12, 196, 216, 218, 673, 711
intermittent schedules of reinforcement, 4, 10, 325-326, 334, 336, 341, 343, 366, 475, 480, 644-645, 662
Internal validity, 7, 10, 15, 178, 180, 194, 215, 250-252, 257, 267, 273-274
Internet, 95, 598
  conduct, 95
Interobserver agreement (IOA), 3, 10, 122, 133, 136, 143, 268
Interobserver reliability, 700
Interpersonal relations, 40
Interpretation, 3, 12, 26, 29, 70, 96, 101-102, 129, 131, 134, 140, 150, 165, 174, 177, 247, 267, 269, 272, 274, 393, 395, 406, 519-520, 696, 705, 715
Interpreting, 20, 75, 90, 98, 135, 146-177, 202, 246, 248, 269, 501, 514, 522, 525, 527, 529, 532-533

Interrater reliability, 714, 720
Interspersed requests, 9, 505, 509, 702
Interval recording, 10-11, 13, 18, 20, 92, 98, 110-115, 120, 130, 135, 600, 700
Intervention, 1, 3-5, 8-9, 11-12, 15-16, 18, 29, 38-39, 65, 68-71, 74-77, 79-81, 83-84, 86, 88-90, 94-95, 99, 104, 117, 120, 127-128, 147-148, 150, 158, 161, 169-170, 179, 182, 185-187, 190, 195, 197-198, 200, 205-208, 210-213, 215-218, 224, 229, 232-234, 236-238, 240-241, 245-246, 248, 254-263, 269, 274, 278, 287, 291, 296, 303, 308, 316, 318, 334-336, 340, 346-347, 355-356, 358-359, 365-370, 372-373, 406, 412, 427, 434, 443, 453, 469-471, 474, 477-479, 483-486, 488-492, 494, 496-497, 498-509, 511-513, 518, 521-524, 526-529, 531, 533-534, 550-552, 554, 560-562, 566, 570, 572, 574, 577-578, 580-581, 584, 587, 592, 595, 598, 601-602, 604-605, 609, 611-617, 619, 623-626, 629-633, 636, 641, 648, 650, 652, 655-661, 663, 664-668, 674, 677, 681-684, 685-689, 691, 693, 695, 697, 701, 703-704, 706-711, 713, 720, 722-723, 726
Intervention:, 206, 366, 503, 522, 524
  amount of, 503
  discrete trial training, 562
  facilitated communication, 94, 682
Intervention programs, 39, 69, 550, 554
Interventions, 22-24, 28-29, 37-40, 43, 58, 69-71, 73, 94, 100, 120, 123, 148, 151, 164, 169, 178, 196, 202, 206, 208, 210, 216-217, 219, 220, 233, 245-246, 250, 256, 258, 262, 267, 273-274, 282, 288, 295, 299, 310, 316, 318, 329, 333, 337, 342, 345, 347, 353-354, 356, 361, 366-373, 398, 417, 428, 448, 473, 482-484, 486, 498-509, 511-513, 523-524, 526, 533, 550, 559, 586, 592, 609-610, 615-617, 623-624, 629-630, 641-642, 644, 656-657, 661, 663, 666, 669, 671-674, 680-682, 685-686, 691, 693, 701, 706, 708, 710-711, 713-714, 717, 719, 721-728
  antecedent-based, 498
  behavior reduction, 367
  consequence-based, 372
Interviewing, 71, 641, 708
Interviews, 9, 69-70, 72-73, 89, 380, 519-522, 524-525, 527, 531, 533
  initial, 69, 73, 89, 380, 520, 522
Intonation, 537
Intraverbals, 16, 288, 536, 541-542, 548-549, 555, 557, 723
intrinsic motivation, 325
Introduction, 150, 169, 188, 190, 192-194, 198, 213, 218, 221, 226, 236-237, 242, 265, 271, 305, 326, 336, 338, 353, 362, 419, 428, 493, 594, 701, 709-711, 714-715, 717, 727
introspection, 27, 29
Investigator, 180, 182, 184-186, 188-190, 192-193, 204-205, 209, 215, 217, 224, 235, 242, 252-255, 264, 272, 505
Invitations, 683
Iowa Tests of Basic Skills, 73, 702
IPods, 654
Iron, 612
IRT, 6, 10, 16, 18, 92, 100-101, 103, 106, 110, 118-120, 136-137, 144, 334-335, 491, 493-497
  caution, 101
  uses, 6, 16, 136
Issues, 28-29, 35, 40-41, 43, 69, 119, 185, 207, 215, 260, 267, 270, 312, 345, 362, 367, 370, 381, 471, 571, 581, 607, 615, 665-666, 668-669, 675, 678, 684, 691-692, 698, 700, 708-710, 713, 721, 723, 725-726
  controversial, 345, 362
Italics, 46, 57, 60
Items, 5, 18, 72-73, 101-102, 104, 136, 154, 156, 213-214, 226, 257, 259, 290-291, 296-297, 299-300, 302-303, 336-337, 353, 418, 449, 487, 500, 502, 520-521, 525-526, 552, 562, 564, 566, 568-574, 581, 585, 590, 595, 597, 602, 605-606, 611, 637, 641, 651, 657
  specifications, 590

J
Job satisfaction, 81
Joint, 52, 64, 647
Journal articles, 499
Journal of Applied Behavior Analysis (JABA), 36, 43, 674
Journal writing, 561
Journals, 27-29, 94, 168-169, 272
Judges, 86, 262
Judging, 36, 76, 82, 108, 142, 255, 369
Judgment, 37, 75, 88, 94, 120, 128, 131, 168, 271, 333, 463, 479
judgments, 76, 140, 148, 170, 176, 258, 709-710
Judicial decisions, 690
Junior high school, 184
Justice, 413, 669, 712
juvenile delinquency, 649

K
Kansas, 36, 160, 162, 699, 715, 719
Kelley, 100, 155, 186, 201, 207, 250, 290, 359-360, 476, 483-484, 503, 562-563, 685, 693, 696, 705, 707, 711, 713, 725
Key terms, 22, 44, 68, 92, 122, 146, 178, 196, 220, 245, 276, 311, 324, 344, 374, 390, 408, 426, 434, 454, 468, 481, 498, 510, 536, 558, 583, 622, 664
Keyboard, 98, 682
Keyboarding, 88
Keystrokes, 128
Kicking, 460, 474, 479, 485-486, 489, 531
Kindergarten, 99, 291, 462, 578, 604
Kindergarten children, 604
Kindergarten students, 291, 578
Knots, 78
Knowledge, 10, 13, 23, 25-26, 29, 32-33, 38, 40, 42, 45, 47, 58-59, 65, 70, 83, 85, 93, 110, 120, 124-125, 127, 130, 143, 179-180, 190, 194, 256, 264-265, 267, 272, 274, 283, 340, 342, 369, 371, 373, 398, 401, 403, 466, 537, 548, 561, 575, 585, 587-588, 595, 632, 639, 641, 649, 658-659, 661, 666, 670-672, 674-678, 684, 689, 702, 707, 712
  domains, 40, 712
  of subject, 127, 264
  prior, 58-59, 65, 124, 180, 190, 403, 632, 659, 666, 675
  prior knowledge, 180
  topic, 585, 632
  vertical, 10
Knowledge base, 23, 26, 33, 340, 342, 371, 661
Kodak, 207, 265, 502, 705

L
Labels, 81, 93, 120, 149-150, 167-169, 176, 505, 509, 586, 595, 670, 680
  situation, 168
Laboratory research, 30, 185, 277, 290, 314, 371, 465, 492, 723
Language, 8, 13, 16, 20, 32, 34, 36, 38, 40, 42, 47, 56, 69, 78-79, 83, 135, 213-215, 235, 248, 250, 278-280, 282-283, 287, 296, 325, 362, 381, 402, 404-405, 407, 409, 414, 430, 455, 471, 498, 521, 536-543, 545-546, 550-557, 561, 566, 577, 589, 608, 617, 623, 626, 642, 649-650, 656, 674, 677, 682, 688-691, 693-694, 699-700, 703, 705, 708-709, 714-716, 719, 722-725, 728
  acquisition, 32, 42, 78, 213, 215, 287, 498, 536-537, 540-541, 546, 550, 552-554, 626, 688-691, 703, 705, 708, 723-725, 728
  animal, 32, 402, 404, 555, 725
  body, 47, 250, 287, 430, 555, 557, 677
  clear, 250, 402, 521, 538-539, 550, 623, 674, 677
  delay, 16, 278-280, 283, 296, 608, 691, 709, 725
  difference, 280, 542, 617, 656
  empowering, 38
  informative, 38
  play with, 79, 278, 405, 649
  playing with, 649
  production, 79, 279, 455, 542, 642
  receptive, 405, 414, 471, 539, 542, 545, 550-551, 555-556
  written, 8, 16, 83, 414, 536-537, 539, 541-542, 545, 551, 556, 561, 714, 728
Language acquisition, 32, 42, 287, 498, 536-537, 546, 550, 723
Language arts, 414, 561, 577, 608, 649-650, 656
Language development, 78, 689, 699, 708, 714, 722
  procedure, 689, 699, 708
  use, 78, 689, 699, 708, 714, 722
Language instruction, 541
Language learning, 79
Language skills, 38, 40, 47, 78, 83, 296, 521, 551, 556
Language systems, 409
Languages, 409, 691
Large group, 183, 589
Latency, 5, 10, 13, 15, 17-18, 54, 92, 96, 100-103, 109-110, 119-120, 136-137, 144, 156, 243, 278, 282, 392, 405, 409, 424, 440-441, 455-457, 466, 552, 605, 643-644, 662, 727
Laundry skills, 229, 231, 724
Law, 11, 26, 29, 31, 83, 94, 181, 291, 324, 338-339, 667, 679, 690, 701, 725, 728

  case, 701, 728
Law of effect, 701
Leaders, 42
Leads, 52, 63, 71, 79, 86, 133, 148, 240, 280, 283, 312-314, 322, 369, 588, 606, 609, 666, 680
Learned behavior, 30, 396, 427, 431, 538, 550, 589, 625, 639, 644, 662
Learners, 74, 215, 258, 267, 295-296, 306-307, 362, 402, 407, 417-419, 427, 429-432, 443, 455, 462, 492, 494, 496, 550-554, 570-575, 591, 604, 617, 631, 634, 639, 645, 648, 654, 661-662, 680, 691, 708, 714
  active, 362, 431, 591
  auditory, 604
Learning, 4, 9, 12, 18-19, 28, 30-32, 34, 39, 48-50, 58, 63, 66, 73, 78-80, 82-83, 88, 90, 94, 103, 110, 127, 160, 162, 180-181, 183, 211-212, 243-244, 249-250, 261, 263-264, 267, 277, 279-281, 289, 307-309, 312-315, 317-318, 320, 322-323, 325-326, 334, 343, 345, 361, 370-371, 373, 394, 397, 399-400, 403, 406-407, 414, 419, 429-431, 441-442, 444, 446, 450, 460, 493-494, 504, 537, 539, 541, 547, 556, 568-570, 584, 589-590, 592-594, 603, 605, 608, 619-620, 630-631, 633, 635-636, 641, 647, 649-654, 656, 661, 667, 670, 678, 685, 688-689, 691, 693-694, 698-702, 705-706, 708-709, 711-713, 715-716, 720, 722-724, 726-728
Learning:, 685, 702, 727
  areas of, 702
  connected, 110
  discovery, 31, 181, 361
  distance, 691
  enjoyable, 589
  events, 9, 18, 30-32, 48, 73, 78, 90, 94, 110, 180-181, 280-281, 312, 315, 320, 322-323, 325, 397, 399-400, 406, 539, 547, 584, 589, 594, 603, 619-620, 630, 641, 691, 709, 712
  incidental, 32, 691, 700
  mastery, 103, 243, 441, 670
  observable, 31, 547, 589
  readiness, 32
  real-world, 263, 371
  scenarios, 584
  to learn, 34, 49, 73, 82-83, 103, 394, 429, 446, 450, 541, 593, 633, 635, 649, 667, 711
Learning activities, 649
Learning disabilities, 110, 183, 211-212, 261, 263, 460, 493, 537, 568-569, 590, 603, 605, 650, 653, 656, 667, 685, 688, 691, 693-694, 701, 708, 711, 720, 723-724, 727-728
Learning disability, 280, 608, 689, 699, 701, 706
Learning environment, 504
Learning environments, 82
Learning experiences, 9
Learning goals, 83
Learning groups, 647, 652, 656, 709
Learning opportunities, 79
Learning problems, 631, 641, 702
Learning process, 446
Learning rates, 243-244
Learning Strategies, 592-593
Least restrictive alternative, 368, 373, 387, 680
Lecture, 180, 261, 475, 599
Legal context, 676
  and, 676
Legal issues, 571, 666, 675
Legibility, 165
Legitimacy, 94, 120, 123
Leisure, 36, 81, 83, 99, 227, 250-251, 291, 298, 353, 357, 385, 418, 487, 492, 499-500, 502, 521, 524, 526-527, 571, 575, 633, 654, 693, 719, 728

Lesson plans, 591
  free, 591
Lessons, 114, 118, 529, 641, 697, 706
Let Me Hear Your Voice, 710
Letter formation, 457, 724
Letter writing, 598
Letters, 47, 76, 82, 104, 108, 197, 201, 305, 413-414, 421, 502, 542, 598, 623, 649, 687
Level, 1-2, 4-7, 10, 12, 15, 17, 19-20, 23-26, 34, 37-39, 42, 48, 57, 59, 64, 66, 70, 72-73, 75, 78, 81-83, 93-94, 97, 101-104, 111, 120, 125, 128, 130, 133-134, 140, 142-144, 146-150, 152-153, 166-171, 174-175, 177, 179-181, 185, 187-195, 198, 200, 205-206, 208-210, 212-213, 215-217, 221-225, 227, 229-231, 233-235, 237, 239-244, 246-249, 251, 253, 255-259, 261-263, 270-274, 278, 281, 291, 293-295, 304-305, 309-310, 334, 337, 346, 348, 364, 366, 368-369, 376, 381-382, 385, 387, 392-393, 397, 399, 405-406, 414, 416-418, 420, 430, 432, 437-438, 440, 443, 452, 455-460, 462-467, 471, 473-476, 491, 496-497, 514, 529, 531, 549, 558, 562, 568-570, 572-575, 580-581, 592-593, 599-600, 602, 604, 611, 633, 636, 650, 656-657, 659, 663, 670-671, 676-677, 689, 719-720, 722, 727-728
Lexicon, 537
Library, 296-297, 330-331, 507, 571, 584, 587
Licensing, 73, 668
Licensure, 668, 671
  certification and, 671
Lighting, 7, 48, 75, 185
Likert scale, 3, 72, 520
Limitations, 41, 127, 136, 142, 168, 202, 208, 234, 238, 243-244, 264, 272, 289, 299, 304, 345, 361, 371, 418, 459, 465-466, 514, 516, 518-521, 534, 672
Limits, 53, 56, 89, 102, 236, 256, 335, 350, 439, 497, 588, 643, 664, 678, 683, 696, 700
Lincoln, Abraham, 126
Line graphs, 148-149, 164, 176
Line of best fit, 163
Lines, 10, 98, 149-150, 161-164, 167-168, 171-174, 176-177, 193, 231, 241, 292, 296, 377, 419, 421, 505, 564, 602, 613, 641, 658, 681, 687, 727
Linguistics, 537-539, 556
Liquids, 290, 431, 445
Listening, 109, 291, 382, 417, 459, 525, 565, 618, 715
  discriminative, 417
Literacy, 550, 703, 712
Literature, 24, 39, 58-59, 62, 82, 96, 100, 110, 114, 135, 156, 164-165, 170, 179, 183, 190, 197, 199, 207-208, 221, 224, 239-240, 266, 271-272, 287, 295, 341-342, 346-348, 366, 369, 413, 427-429, 447-448, 463, 465, 470, 474, 480, 559, 568, 586, 615, 619, 624, 630, 666, 674, 681-682, 695, 709, 713
  graphic, 156, 164-165, 170, 208, 221, 239, 674
Literature review, 272
Litigation, 381
Living environment, 369
Local Education Agency, 571
Locker room, 224
Locus of control, 584
logical reasoning, 190
Logical thinking, 591
logos, 653, 657
Longitudinal study, 562
Long-term goals, 83, 667
Loss, 15, 18, 64, 81, 95, 128, 170, 240, 270-271, 279, 333, 343, 345, 347, 349-350, 375-377, 382-383, 386-389, 513, 572, 584, 589, 608, 610, 617-618, 665, 676, 683
  feelings of, 589
Lotteries, 330
Lottery, The, 330
Love, 65, 395, 725
Lunchtime, 284, 513
Lynch, D., 708

M
machines, 128, 327, 636-639, 653, 657
Magazines, 571
Magnitude, 5, 10, 49, 54, 63, 92, 94, 103-104, 107, 118, 120, 132, 151, 171, 178-179, 193, 241-242, 244, 278, 293, 306, 310, 316, 322, 337-338, 348-351, 364, 369, 372, 386-387, 389, 392, 398, 405, 455-458, 463, 466, 473, 476, 480, 484, 643-644, 662, 695, 702, 707
maintenance, 3, 5, 8-11, 15, 20, 34, 58, 69, 78, 80, 82, 93, 123, 127, 129, 154, 156, 183, 197, 205, 261, 263, 270-271, 286-287, 314, 317, 320, 325, 362, 370, 448, 465, 484-485, 505, 507-508, 524, 546, 568-569, 573, 589-590, 613-614, 619, 622-663, 670, 680, 682, 685, 687-688, 695, 697, 702, 705-706, 708-709, 711, 717, 719-720, 724-725
  of learning, 34, 317, 325, 619, 636, 708, 720, 724
Malapropism, 16
Malpractice, 95
Management, 16, 28, 34, 42, 75, 83, 200, 246, 293, 314, 378, 569, 573-575, 580-581, 583-620, 630, 634, 652, 654, 656-657, 659, 663, 673, 678, 686-687, 689-693, 696, 702-710, 712-714, 716, 718, 726
Mand, 9, 11, 213-215, 307, 405, 407, 448, 536, 539-545, 548-557, 689-690, 694, 711, 720, 723
Mandates, 575
Manual dexterity, 648
Manual prompts, 228
Marketing, 258
Marking, 108, 112, 134, 330, 560, 570, 586, 596
marriage, 69, 179
Mastery, 103, 125, 128, 243, 435, 437-438, 440-441, 452, 572, 670, 682
Matching, 11, 14, 17, 152, 154, 259, 324, 334, 338-339, 408, 414-417, 425, 431-432, 487, 555-556, 607, 620, 699, 702, 710, 713, 720, 723, 728
Materials, 80, 83, 87, 108, 110, 131, 199, 203, 216, 227, 257-259, 262, 277, 291-292, 297, 353, 355, 357, 416-417, 420-421, 439-441, 483, 485-487, 502, 511, 514, 528-529, 533, 571, 591, 593-595, 603, 619-620, 625, 630, 640, 642, 654, 670, 697, 710, 726

  adaptation of, 83
Math calculation, 97
Math instruction, 691
Math skills, 76, 125
Mathematics, 24, 73, 97, 414, 463, 646, 701, 710
Mathematics performance, 646
matter, 7, 20, 24-27, 29, 31, 33, 35, 37, 45-46, 52, 56, 62, 83, 94, 104, 123-124, 126-127, 179-180, 185, 202, 221, 224, 246, 263-264, 277, 283, 286, 332, 539, 547, 572, 631, 646, 649, 661, 682, 697, 715, 721-722
Maturation, 202, 216, 242, 252-253
Mealtime skills, 82
Mean, 2, 7, 11, 19-20, 47, 55, 80-81, 83, 85, 97, 99-101, 104, 115, 122, 124, 135-137, 140-141, 144, 154-156, 170-171, 174-175, 177, 187, 195, 201, 210, 213, 224, 226, 229, 235-236, 242, 246-248, 259, 261, 265, 278, 286, 288, 293-294, 327-328, 332, 346, 350-351, 362, 377, 382, 386-387, 391, 418, 447, 464, 487, 491, 494-495, 530, 537, 539, 555, 561, 567, 577, 580, 601-602, 639, 650, 660, 666, 683, 687
Meaning, 7, 19, 25, 34, 37, 52, 62, 69, 88, 93, 95, 142, 148, 169, 176, 185, 203, 221, 231, 282, 286, 309, 326, 328, 345, 412, 429, 470-471, 479, 502, 504, 537, 539-540, 542-543, 547, 549, 624, 650, 677, 708
  of words, 537, 539
  shades of, 412
Meaningfulness, 76, 85, 264, 267
Meanings, 7, 10, 16, 62, 96, 135, 288, 539, 624
Measurement, 1, 3, 5-7, 9-14, 18-19, 24-25, 31, 36-39, 41, 43, 45-46, 68-69, 75, 86, 92-97, 99-102, 104, 108, 110, 112-121, 122-144, 147, 149-150, 158, 165-166, 168-170, 176-177, 179, 181, 185, 187, 189-191, 193-195, 209, 216, 218, 221, 223-224, 229, 231-240, 243-244, 246-247, 249, 252-254, 257, 262, 267-268, 271-273, 432, 475, 488, 491, 567, 599, 605, 662, 670, 691-692, 716, 727-728
  attribute, 36, 45
  in algebra, 567
  interobserver agreement (IOA), 3, 10, 122, 133, 136, 143, 268
  of time, 3, 5-7, 9-10, 13-14, 18-19, 75, 92, 94-97, 99-101, 104, 110, 112, 115-120, 128, 135, 144, 149-150, 165-166, 168, 170, 176-177, 187, 190, 216, 237, 244, 247, 267, 475, 488, 491, 605, 716
  parameters of, 104, 127
  proficiency, 19, 97, 101
  social validity, 38, 68, 257, 262, 267, 727-728
  terms, 6, 14, 18-19, 37-38, 45-46, 68, 75, 92, 94, 96, 101-102, 104, 110, 115, 120, 122, 135, 150, 166, 168, 170, 177, 185, 271, 567
  variables, 3, 7, 9-10, 14, 19, 24, 36, 38, 41, 43, 45, 69, 75, 93, 104, 120, 128, 133, 140, 147, 150, 158, 165, 169-170, 176, 179, 181, 185, 187, 189, 191, 193-195, 216, 218, 223, 234-238, 240, 244, 246-247, 249, 252-254, 257, 267, 271-273, 432, 475, 692

Measurement error, 11-12, 124-125, 128-132, 143, 716
  random, 11-12, 124
  source, 11, 131-132
Measurement scales, 126, 128, 143
Measurements, 9, 15, 18, 93-94, 99, 110, 119, 123-124, 128, 132-134, 142, 147, 150, 155, 165-166, 169-170, 176, 185, 190, 243, 253, 491, 682, 722
  cost, 15, 119, 142
Measures, 2-3, 5, 10, 12-13, 15, 17, 19, 25, 36, 70, 73, 81, 93-104, 109-111, 113-115, 117-118, 120, 123-135, 137, 139, 141-144, 147, 150-151, 153-154, 162, 167-168, 170-171, 174-177, 181, 183-185, 187-195, 197-198, 200-201, 209, 212-213, 216, 221-223, 226-227, 229, 231, 236, 240, 243, 246-247, 254, 257, 261, 264, 267, 298, 359, 364, 458, 462, 482, 516, 553, 573, 605, 617, 627, 659, 670, 693, 704, 716, 726
Measures of central tendency, 174
Media, 101, 571, 673, 715
  examination of, 715
  magazines, 571
Median, 100-101, 154, 163, 170-171, 174, 177, 590, 646
Mediating variables, 30
Mediation, 2, 287-288, 309, 366, 373, 419, 501, 509, 538, 556, 632, 677, 702
Mediators, 549, 612, 620
Medical history, 471
Medications, 254, 520
Meetings, 616, 677
  telephone, 616
Memory, 32, 102, 418, 550, 687, 715, 728
Mental health, 676
Mental retardation, 34, 99, 108, 110, 149, 152-153, 156-157, 200, 212, 225, 231, 266, 301-302, 308, 320, 336, 351-352, 357, 359-360, 362, 411, 414, 418, 429, 438, 442-443, 445, 449, 474, 478, 483, 486, 492, 494-495, 502, 531, 574, 590, 601-602, 613, 635-636, 639, 642, 644, 650, 657, 676, 687, 689-693, 696-699, 701-708, 710-714, 718-720, 722, 727-728
Mentors, 674
Messages, 62, 94, 629, 682
Metaphors, 462, 537
Michigan, 36, 676, 693, 697, 699-700, 705, 709, 711, 718, 720
Microsoft Excel, 169, 690, 721
Middle school, 104, 183, 561, 656, 685, 714, 721, 728
Milieu teaching procedures, 726
Milk, 2, 24, 34, 277-278, 297, 330, 361, 421
Minnesota, 690
Misbehavior, 15, 18, 57, 346, 353, 356, 360-361, 372, 377-378, 381-382, 385, 388, 493, 496, 499, 579, 693
Misconceptions, 725
Missouri, 594
Mobility, 667
Mode, 8, 127, 427, 432, 541-542, 556
Modeling, 206, 218, 230, 233, 256, 284, 307, 355-356, 372, 418, 420, 425, 442-444, 448, 462, 572-573, 587, 639, 650, 652, 713, 727

  of self-control, 587, 713
Model-lead-test, 579
Models, 32, 164, 168, 224, 235, 264, 388, 418, 427-433, 441, 443, 469, 628, 639, 650, 685
Modification, 28, 32, 82, 94, 120, 203, 215, 242, 383, 439, 445-446, 453, 640, 685, 687-692, 694, 696-702, 704-710, 712-714, 716, 718, 720-727
Momentary time sampling, 11, 13, 18, 92, 110, 113-115, 120-121, 516, 599, 603, 719, 724
Momentum, 3, 9, 498, 505, 509, 709, 713
Money, 60, 72, 270, 290, 307-309, 315, 325, 330, 339, 342, 376, 382, 386, 397, 439, 568, 570, 585, 605, 618, 623, 626, 628
Monitoring, 16, 69, 71, 75, 89, 94, 130, 238, 241, 244, 245, 351, 361, 389, 463, 493, 504, 524, 526, 528, 562, 575-576, 579, 581, 583, 590, 592-593, 598-610, 612, 615, 618-620, 650, 656-658, 680, 683, 685, 689, 692-693, 699-700, 705, 707-708, 710, 713, 719, 724, 727-728
  progress, 16, 69, 89, 463, 562, 581, 603, 605, 680, 705
Monitors, 580
Morningside model, 703
Morpheme, 697
Morphemes, 537
  grammatical, 537
Most-to-least prompts, 420, 425
motion, 48, 51, 54, 88, 286, 449, 457, 463, 529-530
MotivAider, 112, 528, 605, 696
Motivation, 9, 48, 79, 129, 278, 281, 309, 325, 357, 391, 400, 448, 476, 501, 504, 506, 509, 520-521, 523, 568, 574, 595, 614, 682, 692, 695, 712, 714, 720, 722-723, 728
  achievement, 568, 714
  intrinsic, 325
  states, 48, 309, 400, 520, 568
Motivation Assessment Scale, 520, 692, 695, 714, 720, 722, 728

Motivation to learn, 682
Motor skills, 704, 721
Mouse, 14, 648
Movement, 9, 45-47, 50-51, 54, 56, 86-87, 118, 286-287, 298, 353, 359, 364, 418-420, 427, 430-431, 458, 461, 463, 465, 472, 540, 546, 548, 552-554, 557, 634, 680
Movement cues, 419
Movies, 507, 561, 565, 603
  showing, 603
Multiple baseline design, 5, 11-12, 16, 215, 217, 220-224, 227, 229, 232-240, 242-244, 249, 601, 701-702, 726
Multiple disabilities, 125, 295, 625, 697-698, 705, 708, 717
Multiple probe design, 12, 190, 220, 229-232, 235, 243
Multiple-baseline designs, 185
Multiplication, 63, 78, 97, 170, 231, 463, 632, 635, 702
Multiplication and division, 97
  sequences, 97
muscle relaxation, 17, 614, 620
muscles, 46, 49, 53, 472, 613
Music, 56, 80, 104, 116, 201-202, 284, 291, 295-296, 325, 353, 376, 382-383, 396, 428, 431, 478, 589, 640, 642, 654, 692
  listening to, 291, 382
  rock, 428
Music teachers, 478, 640
Myths, 686, 725

NNail biting, 287, 342, 448, 686National Association of School Psychologists, 671,

701, 713National Association of Social Workers, 671, 713National Institute of Child Health and Human

Development, 713National Reading Panel, 78, 713Native language, 287Natural consequences, 670, 680Natural environment, 1, 9, 14, 70, 73, 76-77, 79, 84,

86-87, 90, 129, 153, 259, 263, 274, 337,366, 484, 491, 514, 516, 519, 522, 533, 547,552, 575, 636, 639-640, 657, 680, 716, 723

Natural environments, 308, 340, 342, 366, 514, 666,680

Naturalistic observations, 297Nature, 1, 19, 23, 25, 33, 38-39, 47, 56, 60-61, 64, 93,

95, 102, 120, 123-124, 128, 133-134, 142,144, 164, 169-170, 179, 181-182, 185-187,206, 216-218, 221, 234-235, 238, 243, 248,269-270, 278, 286, 289, 295, 309, 313, 320,345-346, 371-372, 399, 402-403, 410, 456,460, 467, 519, 548, 550-551, 566, 625, 645,649, 661, 669, 676, 678, 688, 695, 698, 703,720

Nausea, 554needs, 38, 71, 74, 76, 78, 82, 89-90, 97, 99, 113, 174,

183, 188, 325, 351, 368, 386, 436-437, 446,453, 478, 494, 497, 524, 540, 551, 555, 571,588, 594, 609, 625, 633, 635, 639, 643, 646,649, 666, 670, 685, 714

Needs assessment, 71, 82, 90, 685Negative reinforcement, 6, 13-14, 44, 56-58, 62, 66,

69, 257, 260, 262, 278, 282, 311-323,347-348, 355-356, 368, 371-372, 383, 410,448, 469-472, 477, 479, 483-484, 501-502,504, 509, 511-512, 515-516, 520, 561, 581,603, 608-609, 620, 667, 688, 691, 702-703,705, 707, 709, 711, 720, 722, 725

Nervous system, 708

Net, 186, 267, 385, 459Neuroscience, 727New Math, 81New York, 439, 680, 686-702, 704-708, 710, 712,

714-716, 718-721, 723-727News, 24NICHD, 713Nicotine, 588Nintendo, 525No Child Left Behind, 94, 725No Child Left Behind Act, 94No Child Left Behind Act of 2001, 94Noise, 19, 75, 104, 152-153, 294, 312, 314-316, 364,

376, 404, 461, 479, 492, 500, 530, 572, 581,640, 686, 702, 707, 710

Nominal scales, 691
Nonmaleficence, 669
Nonverbal behavior, 258, 539, 541, 543
Nonverbal behaviors, 63, 427, 543
Norm, 69, 73, 368, 670
Norming, 73
    population, 73
Norms, 79, 368, 463
Notes, 74, 77, 101, 141, 180, 221, 261, 297, 428, 574, 577, 587, 599, 623, 681, 709, 718, 720, 727
    anecdotal, 74

Notetaking, 261, 727
Notification, 388
Nouns, 32, 537, 540
Numbered Heads Together, 647
Numbers, 34, 72, 93, 130, 147, 156, 168, 184, 241, 248, 309, 329-330, 333, 377, 383-384, 464, 603, 627, 634
Nurse, 34, 43, 95, 568, 686
Nutrition, 52

O
Obesity, 412, 588, 711
Object, 1, 7, 11, 14, 18-19, 33, 46, 50, 86, 93, 108, 149, 206, 226, 277, 287, 291, 299, 355, 391-392, 397, 405-407, 412, 427, 430, 461, 472, 502, 542, 553-554, 568, 570, 585, 595-596, 665, 674, 686, 696, 718, 720, 728
Object relations, 108
Objective, 6, 23, 25, 29, 32-33, 42, 70, 80, 83, 85, 87-89, 91, 93, 129, 133, 187, 195, 209, 255, 264, 412, 429, 432, 458, 464, 519-520, 567, 595, 609, 615, 623, 630-631, 633, 656, 659, 670, 682, 712

Objective observation, 6, 25, 42, 633
Objectives, 148, 206, 591, 608, 667, 670, 672, 675, 711
    measurable, 670
    observations of, 591

Objectivity, 25
Observation, 1, 3-6, 8, 10, 12-14, 16, 20, 23, 25, 29, 42, 46-47, 61, 68, 70, 73-75, 90, 96, 99-101, 110-117, 119-120, 122, 126-131, 133-135, 138, 140, 144, 150, 156-158, 169-170, 179-180, 184-185, 187, 195, 203, 234, 237, 257, 259, 268, 293, 295, 297, 300, 333, 364, 374, 376-377, 389, 392, 397, 442, 488, 516, 519-520, 529, 532-533, 552, 569, 576, 589, 598-599, 601, 626, 633, 641, 650, 652, 659, 661-662, 695-696, 700, 704, 711, 719, 723-724
    focused, 150, 442, 519
    of students with disabilities, 114, 576
    quantitative, 10, 120, 724
    self-monitoring, 16, 75, 598-599, 601, 650, 700, 719, 724
Observational learning, 444, 698, 727
Observational recording, 120, 140
Observations, 6, 8, 14, 23, 61, 69, 73, 75, 82, 89-90, 97-98, 126-129, 132-134, 143, 147, 157, 166, 170, 185, 187, 190, 197, 206, 229, 231, 239, 254, 258, 268, 271, 278, 297, 310, 502, 516, 521, 529, 532, 561, 570, 591, 641, 682-683

Observer training, 128-129, 143
Occupational therapist, 261
Office for Civil Rights, 700
Office of Special Education Programs, 594
Ohio, 35, 74, 87, 109, 111, 117, 210, 257, 260, 262-263, 422-424, 549, 604, 618, 640, 689, 701, 704, 708, 712, 721, 725, 727
Ohio State University, 35, 74, 87, 109, 111, 117, 210, 257, 260, 262, 604, 689, 701, 708, 712, 721, 725, 727


Older adults, 418, 685, 692
Ongoing process, 147
Onset, 6, 8, 13, 15, 50, 59, 96, 100, 110, 116, 120, 199, 278, 307, 314, 349-350, 372, 376, 387, 393, 397-399, 401, 404-406, 435-437, 445, 457, 474, 476, 478, 503, 568, 609, 690
Onsets, 312, 398
On-task behavior, 99, 110-111, 113-114, 127, 130, 320, 485, 590, 605-606, 609, 613, 689, 691, 693, 699-700, 705, 708, 728

Ontogeny, 4, 9, 12-14, 44, 52-53, 60, 63, 66, 689
Open-ended questions, 296
Openers, 636
Operant behavior, 2, 4, 8-9, 12, 15, 18, 28, 30-31, 34, 43, 44-46, 51-53, 55, 62-63, 66, 88, 278, 409, 536, 597, 607, 686, 692, 702-703, 712, 717, 721-723, 727

Operating system, 119
Operational definition, 255, 273
Opinions, 6, 25, 93, 120, 258-262, 274, 362, 667
Oral reading, 33, 94, 98, 100, 126, 128, 132, 212-213, 360, 695, 721
Ordinate, 149, 155
Oregon, 36, 637, 709, 727
Org, 162, 669-670, 685-686, 692, 706, 713
Organization, 63, 160, 162, 537, 580, 585, 587, 591, 667-668, 684, 694
Organizational behavior, 673
Organizations, 27-29, 368, 370, 667-668, 671, 673, 684
Organizing, 23, 45, 85, 592
Orientation, 33, 537, 584, 623
Orthography, 132
Osteoporosis, 418
Outcomes, 8-9, 19, 36-39, 52, 68, 76, 79-80, 83, 87, 94-95, 103-104, 115-116, 122, 127, 130, 140-141, 155, 170, 177, 179, 193, 250, 253, 257-259, 261-263, 270, 274, 282, 345, 387, 389, 417, 434, 448, 499, 511, 520, 562, 576, 586, 588-591, 594, 616, 623-624, 627, 629-634, 658, 660-663, 665, 667, 670, 674, 680-681, 684, 687, 698, 700, 704, 717, 726

    predicting, 19
Outliers, 171
Outlines, 27, 104, 677
overcorrection, 13, 15, 216, 344, 356, 360-362, 364-365, 367, 372, 375, 610, 666, 676, 685-686, 691, 697, 707, 712, 714, 721-722
    positive practice, 13, 15, 344, 360-361, 372, 610, 686, 721
    restitutional, 13, 15, 344, 360, 372
Overlap, 69, 174, 206, 209-210, 213, 218, 435, 629
Overlearning, 637
Overweight, 588, 596
Oxymoron, 705

P
Pacing, 46
Painting, 65, 103, 227
Palm, 50, 88, 108, 294-295, 357, 479
PANs, 601
Pantomime, 429
Paradigm, 29-30, 45, 52, 142, 270, 276, 689, 720
Parents, 2, 9, 36, 39, 42, 65, 70-72, 78-79, 83, 85, 90, 95, 133, 158, 187, 206, 210, 217, 224, 233, 238, 240-241, 256-259, 261, 270, 287, 293, 297, 325, 337, 360-362, 364, 378, 381, 383-385, 387, 409, 428, 432, 470-471, 473-474, 477-478, 482, 491-492, 519, 531, 533, 546-547, 552-553, 559-560, 562-565, 567, 576, 581, 593-594, 596, 609, 617, 623, 657, 660, 665-666, 670, 677, 682, 691, 693, 699, 701, 706, 713, 717-718, 720
    expectations of, 79
    involvement, 660
    involvement of, 660
    participation by, 83, 90, 706
    response of, 259

Parsimony, 13, 16, 22, 25-27, 42, 179
Partial interval recording, 600
Partial-interval recording, 13, 18, 92, 110, 112-115, 120
Participants, 10, 18, 34, 36, 39, 41, 64-65, 83, 85, 94, 116, 129-130, 140, 150, 152-154, 156, 176, 183-184, 206, 213, 217, 221, 224-225, 227, 229, 232, 236, 243, 248, 259, 263-264, 271, 274, 284-285, 290, 300, 303, 306-307, 314, 316, 319, 326-329, 331, 333, 336-338, 341, 351, 353, 362, 364-365, 367-368, 389, 414, 418, 429, 483, 486, 490, 496, 505-508, 564, 567-571, 573-575, 581, 589, 601, 603, 606, 608, 617, 627, 636, 644, 646, 656, 673
    debriefing, 673
Participation, 83, 90, 113, 267, 277, 369, 398, 431, 433, 446, 531, 562, 570, 677, 691-692, 697, 706, 710, 713

Past experiences, 128
Path, 2, 5, 19, 146, 149-152, 156-157, 166-171, 176-177, 179, 188, 190, 198, 200, 213, 217, 223, 267, 460, 712

Pathology, 95, 539
Patience, 648
Patterns, 63, 73, 98, 131, 142-143, 146, 158, 162, 169, 172, 175, 184, 187-190, 195, 201, 218, 266, 338, 515, 519-520, 522, 629, 641, 656, 688
    number, 73, 98, 158, 184, 187, 195, 519, 629, 641, 656
patterns of behavior, 63, 98, 146, 522
Pavlov, Ivan, 30, 698
Paying attention, 117, 156, 356, 576, 604, 606
Peabody Individual Achievement Test, 73, 710
Peabody Picture Vocabulary Test, 694
Peak, 617
Peck, 97, 199, 207, 281, 382, 384, 387, 411, 415-416, 435, 686, 690, 692
Peer assistance, 656, 728
Peer interaction, 556, 716
Peer pressure, 577
Peer review, 674, 680, 683
Peer support, 692
Peer tutoring, 166, 462, 699, 701, 724

    classwide, 701, 724
Pencils, 228, 359, 570
People with disabilities, 12, 40, 79
Perception, 183, 378, 537, 682
Perceptions, 169, 378, 717
Performance, 5, 9-10, 12-13, 16, 18, 29, 32, 34, 39, 48, 70, 73, 81, 87, 89, 91, 96-98, 101-103, 106, 115, 120, 125, 128-129, 132, 134, 140, 147-148, 151, 153-154, 156-158, 160-162, 165, 167, 169-171, 174-177, 181-183, 187-188, 190, 194-195, 200, 204, 209-210, 213, 216-218, 221, 223-225, 229, 231-234, 242-244, 246-248, 253-254, 256, 258-259, 267-268, 270, 273-274, 279, 282-283, 292, 305-308, 318-320, 322-323, 332-333, 335-337, 339-341, 343, 375, 381, 385, 389, 418-420, 425, 428, 430, 432-433, 435-436, 440-445, 448, 450, 452-453, 456-460, 463-465, 491, 504, 559, 561-563, 567-570, 573, 575-576, 578-582, 588-591, 599, 601-604, 606-609, 611-613, 615-616, 619, 625, 627, 630-631, 634-640, 642-648, 650, 652, 656, 659-663, 670, 678, 683, 687, 691, 693-695, 697-698, 703, 705-706, 710-712, 714-717, 719-721, 724, 728
    level of, 10, 12, 48, 70, 97, 101-103, 120, 125, 128, 134, 140, 147, 167, 169-171, 174, 177, 187, 190, 194-195, 200, 210, 213, 216, 221, 223, 229, 231, 233-234, 242, 244, 246, 248, 253, 256, 258-259, 270, 273-274, 305, 381, 420, 432, 440, 459, 463, 491, 562, 580, 599, 602, 611, 650, 656, 659, 663
    measurement systems, 101, 120, 128, 246
Performance criteria, 10, 91, 102, 568, 580, 582, 590-591, 604, 611-612, 615, 619, 638, 648, 670

performance feedback, 129, 256, 691, 694
Performance goals, 590
Performance standards, 590
Period, 3, 5-13, 17-18, 20, 34, 56, 74-75, 80, 88, 92, 96, 98-101, 105, 109-113, 115-117, 120, 126-127, 130, 134-135, 144, 148, 150, 156-158, 160-161, 163, 167-168, 170, 176-177, 186-187, 190, 195, 203-204, 209, 215-217, 219, 237, 240, 244, 247, 256, 258, 266, 268, 278, 281, 284, 293-294, 297, 299, 301, 326-327, 329, 345, 347, 349, 366, 375-378, 380, 382, 384-385, 402, 407, 410, 413, 436, 457, 460, 478, 488, 492, 506, 513, 516, 519, 522, 526, 529, 533, 562, 568, 570-573, 591, 599, 603-604, 606-611, 613, 644-647, 650-651, 657-658, 674
Periodicals, 578
Perpendicular lines, 10, 149, 176
Perseverance, 27, 128, 477

Persistence, 15, 448, 475-476, 624, 685, 690, 698
Personal experience, 190, 666
Personal relationships, 683
Personal space, 378
Personality, 73, 712
Personality inventories, 73
Personnel, 34, 83, 129, 568, 670, 698
Peterson, 32, 73, 95, 204, 250, 255-257, 264, 429, 510, 516, 549-550, 605, 680-682, 687-690, 692, 696, 701, 709, 715-716
Philosophy, 3, 12, 23, 27-29, 31-33, 39, 41, 43, 369, 538, 547, 585, 666, 682, 691, 701, 705, 710, 712
    contemporary, 29, 43
Phobias, 17, 614, 620
Phoneme, 78, 307

    segmentation, 78
Phonemes, 537, 541-542
Phonemic Awareness Skills, 78
Phonics, 246
Photographs, 63, 116, 263, 566, 654
Physical contact, 13, 293, 357-358, 376, 420, 540
Physical disabilities, 691, 708
Physical education, 376-377, 457-458, 463, 571, 719, 727
Physical environment, 116, 326, 417, 511, 540
Physical fitness, 463, 716
Physical proximity, 411
Physical restraint, 349, 364, 366, 507, 585, 721
Physical safety, 370, 680
Physical science, 35
Physicians, 34
Physiological factors, 520
Picture, 14, 17-18, 24, 69, 75, 114, 123-124, 126, 130, 136, 139, 143, 154, 174, 190, 206, 209, 213-215, 227, 231, 240, 248, 299, 414-416, 437, 461, 506, 508, 521, 533, 541, 547-548, 550, 553-555, 612, 632-633, 689, 694, 699, 709, 724, 726

    recipes, 724
Pictures, 16, 154, 296, 299, 378, 414, 418, 502, 529, 547, 551, 566, 571, 596, 641, 653, 657, 667
PILOT, 460, 545
P.L, 704
Placement, 86, 168-169, 299, 450-451, 483, 574, 670
Planned ignoring, 13, 289, 374, 376, 389, 524
Planning, 69, 71, 83, 88-89, 148, 174, 180, 182, 187, 193, 195, 228, 245-274, 450, 476, 478, 592, 595, 631-632, 644, 662, 667, 720

    inadequate, 265
    of questions, 260, 264, 267, 274, 632

plants, 65, 225, 418, 618
Platform, 459
Plato, 95, 701
Play, 32, 48, 56, 64, 74-75, 79-80, 85-86, 98-99, 101, 108, 110, 131, 143, 153, 199, 203-204, 224, 257-258, 265, 267, 278, 281, 288, 291-292, 296-298, 301, 308, 315, 317, 322, 337-338, 383, 405, 427-429, 459, 479, 489, 493, 514-515, 522-523, 525-530, 532, 540, 543, 549, 554, 559, 563, 571, 573, 580, 584, 589, 629, 642, 646, 649, 652, 667, 674, 680, 686, 694, 702, 710, 712, 718, 726
    absence of, 110, 288, 308, 315, 322, 427-429, 489, 589, 642, 674
    blocks, 80, 86, 204, 308, 493, 629
    pretend, 652

Play area, 338
Plays, 25, 56, 170, 209, 218, 271, 319, 327, 348, 394, 401, 409, 413, 428, 448, 537, 541, 543, 546, 553, 614, 644
Plot, 166, 168, 209, 463, 704, 724
Plots, 208
Poetry, 549
Point of view, 186
Pointing, 20, 52, 82, 98, 296, 346, 419, 427, 538, 553
Policies, 363, 369-370, 381, 571, 677-678
Policies and procedures, 369
Policy, 369-370, 373, 379-380, 666, 669
Political activity, 283, 286
Pooling, 249
Population, 52, 73, 181-182, 184, 246, 264, 274, 636
Populations, 83, 381, 674, 712
Positioning, 416
Positive behavior, 29, 278, 388, 523, 564, 578, 596, 610, 686, 707, 721, 723-724
Positive behavioral support, 718
Positive correlation, 350
Positive learning environment, 504


Positive Practice, 13, 15, 344, 360-361, 372, 610, 686, 721
positive practice overcorrection, 13, 15, 344, 360, 372, 610, 721

Positive reinforcement, 13-14, 18, 40, 44, 56-58, 62, 66, 69, 202, 213, 226, 276-310, 312-313, 316-320, 322-323, 347, 360, 368-369, 371, 373, 374-376, 381-382, 385, 388-389, 401, 407, 437, 460, 466, 470, 477, 479, 483-484, 490, 501, 509, 511-512, 515, 560-561, 584-585, 607-608, 676, 687, 716
    schedules of, 13, 40, 57, 276, 296, 300-301, 308, 310, 368, 374
    secondary, 66, 289, 437
Positive reinforcers, 18, 61, 315-316, 375, 378, 385, 387-389, 402, 407, 482
Postassessment, 430-431
Post-test, 444
Posttest, 102, 246-247, 253
Pot, 126, 601
Potential, 3, 10, 24, 39, 42, 48, 60, 69-70, 72-73, 75-77, 79, 82-84, 90, 100, 119, 128-129, 179-180, 184, 187, 194, 204-205, 209, 218, 229, 234, 237, 244, 250, 252-253, 256-257, 265, 267, 273, 278, 289, 294-297, 300, 303-304, 310, 337, 357, 361, 369, 371, 380, 385, 387, 389, 414, 446, 449, 466, 484, 490, 494, 508, 512, 514, 519-523, 549, 551, 564-565, 588-589, 591, 615-616, 640, 653, 660, 665-668, 671, 675-679, 683, 687, 696, 711, 723

Poverty, 64
power, 38, 190, 234, 248, 290, 293, 403, 414, 580, 607, 696, 722, 724
    destructive, 696

Practice, 11, 13, 15, 19, 25-27, 31-32, 36, 40-41, 43, 59-60, 78, 81, 85, 94-96, 102, 108, 115-116, 118, 124-125, 128-129, 139, 143, 149, 164, 166, 175, 178, 190, 195, 202, 209, 211, 216, 224-225, 231, 234, 236-238, 242, 244, 249, 254-257, 264, 267, 272, 279-280, 282, 290, 299, 313, 318-319, 323, 325, 335, 337, 339, 344-345, 351, 360-361, 369, 372, 417, 428, 458-459, 461, 465, 478, 499, 516, 519, 524, 533, 561, 564, 574, 579, 583, 589, 591, 593-594, 606, 609-610, 612, 614, 617, 619-620, 626, 635-636, 639-641, 648-649, 652, 659-660, 662, 664-671, 674, 678, 680, 682, 684, 686, 689-691, 700, 702-703, 706, 710, 715, 720-721, 726, 728
    acts, 118, 282
    termination of, 94, 313, 524, 678

Practice effects, 13, 178, 190, 195, 216, 242, 254, 267
Praise, 23, 34, 56, 59-60, 86, 109, 132, 140-141, 156, 167, 183, 186, 203-206, 227, 230, 268, 282, 287, 290, 293, 295, 307-310, 315, 318, 325, 333, 350, 356-357, 361, 385, 402-403, 407, 430-433, 436, 442, 460, 483, 488, 491-492, 494, 505-506, 512, 526-527, 541-542, 553-555, 561, 569, 573-574, 580, 586, 589, 596, 600, 603, 608-610, 615-616, 618, 628-629, 635, 645, 649-653, 656-657, 660, 663, 691-692, 705, 712, 720, 723-724

Precision teaching, 28, 160-163, 688, 692, 706-708, 716, 727
Predicates, 537
Predicting, 15, 19, 26, 172, 190, 209, 213, 291, 310
Prediction, 2, 13, 15-16, 20, 23-24, 27, 29, 39, 42, 62-63, 93, 120, 178-179, 187, 189-195, 197-201, 209, 218, 221-223, 233, 235-236, 238-239, 250, 269, 273, 369, 547, 615-616, 699, 705, 722

Predictive validity, 699
Preference assessments, 154, 295-296, 299-301, 306, 310, 691, 693, 701, 708, 714, 719
prejudice, 683
Premack Principle, 13, 15, 276, 291, 297, 309, 561, 589
Preplanning, 580
Preposition, 710
Prepositional phrases, 537
Prepositions, 227, 537, 540, 552, 554
Preschool, 24, 34, 102, 108, 126, 203-204, 233, 289, 293, 308, 350, 412, 436, 444, 456, 477, 479, 482, 505, 612, 629, 650, 653, 673, 686-687, 689, 692-694, 697, 700, 711, 715-719, 722-723

Preschool children, 24, 34, 102, 108, 204, 233, 289, 308, 350, 505, 650, 653, 686-687, 692, 694, 715, 718-719, 723
Preschool classrooms, 24, 686, 697
Preschoolers, 250, 439, 505, 566, 590, 650, 687, 689, 691, 705-706, 714, 728
    aggression, 714

Prescribing, 668
Presence, 6-7, 11-14, 17-19, 23, 26, 55, 61, 66, 75, 79, 90, 110, 113, 118-119, 127, 129, 149-150, 174, 179, 183-184, 186, 189, 191-193, 197, 209, 216, 223, 227, 230, 236, 242, 248, 253, 256, 267, 273, 281, 284, 295, 301, 303-304, 307, 310, 312, 314-316, 322-323, 347-349, 353, 355, 372, 393, 395, 397-398, 400-406, 409-412, 416-417, 419, 424-425, 435-437, 442, 444, 449-451, 453, 456, 458, 466, 500, 518-519, 525, 537, 540-541, 543, 546-549, 553, 557, 566, 589, 597, 616, 624-625, 627, 630, 639-640, 642, 644, 653, 662, 674, 680, 690, 692, 727
Presentation, 2, 6, 8-9, 12-13, 15, 19, 24, 47, 50, 56-57, 59-61, 63, 66, 78, 98, 102, 132, 154, 156, 169, 185, 200, 203, 217, 256-257, 278-280, 284, 287, 298-299, 301, 303-305, 309-310, 312-316, 320, 322-323, 339, 344-373, 401, 420, 427, 429-430, 432, 442, 451, 482, 503, 524, 570, 574, 597, 685, 693, 704, 715, 727

Presuppositions, 130
Pretest, 102, 129, 209, 246-247, 253, 430
Prevalence, 475, 707
prevention, 39, 42, 362, 511-513, 533, 694, 703, 711, 714, 727
Preventive maintenance, 129
Primary sources, 682
Priming, 460, 466
Principle of reinforcement, 59, 278
Principles of Behavior Modification, 688
Print, 76, 594
Printing, 168-169, 592
Privacy, 669, 680, 684
Private practice, 671
Probability, 1, 3-4, 9, 13, 16, 20, 23, 42, 56, 79, 163, 179, 181, 192-195, 208, 217, 242, 246, 255, 264, 270-271, 278, 291, 297-298, 309, 343, 348, 356, 378, 382, 389, 451, 456, 469, 479, 498, 501, 504-506, 509, 510, 518-519, 527, 585, 587-588, 615, 619, 640, 642, 686, 693, 705, 721, 727

Probe sessions, 229-230, 263, 626, 638-639
Problem behavior, 4-6, 8-9, 12-13, 15, 20, 24, 29, 63, 82, 84, 99, 152-153, 170, 198-199, 227, 250-251, 257, 259-260, 262, 316-318, 320, 335, 340, 346, 351-352, 354-355, 357-367, 370, 372, 375, 401, 403, 406-407, 469-472, 474, 476-478, 480, 482-494, 496-497, 500-504, 506-509, 511-516, 518-534, 608-610, 613, 620, 686, 695, 697, 700, 704, 706-707, 711, 714, 717, 722, 724-725
    self-injurious, 153, 170, 262, 351-352, 354, 359, 362-365, 367, 471-472, 478, 485-486, 489, 492, 494, 497, 501-502, 506, 511, 513, 700, 706-707, 722
    stereotypic, 250, 320, 359-360, 489, 494, 707
Problem solving, 15, 102, 409, 537, 592, 594, 616, 710
Problem-solving, 592, 702
Procedural Safeguards, 369-370, 373
Procedures, 3, 9, 12, 14, 16-17, 28, 31, 33-34, 37-40, 42-43, 58-59, 62, 66, 71, 73, 75, 79, 83, 85, 89-90, 92, 94, 99, 103, 108-110, 114-115, 118-120, 122-124, 126-132, 134, 142-143, 150, 156, 167, 179, 184-185, 195, 211-216, 228, 246, 250, 255-256, 258-262, 264-268, 270-274, 276, 284, 289, 293-295, 297, 299, 303-304, 307, 310, 311, 318-320, 324-325, 327, 329, 333, 335, 343, 344, 361-370, 373, 374-375, 379-382, 384-385, 388-389, 402-403, 407, 408, 410-414, 416-422, 425, 429-431, 434-437, 442, 446-448, 452-453, 454, 459-460, 465-467, 468-471, 475-477, 479-480, 481-483, 485-489, 491-497, 498-501, 503, 507-509, 513, 516, 519, 521, 524, 530, 534, 536, 547-548, 551-556, 558-560, 564, 568, 570, 572-573, 576, 581-582, 583, 586-587, 589, 592, 594, 601, 603, 606-607, 609, 611-615, 618, 622, 624, 626, 634-635, 638, 641-642, 646-647, 656, 661, 663, 665-667, 669-670, 672, 674-678, 680, 683, 686-689, 691-692, 696-699, 702-704, 706, 708, 711-712, 714-715, 717-718, 720-721, 723-724, 726-728

process of teaching, 541
Processing, 32, 537
Product, 2, 4, 8, 11-13, 18-19, 24-25, 40-41, 49, 51-52, 59, 64, 66, 79, 81, 92, 97, 101, 115-116, 118-119, 121, 133-135, 142-143, 147, 176, 254, 266, 280, 284, 286-287, 289, 309, 325, 348-349, 415, 424, 437, 452, 455, 466, 488, 541-542, 550, 556-557, 567, 587, 594, 606
Productivity, 39, 111, 118, 156, 183, 254, 350, 388, 590-591, 598, 601-602, 605-606, 613, 653, 689, 691-692, 698, 700, 707-709
Products, 19, 24, 61, 63, 81, 86-87, 115-119, 121, 129, 133-134, 139-140, 142, 254, 287-288, 360, 544, 566, 587, 606, 624, 689, 703, 712

Professional associations, 7
Professional behavior, 684
Professional boundaries, 683
Professional competence, 664, 671, 674, 684
Professional development, 664, 672, 674
Professional ethics, 713
Professional issues, 35
Professional organizations, 27, 368, 370, 667-668, 671, 684
Professional Psychology: Research and Practice, 710
Professional relationships, 677
Professional standards, 68, 664, 667-668, 674, 684, 706
Professionals, 80, 83, 85, 368, 381, 666, 671, 678, 682-683, 701, 720
Profiles, 674
Program effectiveness, 88
Program planning, 83
Programmed instruction, 34, 689
Programming, 14, 154, 156, 215, 308, 335, 366, 466, 493, 501, 569, 611, 622, 627, 631, 636, 640-642, 644, 647-648, 652-654, 657, 660, 662-663, 667, 677, 685-686, 690-691, 693-695, 697, 704, 711, 717, 720, 723
Programs, 20, 29, 34, 36, 39-41, 56, 69, 71, 76, 78, 80-81, 83, 85, 89-90, 94-95, 116, 168-169, 243, 256-258, 272, 292, 294, 312, 322, 325-326, 345, 369, 379, 386, 402, 405, 407, 414, 432, 536, 550, 554, 556, 559, 569, 574-576, 586, 589, 591-594, 609, 611-612, 615-616, 620, 639, 642, 656-657, 660, 663, 666, 669-672, 680-681, 686, 688, 702, 712
    regular classrooms, 574
    remedial, 69, 686, 712

Progressive time delay, 420, 701
Project, 162, 308, 422-424, 483, 595, 608, 616-617, 704
Projection, 162-163, 594
Projects, 64, 161, 296, 577, 591-592, 611, 616, 708
Promotions, 328
Prompts, 5, 16, 46, 102, 133, 227-230, 287, 291, 307, 310, 359, 361, 417-420, 422, 425, 429, 431-433, 443, 445, 448, 450, 452, 455, 471, 493-494, 500, 502, 507, 514, 527, 543, 552-556, 573, 586, 595-597, 604-605, 612-613, 618-620, 628-629, 642, 650, 652-654, 656-657, 660, 663, 688-689, 692, 694-695, 697, 701, 713-714, 716, 723, 726
    topics, 697, 723
Property, 16, 45, 56, 60-62, 149-150, 263, 281, 286, 306, 320, 412, 428, 479, 492, 513, 524-526, 547, 550, 557

Propositions, 93
Prosocial behavior, 576, 721
Prosody, 537
Protecting, 677-678
Protocol, 6, 118, 240, 297, 427, 429, 433, 679, 686, 716
Psychiatric disorders, 336, 492
Psycholinguistics, 537-538
Psychological measurement, 691
Psychologists, 30, 668-669, 671, 681, 684, 685-686, 699, 701, 713
Psychology, 27-29, 33, 42, 57, 101, 181, 184, 203, 246, 261, 264, 293, 391, 471, 516, 537, 539, 578, 671, 685, 687, 689-691, 693-700, 704-722, 724, 726-727
    social psychology, 471, 698, 712, 727
Psychotherapy, 31, 707, 728
Psychotic disorders, 34
Psychotropic medication, 676
Publication Manual of the American Psychological Association, 685


Publications, 379, 439, 445, 640, 674, 705, 713, 727
Publishing, 34, 36, 281, 462, 673, 688, 705
Punctuation, 561
Punctuation marks, 561
Punishment, 2, 4-5, 8-9, 12-15, 18-19, 34, 44, 53-54, 56-59, 61-62, 64, 66, 76, 82, 200-201, 206, 244, 250, 266, 270, 272, 281-283, 293, 313-315, 322, 338, 344-373, 374-389, 390-391, 393, 396-400, 406, 459, 462, 470, 482, 487, 491-492, 499-500, 513, 516, 533, 536, 544, 557, 594, 603, 607-610, 616, 620, 631-632, 662, 666, 672, 686-687, 689-691, 694, 696-699, 702, 707-708, 710, 712-714, 718-719, 722, 724-726, 728
Punishment procedure, 8, 14, 359, 361, 364, 368-370, 373, 375, 385, 389, 390, 393, 396-398, 406, 491
Purchasing, 591, 595
Puzzles, 716

Q
Quadriplegia, 667
Qualifications, 54, 278, 678, 720
Quality, 16, 65, 93-94, 101, 122-144, 255, 265, 267, 274, 284, 304, 306, 310, 327, 337-338, 364, 372, 412, 455, 466, 476, 480, 502, 506, 551, 587, 595, 601, 619, 643, 648, 650-651, 659, 680, 695-696, 702, 712-713, 720

Quality control, 128, 712
Quantities, 13-14, 17-18, 95-99, 101, 115, 120, 187, 337
Quantity, 13, 19, 93-95, 101, 103, 120, 170, 184, 194
Quasi-experimental designs, 690
Questioning, 42, 190
Questionnaires, 9, 71, 90, 258, 274, 519-520, 533
Question(s), 249, 273
Questions, 7, 22-23, 31, 33, 35, 37, 41, 45, 68, 70-71, 73-76, 79-80, 82, 88-90, 92-93, 95, 100, 108-109, 116, 118, 120, 122, 124, 126, 139-140, 142, 146, 169, 177, 178-180, 193, 195, 196, 216, 220, 227-229, 235, 238, 245-246, 250, 257-260, 263-264, 267-270, 272, 274, 276, 296, 311, 324, 344, 371, 374, 386, 390, 401-402, 407, 408, 417, 426, 434, 454, 466, 468-469, 477-478, 481, 485, 494, 498, 510, 520-521, 526, 528, 536, 539-540, 542-543, 551-553, 555-556, 558, 571-572, 583, 602, 605, 612, 615-616, 625-626, 632-633, 642, 650, 665-668, 673, 675, 677, 679-683, 715, 725, 727
    closed, 169, 626
    easy, 75, 108, 238, 528, 551-552, 555, 571, 632, 725
    formulating, 70
    generating, 572
    harder, 417, 555
    inferential, 235
    investigating, 180, 238
    leading, 89
    leads, 71, 79, 666, 680
    moving on, 551
    on tests, 269
    poor, 124, 296, 417, 668
    scaling, 169, 267
    subject matter of, 539
    what if, 126, 193

Quizzes, 180, 332
Quotation marks, 654

R
Race, 29, 339, 588
Radiation, 409
Radical behaviorists, 33
Radio, 201, 362, 618
Random numbers, 329, 333
Random selection, 211
Range, 3-4, 8, 13, 17, 19-20, 42, 50, 52, 54, 63, 66, 69, 72-73, 75-76, 79, 89, 97-98, 100-101, 103, 106, 116, 124, 128, 135, 137, 140-142, 148-149, 154, 165-171, 174, 176-177, 186, 188-189, 195, 209-210, 229, 241-242, 246, 249, 259, 261, 267, 273, 278-279, 287, 290, 305, 315, 322, 328, 350, 362, 372, 379, 446-448, 456, 458, 460, 482, 484, 486, 489, 515, 521, 531, 561, 590, 595, 601-602, 616, 629, 634-639, 641, 646, 650, 654, 662, 666, 675, 713, 727
Rape, 728

Rash, 512, 546, 555
Rates, 3-6, 8, 10-11, 16-17, 19, 23-24, 66, 82, 96-100, 109-110, 119, 126-127, 138-139, 144, 149, 156-157, 160, 173, 206-207, 211, 226, 240, 243-244, 268, 282, 284, 289, 291-293, 302-303, 310, 316, 318, 320, 324, 327-329, 331-332, 334-336, 340, 343, 355, 357-358, 362, 366, 379, 383, 471, 481-482, 485, 490-494, 496-497, 503, 508, 514, 522, 526, 529, 581, 590-591, 601, 606, 613, 636, 708, 726-727

Rates of learning, 211
Rating scales, 9, 72, 519-521, 533, 688
Ratio scale, 159
ratio schedules of reinforcement, 326, 335
ratios, 14, 96, 101, 327-330, 571, 590
Reaching, 33, 79, 307, 331, 337, 348, 404, 418, 540, 551, 606, 638-639, 674
Reactivity, 12, 14, 68, 75, 90, 119, 122, 130, 143, 164, 237, 254, 268, 599, 689, 700, 704-705, 708, 713

    reactivity to measurement, 254
Readiness, 29, 32
Reading, 33, 45, 53, 55, 69, 73, 78-80, 94, 96, 98, 100, 102, 109, 116, 124, 126, 128, 132, 160, 166, 179, 181, 211-213, 216, 239, 243, 280, 282, 291-294, 297-298, 306, 318-319, 330, 360, 382-383, 414, 463, 538-539, 542, 556, 566, 570, 574, 578-579, 590, 598, 605, 608, 632, 634, 643, 646-650, 655, 667, 674, 686, 689, 692, 694-695, 704-705, 713, 719-721, 728
    acceptable, 128, 463, 598, 667, 695
    assisted, 705
    cumulative, 166, 211, 319
    difficulties, 280, 719
    effective instruction, 69, 318
    pointed, 69, 538
    to students, 73, 294, 632, 694, 705, 721
    wide, 73, 79, 98, 116, 578, 590, 632, 634, 646, 650
Reading comprehension, 109, 132, 181, 542, 605
Reading fluency, 128, 695
Reading instruction, 713
Reading performance, 216, 319
Reading skills, 73, 78, 566
Readings, 704, 709, 719
Reasoning, 1-2, 13, 16, 20, 27, 183, 187, 190, 192, 194-195, 216, 232, 236-237, 247, 249, 264, 269, 280, 309, 357, 585, 619

Recall, 221, 315, 401, 520, 596, 647, 667, 689
Receiving, 4, 18, 46, 273, 281, 289, 300, 302, 325, 330-332, 338, 345, 376, 402-403, 405, 407, 429, 441, 459, 572, 608, 613, 631, 639, 641-642, 645, 648, 651, 657, 659, 669, 677, 684

Receptive language, 405, 471, 539, 542, 545, 556
Recess, 376, 382, 385, 387-388, 436, 523, 577-578, 649
Reciprocal inhibition, 728
Reciting, 625
Recognition, 421, 653
Recommendations, 68, 128, 139, 164-165, 371, 450, 476, 502, 611, 665, 685, 717, 726
Reconstruction, 28, 728
Record keeping, 669
Recorders, 129, 131, 183, 598
Recording, 1, 7, 10-11, 13, 18-20, 26, 68, 73-74, 90, 92, 98-100, 104, 108-117, 119-121, 122, 127-130, 132, 135-137, 140-142, 144, 150-151, 157, 161, 164, 168, 184-186, 199, 257, 268, 293, 297-298, 440-441, 458, 516-519, 526, 533, 537, 560, 562, 570, 578, 580-581, 590, 598-601, 603-604, 606, 608, 618-619, 650, 657, 689-690, 692, 695, 700, 705, 707-710, 712, 717, 724

Recordkeeping, 592
Records, 1, 5, 13, 16, 20, 73, 97, 110, 112-113, 117, 121, 134-135, 138, 148, 155-158, 176, 297-298, 316-317, 381, 388-389, 516, 565, 580, 586, 598, 602, 615, 672, 679-681, 684
    strength of, 316, 580
Recruitment, 508, 608, 650-652
Recycling, 86, 116, 585, 690
Redirection, 359, 362, 442, 507, 699
Reference, 3, 45-46, 51-52, 56, 61-63, 65, 87, 96, 98, 100-101, 189, 278, 286, 399, 586, 619
Referral, 71, 240, 673, 681
Referrals, 664, 672
Reflecting, 669

Register, 35
Regression, 172, 177
Regrouping, 78, 97, 528, 603, 635, 694
Regularity, 72, 506
Regulations, 673, 677, 681
Rehabilitation, 376, 686
Reinforcement, 1-20, 28, 34, 40, 44, 52, 54, 56-62, 64, 66, 69, 75-80, 82, 84, 90, 100, 103, 108, 111, 130, 155-157, 159, 167, 183, 186-187, 195, 200-206, 212-213, 218, 221, 226, 229, 233, 239-240, 242, 244, 248, 250-251, 253-254, 257, 259-260, 262-263, 270, 276-310, 311-323, 324-343, 345-348, 350-364, 366-369, 371-373, 374-383, 385-389, 391-407, 409-417, 424-425, 427-429, 432, 435-437, 442-443, 445-446, 448, 450, 452-453, 454-456, 458-460, 462-467, 469-480, 481-497, 498-504, 506-509, 511-516, 518, 520-524, 527, 530, 533-534, 536, 538-544, 546, 548-549, 551-557, 560-561, 568-569, 571-576, 580-581, 584-590, 594, 601-603, 606-613, 616, 619-620, 622, 624, 626, 628-635, 641-650, 652-656, 658, 660, 662-663, 666-667, 672, 674, 676, 680, 685-728
    consequences and, 469
    differential, 2-6, 8, 10, 13, 15-17, 20, 59, 66, 100, 103, 186, 195, 218, 248, 250-251, 304, 307, 310, 318, 321, 324, 334-336, 340, 343, 353, 362, 366-367, 369, 373, 379-381, 385, 393, 406, 410, 412-413, 416, 424-425, 454-456, 458-459, 465-466, 477, 481-497, 499-501, 503, 506, 509, 524, 538, 540, 549, 551-552, 569, 580-581, 586, 601, 606, 616, 655, 666, 674, 690, 692-694, 705, 707-709, 715-718, 726
    extinction, 4-8, 14-17, 44, 54, 57, 59, 66, 186, 289, 303-305, 310, 325, 331, 336, 340, 343, 346, 348, 353, 357-358, 366-367, 369, 373, 379, 381-382, 392-393, 396, 401-407, 411-412, 455-456, 459, 463, 466, 469-480, 482-483, 485-486, 490, 497, 499-502, 504, 507-509, 512, 522, 524, 538, 553, 611, 624, 628, 644-646, 688, 693, 696, 698-699, 702-704, 706-707, 710, 715, 717-719, 722, 726-727
    of alternative behaviors, 367, 477, 500
    of incompatible behaviors, 64, 693
    thinning of, 351

Reinforcement history, 2, 75, 90, 389, 500, 540
Reinforcement ratio, 324, 326
Reinforcement schedules, 14, 108, 159, 300, 310, 334, 342-343, 465, 501, 576, 692, 702, 704, 707, 713, 725
Reinforcers, 2-4, 9-10, 13-15, 17-19, 47, 53, 56, 58-61, 66, 69, 76, 78-80, 83, 90, 101, 253, 276, 278, 280-282, 287, 289-291, 293-297, 299-303, 306-310, 311, 315-316, 322, 328, 336-337, 340, 345, 347, 349-350, 353, 364-367, 375-378, 380, 382, 385-389, 391-398, 402-403, 405-407, 416, 425, 431, 433, 435, 462, 468-470, 476, 479-480, 482-485, 490, 496, 500-503, 507-508, 511, 520, 523, 530, 533-534, 539-540, 546, 550-553, 561, 568-574, 577, 580-582, 588, 591, 594-595, 602, 610, 631, 633, 635, 643, 647-650, 657, 660, 663, 672, 681, 685, 691, 694-696, 698, 708, 717-718, 722, 725, 728

reinforcing successive approximations, 455, 458
Rejection, 298, 715
Relapse, 707
RELATE, 340, 377, 409, 473, 515, 526, 533, 538, 579, 671
Relationship, 2, 4, 17, 23, 72, 77, 84, 95, 125, 143, 149, 254, 264, 270, 279, 301, 340, 361, 377, 400, 409, 412-413, 425, 436, 453, 461, 559, 566-567, 680, 683, 709, 712, 719, 721
    benefit, 680
Relationships, 7, 29, 41, 62, 101, 147-148, 164, 176, 264, 303, 339, 552, 664, 672, 677, 683-684, 706, 727

    healthy, 29
relaxation, 17, 287, 597, 614, 620
Releasers, 32
Reliability, 14-15, 17, 26, 36, 42, 119, 122-125, 128-133, 142-144, 172, 193, 195, 256, 265-266, 268, 271, 274, 520, 688-689, 692, 700, 704, 708, 714, 720, 722, 728
    illustration, 172
    interrater, 520, 714, 720
    meaning, 142, 708
    measurement error, 124-125, 128-132, 143
    of assessments, 142

Religion, 31, 537, 668
Remembering, 57
Reminders, 595, 604
Repeated reading, 282
Replacement behavior, 8, 80, 259, 320, 323, 482, 524, 534
Replay, 116, 133
Replication, 2, 6, 13, 15-17, 20, 22, 25-27, 42, 85, 178, 186-187, 191, 193-195, 197-199, 201, 209, 218, 221, 223, 229, 233, 235-236, 239-240, 245-246, 248, 250, 265-266, 269, 271-274, 358, 384, 682, 686, 689, 697-698, 700, 714, 717, 722, 724

Reporting, 5, 96-97, 99, 126, 132-135, 137, 139-142, 144, 200, 267, 616, 670, 678-679, 689
Reports, 12, 37, 82, 102, 125, 130-131, 142-143, 148, 182, 209, 296, 320, 338, 362, 457, 607, 615, 670, 672, 682, 689, 692, 698, 708, 713-714, 721
    library, 296
Representation, 51, 95-96, 100-101, 126, 130, 142, 155, 157, 166, 169, 171, 184, 208, 267, 450, 687

Representations, 50, 57, 664
Representativeness, 267
reproduction, 14, 49-50, 53, 65, 394
Research, 7, 10, 13, 16-17, 20, 23-24, 26-28, 30-31, 34, 36-37, 40-43, 50, 54, 59, 63, 75, 78-79, 82, 89, 94, 101, 113, 117, 124-126, 130-131, 133, 139-140, 142, 147-148, 155, 160, 162, 174-175, 179-187, 189, 193-195, 197, 207, 213, 224, 233-235, 238, 245-274, 277-279, 290, 293, 305-306, 312-314, 317, 320, 322, 326, 329, 340-343, 345, 348, 350-351, 356-357, 363, 367, 369-373, 383, 400-401, 414, 419, 421, 439, 446, 448, 465, 469, 473-474, 476, 492, 499, 504, 508, 511, 514-516, 519-520, 538, 551, 554, 575, 586, 590, 598, 605, 612, 614-616, 628, 636, 645, 650-651, 654, 661, 666, 669, 671, 673-675, 677, 681-682, 686-711, 713-728
    findings, 7, 17, 23-24, 26-27, 34, 36, 40, 42, 63, 148, 180, 184, 193, 246, 255, 263-267, 270-271, 274, 277, 342, 350-351, 673-674, 682, 697
    instrumentation, 24
    on writing, 718
    sampling, 10, 13, 20, 113, 126, 140, 264-266, 516, 716, 719, 721, 724
    scholarly, 30, 598, 616
    theory and, 13, 41-42, 688, 695, 703, 710

Research design, 233, 715
Research literature, 207, 448, 666, 674, 681, 713
Research methods, 139, 197, 246, 263-264, 692, 697, 715, 723-724
Research on teaching, 266
Research reports, 37, 82, 131, 142, 708
Research-based methods, 682
Resistance, 3, 15, 186, 359, 378, 419, 468-469, 475-476, 480, 624, 646, 688, 724
    end of, 15

Resolution, 616, 686Resource rooms, 261, 717Resources, 3, 42, 65, 69-70, 83, 89, 94, 127, 142,

182, 189, 202, 224, 233, 238, 243-244, 253,265, 295, 365, 442, 460, 467, 566, 574, 591,593, 633, 641, 660, 667

Respect, 5, 9, 15, 18, 33, 47-48, 52, 95, 100, 125-126,162, 176-177, 186, 236, 246, 248, 265, 278,281, 296, 299, 309, 312, 316, 356, 375, 377,391, 393-394, 396-400, 402-403, 406, 437,447, 449, 455, 465, 512, 514, 533, 541, 555,568, 582, 612, 667, 669, 678, 680, 682-684

Respondent conditioning, 4, 14-15, 44-45, 47, 49-51, 60, 66, 409, 543
Responding, 1-4, 6-7, 10-17, 20, 54, 60, 64, 66, 81, 89, 93, 96-98, 100, 103-104, 112, 120, 126-127, 147-148, 151, 157-159, 171, 174, 178, 181, 186-193, 195, 197-198, 200-202, 205-210, 212-213, 215-216, 218, 221-225, 227, 231, 236-237, 239-240, 242-244, 248, 252-254, 273, 278, 281, 283, 297, 300, 302-310, 312, 314-315, 322, 326-327, 329, 331-332, 334-341, 343, 348, 351-352, 355-359, 361, 365-366, 372, 392, 396, 401, 405, 409, 411-412, 414, 416, 421, 424-425, 445, 447, 452, 463, 470, 474-475, 477, 481-482, 484, 487, 492-497, 505-506, 508, 515, 534, 539, 555-556, 570, 587, 619, 624-625, 628, 632, 635-636, 639, 641, 644-646, 654, 662, 670, 683, 686, 690, 692, 694, 696, 702, 705, 707, 711, 713, 721, 727-728
  context for, 100
Response, 2-20, 27, 29-31, 37, 39, 42-43, 44, 46-66, 70, 74, 79, 81, 85-88, 90, 92-93, 95-98, 100-104, 106, 108-110, 112-115, 117-118, 120-121, 126-128, 131, 133-134, 136-137, 142, 144, 146, 148, 151, 153-154, 156-157, 159-160, 162, 165, 170, 173, 176, 183-184, 189-190, 192, 194, 197, 202-203, 208-209, 211-212, 216, 221, 227, 229-230, 233, 236, 238, 240, 242-243, 247, 251, 259, 262, 264, 267, 271, 276, 278-284, 286-288, 291-292, 295, 297, 300-310, 312-316, 319-322, 326-335, 337-341, 343, 344-362, 365-366, 368, 372-373, 374-375, 381-389, 391-393, 396-398, 401-407, 409-422, 424-425, 427-433, 435-438, 440-445, 447-453, 454-467, 469-470, 473-476, 478-480, 482-486, 491-497, 498-501, 504-509, 512, 516, 519, 521, 523, 537-557, 561, 566-567, 570, 572-573, 585-586, 588-591, 595-597, 606-614, 618-620, 622-625, 627-630, 632, 634-636, 638-640, 643-646, 648-650, 652-656, 659-663, 668, 670, 676-677, 683, 685-688, 691-695, 697, 699-703, 705-711, 713, 716-720, 722-728

Response cards, 114, 183, 267, 691, 693, 697, 706, 710, 713, 718
Response cost, 3, 15, 58-59, 142, 347, 374-375, 381-389, 397, 406, 482, 567, 570, 572-573, 610, 676, 686, 695, 703, 716, 726-727
Response cost procedure, 383, 386-388, 397, 570, 572, 610
Response generalization, 8-9, 12, 15, 271, 622-624, 628-630, 632, 634, 636, 654, 661-663
Response maintenance, 8-9, 11, 15, 622-625, 627-630, 634, 644, 646, 656, 659, 661-662, 719

Response patterns, 338
Response prevention, 362
Response prompts, 16, 133, 307, 310, 361, 418-420, 422, 425, 431-433, 445, 448, 452, 493-494, 573, 586, 595-597, 612, 618-619, 629, 650, 653-654, 660
Responses, 3-8, 10-17, 19-20, 29, 31, 33-37, 40, 46-47, 49, 51-54, 56, 60, 63, 65-66, 78-80, 85, 87, 89-90, 93, 96-98, 101-104, 106-109, 112, 115, 118-120, 125-126, 131, 135-137, 140-141, 147, 149-152, 155-159, 161-162, 176, 183, 190, 200, 211, 221, 226-234, 236, 238, 240, 243, 249, 258-259, 264, 273, 278-279, 281, 284-285, 288-289, 296, 298, 300-303, 305, 307, 309, 312-319, 322, 326-343, 345, 348-349, 351, 353-358, 360-362, 365-367, 372, 389, 392-393, 398-399, 409-414, 416, 419-420, 424-425, 429-432, 435-438, 440, 442, 444, 446, 449-453, 455-457, 459-460, 463, 465-466, 470, 473-477, 483, 486-487, 490-497, 501, 503-506, 509, 520, 538-539, 541-543, 545-551, 553, 555-557, 567-568, 570, 574, 584-586, 588-591, 596-597, 602-603, 605-606, 609, 612, 620, 623, 626, 628-630, 635-636, 644-645, 650, 652-655, 660-663, 689, 697-698, 707, 712, 720, 728
  selection, 16, 52, 63, 65-66, 78-79, 85, 211, 249, 278, 303, 333, 337, 416, 494, 497, 588, 636, 655, 698

Responsiveness, 9, 571, 724
Restatement, 572
Restating, 257
restitutional overcorrection, 13, 15, 344, 360, 372
Retention, 157, 267, 282, 670, 689-690, 713
Retrieval, 32
Reversal design, 1, 12, 15-16, 20, 183, 191, 196-208, 215, 218, 221-222, 229, 233, 238, 243-244, 310, 321, 362, 530, 646
Reversal designs, 196, 218, 221
Reversals, 197, 200, 207, 218, 242, 250
Reversibility, 17, 197, 292, 414
Revision, 63, 449, 573, 694
Rewards, 10, 59, 209, 254, 265, 290, 296, 489, 524, 559, 561-565, 567, 579, 581, 587, 589-590, 608-609, 611, 615-616, 644-646, 648, 662, 687, 695

Rewriting, 156
Right to Effective Behavioral Treatment, 668-669, 684, 725
Ripple effect, 630
Risk management, 678, 691
risks, 95, 270, 369, 513, 676-678
Robotics, 465-467, 719
Role playing, 129, 626
Role-play, 652
Role-playing, 625, 641, 650, 652
Roles, 197, 209, 218, 540, 543, 584, 642, 680
Roots, 516
Rounding, 348
Routines, 8, 40, 47, 60, 455, 514, 526, 531, 651
Rubber band, 362, 596, 609
Rubric, 110
Rules, 16, 25, 93, 96, 115, 118, 120-121, 164, 169, 182-183, 246, 249, 252, 259, 280, 307, 333, 368, 380, 382, 385-386, 388, 437, 466, 492, 496, 566-567, 570-572, 576-577, 582, 588, 606, 616, 618, 630, 666, 668, 677, 679, 683-684, 691, 709

Rush, 298, 331, 521, 699, 715

S

Sadness, 547
Safety, 38-39, 42, 82, 86, 167, 186, 233, 249, 266, 370, 458-459, 461, 486, 614, 630, 659, 665, 667, 678, 680, 684, 688, 692, 697, 701, 712, 728
  of self, 233, 659, 697, 728
  plans, 370
  precautions, 370

Safety issues, 678
Salaries, 83
Saliva, 353
Samples, 110-111, 114, 117, 127, 154, 259, 651, 699
Sanchez, S., 727
SAT, 104, 204, 212, 287, 357, 539, 562, 584, 598, 629
Satiation, 3, 5, 15, 44, 59, 66, 155, 284-285, 303, 306, 309, 394, 398-399, 406, 432, 552, 649, 704
Satisfaction, 81, 258, 325, 576
Scale, 3, 46, 72, 75, 128, 149, 151, 156, 159, 165-170, 172, 176, 209, 218, 232-234, 243, 258, 262, 264, 268, 447, 456, 513, 520-521, 570, 586, 606, 618, 692, 695, 703, 706, 714, 720, 722, 728
Scales, 9, 72, 126, 128, 143, 159, 166, 169, 176, 258, 274, 519-521, 533, 589, 688, 691
  ordered, 533
Scaling, 151-152, 157, 160, 164, 166-167, 169, 267

  dimensional, 151
Scanners, 119, 129
Scanning, 128
Scapegoats, 576
Scatterplot, 15, 146, 162, 164, 176, 519, 521
Scatterplots, 148, 162, 164, 516, 519, 533
Schedules, 1, 3-4, 6, 10-14, 28, 40, 57, 108, 110, 127, 155-156, 158-159, 212, 227, 266, 276, 295-296, 300-301, 308, 310, 311, 324-343, 344, 351-352, 355, 357-358, 366, 368, 374, 418, 421, 431, 465, 467, 475, 480, 488-490, 493-494, 496-497, 498, 501, 503-504, 507, 524, 568, 575-576, 633, 644-645, 648, 653-654, 662, 680, 686, 688, 690-693, 695-697, 702, 704, 707-710, 713, 716, 724-726
  of reinforcement, 1, 3-4, 6, 10-14, 28, 40, 155-156, 212, 276, 296, 300-301, 308, 310, 324-343, 355, 357-358, 366, 368, 465, 475, 480, 489-490, 493-494, 496-497, 498, 501, 503-504, 507, 524, 568, 575, 633, 644-645, 648, 654, 662, 686, 688, 693, 696, 702, 704, 708, 724-725

Scheduling, 31, 126, 139, 185, 623
Schemes, 633
Schneider, M., 718
School, 36, 72, 78, 81, 83, 88, 94, 99, 104, 110, 113, 125, 131, 147, 165-166, 168, 180, 183-184, 200, 202-203, 209-210, 217, 227, 233-234, 237, 240, 246, 248, 254, 261, 266, 277, 281, 286, 289, 291, 293, 296-298, 313-315, 330, 335, 337, 350, 353, 359, 376, 380, 382-383, 385, 388, 427, 457, 462, 488-489, 507, 516, 520, 522, 525, 550, 559-561, 563-567, 570-571, 574, 576-578, 580-581, 584, 590-592, 594, 599-600, 602, 604, 606, 611, 613, 617-618, 623, 626-627, 631-632, 635-637, 639, 641, 645, 649, 652, 656, 659-660, 665, 667, 670-671, 676-679, 682-683, 685-689, 692, 695, 697-707, 709-710, 712-714, 716, 718-723, 726-728
School activities, 113
School board, 380
School counselor, 599
School day, 209, 227, 293, 522, 559, 599-600, 606
School district, 246, 385, 665, 667, 677
School districts, 94
School lunch, 565
School psychologist, 261, 701
School psychologists, 671, 701, 713
Schooling, 60, 595
Schools, 727
  descriptions of, 33
Science, 2-3, 7, 11, 16, 22-31, 33-36, 39-43, 47, 62-63, 123, 125, 143, 179-183, 194, 229, 246, 248-249, 264, 267, 283, 342, 345, 371, 455, 465-466, 537-538, 543, 577, 585, 591, 682, 686, 688, 691, 695, 697-698, 700, 703-706, 716, 719, 721-722, 724, 727-728

  in the curriculum, 125
  new, 3, 7, 26-27, 29-31, 34, 36, 63, 183, 229, 264, 267, 455, 465-466, 585, 686, 688, 691, 695, 697-698, 700, 704-706, 716, 719, 721, 724, 727
Science instruction, 697
Sciences, 24-25, 30, 52, 181, 184, 246, 264, 647, 658
Scientific knowledge, 23, 25-26, 29, 85, 93, 120, 124-125, 672
Scientific method, 25, 27, 591, 682-683, 721
Scientific research, 28, 94, 179, 181, 248, 713, 720
Scientists, 23-27, 33, 42, 47, 50, 85, 93, 110, 120, 148, 179, 181, 183, 189, 269-270, 371
Scope, 34, 36, 41, 69, 89, 247, 257, 371, 437, 556, 559, 633, 659, 662
Scores, 48, 73, 84, 86, 102, 132, 140, 144, 169, 171, 180, 183, 209-211, 231, 246-248, 262-263, 520, 561, 604, 607, 617, 700, 718

Scoring, 73, 87, 116-117, 129, 132-133, 233, 607
Screening, 69, 89, 148, 233, 521, 703, 706
screws, 104, 404-405
Script, 141, 227-229, 256, 274, 421, 706, 719
Sculpting, 103
Search, 33, 38, 60, 179, 193, 236, 247, 593, 607
Seating, 48, 75, 294, 417, 499, 512
Seating arrangements, 48, 75, 512
Seatwork, 109, 111, 329, 457, 477-478, 492, 579, 604-605, 643, 646-647, 650-651, 653, 690, 708
Second Step, 573
Secondary school, 710
Secondary students, 83, 250, 260, 262-263, 592, 650, 685, 691, 723
Section, 27, 45, 78, 82, 164, 189, 246, 256, 258, 274, 278, 289, 295, 299, 317, 345, 356, 391, 397, 412, 428, 438, 441-442, 493, 554, 568, 571, 587, 605, 607

Security, 26, 404, 615
Segmentation, 78
Seizures, 240
Self, 9, 11, 16, 26, 36, 38, 64, 71-72, 75, 79-81, 83, 87, 90, 94, 96, 117-119, 125, 149, 151, 153, 164-165, 167, 170, 179, 182, 184, 186, 189, 200, 207, 209, 212, 225, 227, 233-234, 237-239, 254, 256, 259, 262, 287, 289, 296, 316-317, 320, 326, 336-337, 341-342, 351-354, 357, 359-360, 362-365, 367-369, 373, 378, 380, 403, 411, 418, 437-438, 446-448, 452, 469, 471-472, 474-475, 478-480, 485-486, 489, 492, 494, 497, 501-502, 504, 506, 508-509, 511, 513, 526-528, 534, 543, 548, 558, 562, 564-565, 569-570, 575-576, 579-581, 583-620, 625, 630, 633-634, 636-637, 642, 650-654, 656-659, 663, 665-668, 670, 674, 680, 683-684, 685-700, 702-728
  constructing, 149, 151, 153, 164-165, 167, 170, 437, 452, 614, 625, 702

  discovering, 182
Self-assessment, 16, 418, 601, 608, 651
Self-awareness, 9, 614
Self-concept, 81, 254
Self-control, 16, 336-337, 583, 585-587, 591-592, 594, 607, 612-613, 617, 619, 654, 688, 690, 694, 698, 713-714, 716-718, 720, 722, 724, 726
Self-control training, 336-337, 694
Self-determination, 586, 680, 685, 696, 702, 717
Self-determination skills, 685, 702
Self-directed learning, 685, 702
self-esteem, 589
self-evaluation, 16, 259, 583, 601-602, 606, 608, 612, 616, 618-619, 698, 704-705, 717, 719, 723
Self-injurious behavior, 96, 118-119, 149, 153, 170, 189, 200, 209, 262, 287, 342, 351-354, 362-363, 367-368, 411, 471, 478, 486, 489, 492, 501-502, 511, 513, 665-667, 692-694, 696, 703, 706-708, 710, 713, 718-719, 721-723, 727

Self-injurious behaviors, 359, 364, 472, 506
Self-instruction, 16, 583, 596, 612-613, 620, 702, 706, 718-719
Self-instructional training, 636, 697, 703, 715
Self-knowledge, 585
Self-management, 16, 75, 569, 575, 581, 583-620, 630, 634, 652, 654, 656-657, 659, 663, 687, 690-692, 702-703, 705-706, 708-710, 712-714, 716
  generalization and, 589-590, 614, 619, 630, 634, 652, 654, 656-657, 659, 663, 692, 702, 705-706
self-monitoring, 16, 71, 75, 526, 528, 579, 581, 583, 590, 592-593, 598-610, 612, 615, 618-620, 650, 656-658, 683, 689, 692-693, 699-700, 705, 707-708, 710, 713, 719, 724, 727-728
Self-monitoring skills, 605, 692
Self-monitoring techniques, 593, 603
self-observation, 16, 598
Self-recording, 184, 186, 526, 570, 598-601, 603-604, 606, 608, 650, 657, 689-690, 692, 705, 707-708, 712, 724
Self-reinforcement, 607-608, 612-613, 616, 620, 688, 691, 698, 700

Self-stimulatory behavior, 357, 380, 508, 697, 717
self-talk, 548, 593
Semantics, 537
Sensation, 609
senses, 47
Sensitivity, 193, 248, 341-342, 365, 456, 667, 709, 713
Sensory reinforcement, 521, 717, 725
Sensory systems, 298
Sentences, 58, 250, 524, 537, 539, 561, 608, 623, 701
Separation, 209, 216, 378, 494, 692
Sequence, 1-3, 7, 9, 11-12, 16-17, 20, 25, 48, 54, 69-70, 73-74, 97, 111-112, 127, 132-133, 167, 196, 200, 202, 208-209, 212, 215-216, 218, 226, 229-232, 236-237, 243, 248-250, 256, 267, 273-274, 286, 298-299, 329-330, 339, 361, 365, 367, 372, 401, 403-404, 407, 418-419, 430, 432-433, 435-438, 440-450, 452-453, 458-459, 463, 488, 498, 501, 503-506, 509, 514, 555, 559, 574, 579, 591, 604, 612-614, 625, 641, 652, 659, 693
Server, 403
Setting, 5, 7-16, 18, 37, 48, 51, 61, 65, 74, 80, 83, 89, 91, 101, 117, 126, 131, 134-135, 140, 182, 184-187, 189, 193, 195, 197, 218, 221, 225-227, 229, 233, 236-238, 240, 242-243, 249, 252-253, 256, 263, 265, 267, 271, 273-274, 282, 305, 337-340, 350, 361, 375-376, 378, 381, 383, 388, 391, 416, 442, 445, 448-450, 453, 457, 469, 484, 491-492, 497, 503, 507, 512-514, 521, 562-563, 568-569, 571, 574, 576-577, 587, 589, 591, 593-594, 596-598, 601, 605, 608, 611, 613, 616, 619, 622-632, 634-637, 639-646, 648-650, 652-654, 659-663, 689-691, 694-695, 697, 700, 707-708, 710-711, 714, 717, 719
Severe disabilities, 83, 250, 260, 262, 296-297, 316, 359, 418, 439, 445-448, 455, 507, 660, 665, 690-692, 698, 721-722

Sex, 395-396, 399
Sexual behavior, 394
Sexual relations, 666
Shapes, 455, 487
Shaping procedure, 456-458, 462, 464, 467, 696
Shared control, 592
Sharing, 47, 132, 412-413, 450, 455, 466, 615, 629, 660, 678, 726
Shock, 19, 60, 315, 349-350, 355, 401, 410, 609, 665, 690, 708, 719, 725, 727
Siblings, 71, 297
Sidman, Murray, 182, 294, 308
Sight words, 211, 307, 326, 688, 701, 720
Sign language, 8, 13, 16, 213-215, 430, 541, 545, 551, 557, 690, 724
Signals, 16, 36, 133, 143, 188, 301, 350, 376, 382, 404, 598, 604
Significance, 23, 29, 38, 69, 75-77, 90, 148, 150, 166, 169, 176, 237, 247, 258, 262, 267, 269-272, 274, 277, 588, 615, 668

  statistical, 38, 148, 247, 269-270, 274
Significant others, 3, 36, 69, 71-72, 76, 82-84, 89-90, 187, 270-271, 295-297, 300, 310, 370, 476, 478, 508, 574, 618, 650, 658, 660, 663
Signifying, 62, 197
Signs, 86, 95, 152, 227, 249, 263, 333, 395, 418, 422, 489, 506, 523, 531, 540, 545, 584, 641, 667, 691, 724-725

Silence, 281, 461
Silent reading, 647, 689
Simplification, 256
Simulations, 465
Simultaneous prompting, 720
Singing, 287, 541, 546, 618
Single-subject studies, 265
Size, 7, 17, 39-40, 48, 61, 125, 160, 217, 242, 265, 326-327, 329, 411-412, 419, 421, 450, 463, 540, 548, 557, 584, 588, 639, 642
Skill acquisition, 97, 231, 680, 703
Skill development, 82-84, 90, 97, 465, 482, 711
Skill mastery, 670, 682
Skill training, 685
Skill(s), 224
Skills, 3, 24, 38, 40-41, 47, 53, 63, 69-70, 72-73, 75-76, 78-80, 82-83, 86, 89-90, 101, 104, 109, 125, 127, 129, 141, 164, 170, 181, 190, 213-215, 224, 227, 229, 231-232, 238, 243, 246, 253, 256, 258-259, 261, 263, 266, 296, 307, 315-316, 322, 337, 360-362, 367, 369, 378, 416-418, 425, 427-430, 432-433, 436-438, 441-443, 447, 451-452, 455, 460, 463, 465, 484, 499, 507, 516, 520-521, 524, 531, 541, 551, 556, 559, 562, 566, 571-572, 577-578, 580, 584, 589-594, 604-605, 612, 619, 623, 625-627, 632-636, 639-642, 646, 649, 653-656, 658-661, 663, 666-667, 669-671, 674-675, 680-682, 685, 689-692, 694, 697-699, 701-706, 711-716, 718-719, 721, 724-725
  attending, 70, 417-418, 429-430, 571, 674, 690
  conceptualization, 465
  fine motor, 83, 429-430
  practicing, 129, 591, 627, 636, 666, 671, 674, 682
  prosocial, 721
  receiving, 429, 441, 572, 639, 641-642, 659, 669
  retention of, 689
  self-help, 83, 633, 642, 654
  sending, 315
  speaking, 296

Sleep, 48, 117, 320, 384, 387, 394-396, 399, 406, 470-471, 500, 520, 555, 598, 686, 714, 716
Slides, 625, 641
Slips, 291, 330, 382, 549, 574, 600-601
Slope, 5, 17, 98, 157, 160, 175-176, 332, 411
Small group, 18, 46, 109, 339, 576, 581, 594, 641, 662, 725
Small groups, 337, 574, 647, 694
Small-group instruction, 339, 477-478
SMART, 26, 87, 117
smiles, 48, 136, 289, 308, 362, 402, 660
Smith, S., 722
Snacks, 46, 227, 290, 489, 512, 521, 571, 646
Social environment, 2, 52, 551-552
Social history, 402
Social interaction, 82, 141, 206, 284, 313, 322, 377, 402, 407, 517-518, 537, 539-540, 550, 556, 697, 706, 723
  peer interaction, 556
Social issues, 29, 713
Social learning, 32, 712
Social needs, 368
Social problems, 698
Social psychology, 471, 698, 712, 727
social reinforcement, 202-204, 206, 233, 277, 287, 293, 316, 333, 521, 548, 626, 685, 700, 722, 726
Social reinforcers, 13, 293, 309, 376, 511, 588
Social relationships, 552
Social sciences, 184, 246, 264
Social skill instruction, 577


Social skills, 3, 38, 76, 79, 129, 258, 337, 416, 577-578, 625, 649, 680-681, 685, 694, 699, 705, 711
Social skills instruction, 577
Social skills training, 258, 699
Social studies, 261, 267, 297, 580, 685
Social validity, 16, 38, 68, 79, 89, 91, 256-259, 261-263, 267, 270, 274, 370, 387, 448, 508, 609, 616-617, 665, 667, 687, 696, 700, 720, 727-728
Social values, 680
Social workers, 671, 713
Socialization, 499
Sociology, 181
Software, 110, 116, 119, 121, 168-169, 465, 591-594, 723
  ability to evaluate, 591

Software programs, 116, 168-169, 591-592
Solid foods, 317
Solitary play, 99
Solutions, 34, 39, 42, 202, 667, 683, 719
Songs, 541, 555
Sony, 201
Sorting, 108, 229, 336-337, 649
Sound, 1, 4, 27, 38, 45, 48-51, 54-55, 59, 78, 95, 99, 102, 104, 113, 185, 189, 195, 201, 208, 212, 217, 237, 269, 281, 284-285, 287-288, 349, 371, 398, 403-404, 409, 427, 432, 444, 447, 453, 455, 458, 460, 466, 479, 483, 530, 543, 546, 553, 613, 635, 652, 667
Sounds, 33, 37, 51, 54, 63, 78, 102, 112, 281, 284, 287, 368, 397, 409-410, 458, 471-472, 537, 546, 554-555, 635, 640, 728
  speech, 287, 458, 471, 537, 546
Space, 2, 8, 45-46, 65, 97, 112, 115, 152, 160-161, 167-168, 185, 272, 353, 378, 380, 496, 585, 680, 682

Spanish language, 248
Spanking, 356
Speaking, 32, 59-60, 102, 204, 242, 281, 283, 286, 296, 347, 409, 456, 458, 462, 548, 611, 645
Special education, 35, 73-75, 78, 183, 267, 292, 320-321, 463, 526, 573-574, 590-591, 594, 604, 608, 627, 649-650, 656, 677, 686, 693, 697-698, 701-702, 708-712, 715-716, 719, 722, 724, 727-728
Special education programs, 594
Special education services, 83, 608
Special education students, 78, 267, 292, 320-321, 591, 650, 711
Special education teacher, 463, 573, 627, 649-650
Special education teachers, 75
Special educators, 79
Special needs, 714
Specific hypotheses, 522
Speech, 79, 104, 213, 287, 360-361, 447, 455-456, 458, 463, 471, 477-478, 507, 511, 524, 537, 539, 546, 551, 553, 593, 610, 627-628, 691, 703, 707, 712, 720
speed, 5, 62-63, 123, 129, 157, 164, 193, 212, 293, 345, 348, 362, 436, 453, 460, 656, 670, 725
Spelling, 96, 102, 116, 131-132, 141, 154, 156-157, 169, 183, 209-211, 267, 278, 360, 427, 432-433, 539, 542, 561, 580, 605-606, 645, 650, 691, 711-714, 719, 722, 724

  transitional, 724
Spelling tests, 169, 209
Spillover, 630, 723
Spontaneous recovery, 7, 17, 468, 474-475, 480
Sports, 42, 80, 286, 306, 571
Spreads, 38, 589, 623
Stability, 94, 170, 172, 174, 177, 189-190, 193, 233, 237, 242-243, 267, 303, 520, 692
Staff, 71-72, 83, 90, 200, 233, 248, 259, 261, 308, 351, 370, 457, 462, 492, 524, 531-533, 635, 649-650, 656, 660, 675, 677, 679-680, 709, 719-720
  credentials, 675
Staffing, 520
Stages, 18, 131, 254, 307-308, 325, 343, 431, 456, 460, 464, 467, 507, 516, 547, 648, 690, 700
Stages of learning, 307-308, 325, 343
Standard setting, 700
Standard units, 8, 14, 99, 166
  important, 99
Standardized assessment, 259, 274
Standardized assessments, 551
Standardized tests, 69, 73, 262
Standards, 7, 68, 89, 95, 103, 120, 131, 255, 373, 519, 590, 664-665, 667-669, 671, 674-675, 683-684, 685, 694, 696, 706, 725

Stanford University, 728
Starches, 131
State and local, 571
States, 4, 13, 27, 29, 36-37, 45-46, 48, 87, 125, 131, 190, 291, 296, 309, 322, 334-335, 400, 459, 466, 519-520, 540, 548, 554, 568, 588, 616, 640, 665, 671, 674-678
  court system, 677
  courts, 676
Statistical methods, 269-270, 696
Statistics, 135, 148, 235, 248, 699
Statutes, 370, 677
Stereotypic behavior, 342, 360, 494, 685, 692, 707
Stimulus, 1-19, 27, 29-30, 40, 42, 44-51, 53-66, 78, 90, 96, 98, 100, 110, 118, 120, 153, 155, 203, 208, 213, 250, 276, 278-281, 283-285, 287-289, 295-307, 309-310, 312-316, 322, 337-339, 344-373, 374-389, 390-393, 395-407, 408-425, 427, 432, 435-437, 444-446, 449-453, 457, 459-460, 466, 469, 471-472, 479, 484, 486, 492, 499-500, 508, 512, 516, 520, 530, 538, 540-544, 546-557, 566, 585-586, 595, 597-598, 609, 614, 619, 622, 624-625, 627, 629-630, 632, 634-636, 638-642, 648, 652, 654, 659, 661-663, 680, 686, 689, 691-694, 699, 701, 703-706, 708-709, 711, 714-715, 717-720, 724, 726-728
Stimulus control, 2, 4, 6-7, 11-13, 17, 44, 55, 61, 66, 281, 307, 348, 408-425, 445, 499-500, 508, 520, 538, 543, 546-547, 549-557, 566, 586, 597-598, 627, 629-630, 632, 639-641, 654, 661-662, 689, 694, 705, 708-709, 711, 717, 720, 724, 726-727

Stimulus events, 47-48, 60-61, 65, 96, 100, 350
Stimulus fading, 40, 153, 421-425, 459, 466, 486, 492, 686, 704
Stimulus generalization, 4, 8, 17, 408, 410-412, 414, 424, 546, 548, 624, 640, 692, 706
Stimulus shaping, 422-424, 703
Stomach, 46-47, 49, 547
Stop, 55, 74, 86, 109, 111, 249, 277, 283, 315, 355-357, 360-362, 386, 388, 432-433, 469, 473, 489, 521, 525, 545, 597, 599, 609, 644, 649, 692, 697, 699-700, 705, 709, 718, 720, 725
Storage, 32, 537
Stories, 166, 541, 565, 721
Story writing, 687
Strategies, 28, 39, 68, 81, 83, 94, 125, 174, 178-195, 246, 248, 260-261, 318, 320, 361, 380, 434, 442, 499-500, 512, 520, 533, 552, 562, 573, 580, 583, 592-593, 610, 612, 623, 633-634, 650, 654, 659, 661-662, 665, 682, 685, 701, 704-705, 707, 711-712, 715, 728
  deliberate, 361
  intended, 81

StrategyTools, 592
Stress, 350, 412
Strips, 240-241, 570
Strokes, 8, 85, 98, 104, 248, 291
Structural linguistics, 537
Structuralism, 32-33
Structure, 33, 53, 59-60, 63, 109, 202, 378, 537-538, 556, 576, 592, 678, 689, 695, 702, 709
Structured interviews, 9, 519, 533
Student achievement, 670
Student behavior, 87, 130, 277, 340, 504, 600, 710, 725
Student input, 594
Student performance, 34, 73, 323, 570, 698, 710
Student progress, 419
Student success, 670
Students, 3, 23, 25, 31, 35, 39-40, 43, 48, 55, 57, 61, 65, 70-71, 73-74, 78, 80-81, 83, 87, 93-97, 101-104, 108, 111-114, 116, 127, 131, 133, 141, 152, 154, 156, 180-181, 183-185, 190, 198-199, 202, 206, 209-214, 217, 221, 229-232, 237-238, 242-243, 247-250, 254, 259-263, 266-269, 277, 282-283, 285, 291-294, 296, 306-307, 312-313, 316, 319-321, 329-333, 335, 356-357, 367, 376-378, 380, 382, 384-388, 413, 417-418, 420, 436, 438, 440-445, 447, 450, 455-458, 462, 465, 469, 477-479, 483-485, 488, 499-500, 507, 520, 525, 527-528, 562, 569-573, 575-581, 589-594, 598-600, 603-609, 611, 613, 615-616, 619, 625-627, 631-632, 636-643, 646-654, 656-657, 659-660, 670, 672, 675, 680, 685-686, 688-695, 697-699, 701-703, 705-708, 710-714, 716-717, 719-724, 726-728
  antisocial, 356, 722
  conferences with, 599
  differences between, 209, 230, 627, 705
  distinctions, 96, 579
  exceptional, 438, 443, 485, 594, 605, 685, 690, 692-694, 697, 701-702, 705, 707-708, 710, 712, 714, 716-717, 719-722, 724, 726-728
  in regular classrooms, 650
  reluctant, 649
  self-evaluations, 606
Students with disabilities, 83, 114, 443-444, 576, 626, 631, 637, 640, 651, 727

Studies, 23-24, 28, 34, 36, 39, 42, 75, 85, 90, 94-95, 98, 104, 113, 127, 133, 139, 170-171, 182-185, 191, 194, 197, 200, 207-208, 224, 227, 236-237, 249, 253, 261, 265-267, 271-272, 277, 289, 291-293, 295, 297, 306, 318, 325, 351, 353, 355-356, 359, 361-362, 365-367, 385, 419, 429, 447, 449, 453, 463, 483, 515, 519-520, 580, 590, 605-606, 608-609, 613, 631, 636, 642, 646, 654, 666, 682, 685, 689, 705, 724
  D, 39, 94, 98, 170-171, 183, 237, 261, 277, 365, 429, 449, 453, 580, 609, 685, 689, 705, 724
  G, 23-24, 34, 42, 75, 90, 94, 98, 104, 113, 133, 170-171, 182-184, 191, 197, 200, 207-208, 224, 227, 236, 249, 253, 261, 265-267, 271, 277, 289, 291-293, 295, 297, 351, 353, 355-356, 359, 365-366, 385, 419, 429, 447, 463, 483, 519-520, 580, 590, 605-606, 608, 613, 631, 636, 642, 646, 654, 666, 682, 685, 689, 705, 724
Style, 103, 107, 264, 450, 499, 543, 549, 639-640, 654, 685, 702, 706
Subtraction, 56, 63, 102, 170, 231, 422-424, 528, 603, 605, 627, 694, 703
Successive approximation, 16, 229, 243, 454, 457, 464, 553
Suggestions, 62, 73, 387, 603, 615, 617, 620, 630, 650

  delivering, 62
Superego, 9, 32
Supervision, 24, 207, 370, 389, 437, 579, 591, 643, 654, 664-665, 672, 680, 694-695, 714
Support, 9, 29, 58, 94-95, 180-181, 206, 224, 245, 256-257, 259, 261, 266, 287, 349, 371, 431, 477, 499, 520, 593, 595, 608, 614, 616, 643, 665, 667-668, 670, 675, 677, 680-681, 683-684, 691-692, 717-718, 721-722
Supporting, 68, 75, 191, 203, 559, 665, 716
Surveys, 71, 90, 296, 391, 714
Susceptible, 19, 49, 51-52, 202, 218, 254, 289, 309, 347, 349
Suspension, 522, 524
Sustained silent reading, 647, 689
Sustained Silent Reading (SSR), 647, 689
Swallowing, 445-446
Swanson, 282, 723
Switch, 110, 158, 199, 215, 284, 300, 302, 314, 345, 376, 388, 447, 450-451, 472, 499, 530-531, 655, 702
Symbols, 150, 159, 161, 167-169, 177, 234, 519, 566, 637, 720

Symmetry, 14, 17-18, 408, 414, 425
Synonyms, 282, 289, 349-350
Syntax, 35, 537, 546, 691
Synthesis, 695, 705
System, 3, 10, 12, 18, 32-33, 50, 64, 83, 110, 116-117, 119, 124-125, 128-131, 133-134, 143-144, 150, 162, 170, 182, 185, 191, 193, 197, 200, 214, 249, 253-254, 267-268, 290, 345, 362-363, 386, 445, 460, 465-466, 494, 558, 566, 568-575, 580-582, 591, 599, 608-609, 612, 659, 665, 667, 677, 686-687, 689-690, 698, 701, 708-709, 711-712, 717-719, 722-724
Systematic instruction, 591, 629, 644
Systematic replication, 15, 17, 245, 265-266, 271-272, 274, 358, 384, 689, 697, 714
Systems, 9-10, 33-34, 37, 47, 68, 101, 109, 119-121, 128, 139, 143, 245-246, 298, 409, 447, 506, 537, 558, 568-569, 574-575, 614, 664, 678, 682, 688, 691, 696, 700, 704, 709, 714, 719
  breakthrough, 682
  dynamic, 47, 128, 246
  human, 33-34, 119, 128, 143, 537, 688, 704, 709, 714

T

Tables, 95, 353, 449, 529, 601, 639-640, 642, 652
Tactics, 2, 16, 28, 31, 37, 39-40, 43, 58-60, 63, 164, 170, 174, 179, 185, 187, 189, 191, 195, 197, 199, 206, 215, 221, 233, 235-236, 238, 242, 246, 248-252, 269, 272-273, 304, 308, 345, 347, 356, 368, 500, 507-508, 524, 537, 579, 584-586, 588, 590, 592, 595, 607, 612, 615, 617-620, 623-624, 631, 633-634, 640, 643, 652, 654, 656, 658, 660-663, 704, 720
Talking, 1-2, 53, 55, 79, 81, 93-94, 109, 117, 204, 206, 259, 282, 293, 335, 342, 383-384, 409, 458, 485, 494, 521, 525-526, 548, 566, 568, 576, 579, 585, 601, 604, 611, 613, 620, 627, 634, 651, 699, 717
Tangible reinforcers, 291, 431, 433, 552
Tangles, 683
Tantrums, 37, 99, 316, 470-471, 473-474, 476, 478-479, 492, 511-513, 516, 518-519, 522-526, 551-552, 561
tape recorders, 129
Tardiness, 485
Target behaviors, 16, 38, 54, 68-91, 93, 99, 108-109, 112-113, 127, 129, 188, 200, 206, 218, 238, 242, 246, 254, 265, 270-271, 290, 304, 379, 386, 568, 570-573, 579-582, 589, 596, 605-606, 615-616, 627, 631-633, 641, 645-646, 651, 662, 674, 681, 687, 689, 700, 725

TASH, 686
Task analysis, 2, 8, 18, 64, 68, 229, 256, 259, 434-435, 437-446, 449-450, 452-453, 459, 556, 625, 665
Task performance, 307, 320, 322, 562
Tasks, 9, 16, 24, 38, 47, 65, 73, 78, 81, 93, 108, 117, 121, 129-130, 198-199, 259, 262, 284, 296, 306, 316, 320, 326, 329-330, 380, 437, 442, 446-447, 452, 459, 465, 471, 486-487, 500, 504-506, 508-509, 511-512, 514, 525, 527-529, 531, 533, 562-565, 573, 587-590, 592-593, 597-598, 601, 603-605, 612-613, 616, 619, 646, 650, 654, 671, 711, 725-728
Taxes, 257
Taxonomy, 727
Teacher, 6, 23-24, 34, 40, 48, 62, 70-73, 75, 78, 80-81, 83, 86-87, 89, 94-95, 97-100, 102-103, 108-110, 113, 115-118, 130, 132, 140-141, 154, 156, 181-184, 187, 198, 201, 203-206, 210, 215, 227-229, 233, 236-237, 254-256, 259, 261-262, 277-278, 280, 282, 291, 293-297, 301, 303, 305-308, 316, 325, 327-331, 333, 339-340, 345-347, 356-357, 362, 375-376, 378-383, 385, 387-388, 410, 412-413, 417-420, 427, 429-430, 436-437, 440-445, 456, 458, 460, 462-463, 469, 477-478, 483, 485, 488-489, 492-494, 499-500, 502, 504-506, 509, 511, 513, 518, 520-521, 524-529, 541, 554-555, 561, 566-567, 570-571, 573-580, 586, 589-591, 593-594, 599-605, 607-609, 612-613, 620, 623, 625, 627, 629-630, 632, 634-635, 639, 641-643, 646-653, 655, 658-660, 665, 668, 672, 678, 683, 685, 690-695, 698-699, 701, 705-708, 710-712, 716, 720, 723, 725, 727

Teacher aides, 71
Teacher education, 692, 698, 716
Teacher training, 236
Teachers, 9, 23-24, 36, 39, 42, 46, 58, 70-73, 75, 80, 87, 94-95, 98, 108, 117, 126, 162, 187, 203-204, 206-208, 215, 217, 224, 227, 233, 236-238, 244, 258-259, 261, 264, 268, 293-294, 296, 306, 318, 325, 329-330, 333, 342, 356, 364, 366, 378-379, 381, 383, 388, 409-410, 417-418, 430, 442, 455, 459, 463, 473-475, 477-480, 482, 491, 499, 506, 519, 522, 524, 526-527, 529, 533, 553, 561-564, 572, 576, 580, 591, 593-594, 599, 604, 617, 625, 631, 634, 636, 640-643, 647, 649, 651-652, 656, 660, 668, 670, 681-682, 685-686, 699, 701, 704, 708-709, 712, 721
  caring, 670
  educators, 75, 95, 381, 591
  experience of, 36
  general education, 259, 261, 526, 576, 636, 649, 651, 686, 708, 712
  head, 87, 98, 224, 418, 474, 478
  highly qualified, 94
  individualized instruction, 670
  participation of, 562
  Spanish-speaking, 409
  substitute, 649
Teaching, 2-3, 8, 12, 18, 28, 34, 36, 39, 52, 69, 73, 76, 78-80, 82-83, 94-95, 102-103, 141, 148, 156, 160-163, 173, 214, 216, 228, 234, 237, 250, 256, 266, 337, 345, 362, 367, 370, 402, 407, 412-414, 421-422, 429, 432-433, 436-443, 445, 448, 455, 458, 462, 465, 473, 485, 494, 507-508, 512, 522, 524, 533-534, 541, 547, 551-556, 584, 587, 589, 591, 593, 605-607, 616-617, 619, 622-623, 626, 630-636, 638-643, 650-653, 657, 659-660, 662-663, 667, 670, 672, 680-681, 685, 687-703, 705-716, 718-720, 722-728

Teaching, 643, 708, 727
Teaching Children to Read, 713
Teaching skills, 507, 670
Teaching styles, 237
Teaching team, 660
Teaching Tolerance, 720
Teams, 209, 267, 578-580, 590
Teasing, 520, 608
Techniques, 9, 26, 34, 37, 59, 63, 83, 92, 133, 137, 144, 152, 165, 197, 204, 206, 208, 218, 221, 243, 255-256, 278, 295, 304, 366, 369, 371, 379, 419, 435, 448, 482, 487, 504, 549, 553, 566, 569, 575, 585, 587, 589, 591-595, 603, 614, 617, 624, 631, 634, 644, 674, 686, 688, 692, 694-695, 702, 710, 715, 719, 726
Technology, 23-24, 27, 29, 32, 37, 39-43, 64, 75, 85, 94, 119, 142, 148, 164, 179, 181-182, 193-194, 236, 257, 263-264, 269-272, 274, 345-346, 371, 467, 593, 617, 623, 659, 661-662, 685, 689, 695-696, 698-699, 703-704, 706-707, 714-715, 719, 722-724, 726
  assistive, 593
  computers, 119, 695, 723-724

Telephones, 332
Television, 297-298, 312, 376, 459, 464, 511, 559, 571, 597-598, 649
temperature, 7, 48-49, 59, 95, 126, 185, 312, 315, 349, 394-396, 400, 406, 643
Terminal behaviors, 80
Termination, 6, 12, 56-57, 59, 66, 94, 101, 110, 120, 192, 278, 294, 305, 312-314, 316, 322, 347, 370, 376, 380, 397, 401-402, 404, 512, 522-524, 568, 657, 672, 678-679

Terminology, 58, 283, 315, 404, 586, 623-624, 709
Test, 8, 17, 24, 27, 37-38, 46, 48, 69, 73, 96, 101, 113, 116, 125, 132, 148, 209-210, 224, 231, 233, 240-241, 246, 254, 263, 269-270, 294-295, 300, 302, 304, 307, 411, 413, 415, 425, 437-439, 443-446, 514, 521-522, 524-526, 528, 534, 551, 570, 572-573, 579-580, 590, 599, 604, 632, 659, 667, 682, 689, 694, 699, 710, 724, 727
Test items, 101
Test scores, 73
Testimonials, 23, 95, 673
testing, 3, 7, 43, 50, 88-89, 235, 258-259, 267, 274, 411, 414, 425, 522, 525, 528-529, 532, 550-551, 582, 676, 708, 720, 727
  field, 274, 582, 720
Tests, 34, 42, 69-70, 73, 81, 89, 148, 169, 183, 209, 235, 240-241, 246, 261-263, 269-270, 274, 296, 307, 411, 414, 425, 438, 539, 551, 567, 592, 675, 684, 702
  essay, 592
  Expressive Vocabulary, 551
  Problem with, 270
  select, 81, 235, 414, 675
  studying for, 567

Text, 5, 8, 13, 24, 31, 33-34, 45, 47, 53, 58, 88, 96, 164, 166, 168-169, 181, 183, 189, 221, 228, 246-247, 257, 313, 482, 536-537, 539, 541, 544, 548, 557, 575, 596, 705
Textbooks, 57-58, 477, 499, 537, 692, 724
The Parent, 70, 315, 356, 360, 369, 427-428, 432, 456, 458, 541, 678
Theater, 640
Theatre, 47
Theme, 546
theories, 27, 32, 95, 537, 585, 695, 712
Theories and philosophies, 95
Theory, 7, 13, 26-27, 32, 41-43, 235, 272, 334, 537, 550, 585, 587, 636, 682, 688, 690-691, 695, 699-700, 702-703, 710, 712-715, 720

Therapist, 15, 40, 69, 75, 81, 86, 88, 149, 153, 183, 226, 254-255, 261, 291, 295, 306, 320, 351, 353-354, 357, 364, 372, 381, 411, 417, 471-472, 483, 490, 492, 494, 500, 502-504, 508-509, 514, 562, 586, 589, 614, 627, 683, 712, 714, 727
Therapy, 17, 29, 80, 94-95, 342, 370, 376, 397, 561-562, 568, 614, 620, 626, 628, 644-645, 669, 676, 686-687, 689-690, 693-694, 696-697, 700-701, 703-706, 709-710, 713-714, 717-718, 722-723, 726-728
  depth, 95
Think, 27, 35, 45, 57, 65, 88, 183, 371, 402, 462, 542, 550, 585, 591-593, 598-599, 609, 627, 629, 649, 661, 677, 700, 717
Thinking, 20, 33, 43, 45, 53, 183, 375, 537, 542-543, 547-549, 591, 652, 712

thinning, 15, 159, 266, 324, 333, 336, 343, 351-352, 366, 486, 491-492, 495-496, 503-504, 507-508, 699, 707
Thomas, 34, 357, 458, 590, 604, 606, 609, 664, 688, 698, 703, 709, 725
Thompson, 24, 42, 85, 127, 133, 149, 158-159, 197, 200, 203-204, 229-231, 287-288, 303-306, 353-354, 356, 364, 366, 369, 418, 472, 482, 501, 503, 506, 508, 521, 644, 691, 695-696, 699, 702-704, 711, 717, 719, 723-724, 727

Threats, 123-125, 128, 143, 180, 252, 255, 257, 524Threshold, 94, 99, 104, 362, 376, 456, 458Thrust, 286Time, 1-3, 5-20, 26, 30-32, 34, 38, 40, 42-43, 45-46,

48-50, 53-60, 64-65, 71-75, 81, 83, 89, 92,94-101, 104, 108-121, 123-124, 126-137,139-140, 142, 144, 148-153, 155-161,163-166, 168-170, 172, 174-177, 181-182,185-195, 198-204, 206-209, 214-217, 219,221, 223, 228, 232, 236-240, 242-244,247-248, 250-251, 253, 255-259, 263,266-271, 277-279, 282-284, 286, 289-291,295-301, 303-304, 306, 308-310, 312-315,322, 325-327, 329-335, 337, 340, 342-343,345, 347, 349-351, 353, 359-361, 364-366,368-369, 372, 374-382, 384-389, 391-392,394, 397-399, 401-404, 406, 409-410, 412,417, 419-420, 422, 425, 427, 429-430, 432,436, 439-442, 446, 448-450, 452, 455-457,459-465, 470-471, 473-476, 478-479,482-483, 485-497, 498, 501-509, 511,513-514, 516-519, 521-530, 533-534, 538,541-542, 546, 548, 551-552, 559-562, 564,567-568, 570-578, 580-581, 584, 586-591,593, 596-601, 603, 605-613, 615-617, 620,623, 629-630, 633, 635, 640-641, 643-646,648-649, 652, 654-657, 661, 663, 666-668,675, 677, 679, 683, 685, 689-691, 695, 697,699-702, 704, 709-714, 716-717, 719-720,724-726

elapsed, 5, 10, 15, 96, 100-101, 120, 130, 150,237, 244, 291, 326, 330, 334-335, 340,351, 366, 457, 489, 645

engaged, 13, 40, 99, 108, 110-111, 113-114, 121,209, 250, 259, 263, 277, 297-298, 333,353, 364, 376, 457, 483, 486, 488,517-518, 524-529, 533, 570, 589, 598,609, 615-616, 655-656

for responses, 11, 334-335, 455, 465, 644on task, 109-111, 457, 478, 528-529, 599, 606,

612, 717to think, 462, 598, 649, 677units, 8, 14, 18, 81, 97, 99, 126, 151, 158, 166,

442, 452, 548, 551, 655-656Time out, 186, 482, 689, 691, 717, 726Time sampling, 10-11, 13, 18, 20, 92, 108, 110,

112-115, 120-121, 126-128, 135, 137, 140,516, 599, 603, 716, 719, 724

Title, 81, 704Toileting, 362Token, 10, 18, 34, 46, 59, 130, 186, 229, 254, 259,

282, 284-285, 290, 488, 491, 558-582, 607,630, 632, 645, 647, 657, 686, 689-691, 693,695, 697-698, 701-702, 711, 713, 716,718-719

Token system, 254, 571-572, 574, 718


Token systems, 574
Tone, 47-48, 289-290, 314, 348, 350, 361-362, 439, 538, 604, 606, 642-643
Tone of voice, 361, 642-643
Tools, 33, 39, 93, 124, 264, 274, 403-404, 407, 465, 591-594, 676, 695, 710
    for teaching, 407
Toothbrushing, 707
Topics, 20, 537, 544, 559, 617, 677, 686, 693, 697, 703, 705, 723, 727
Topography, 5, 10, 15, 18, 46-48, 50, 52-54, 64-66, 68, 86-88, 90, 92, 103-104, 106-107, 120, 151, 187, 233, 278, 427-428, 432, 437, 455-460, 465-466, 473, 480, 512, 520, 522, 533-534, 537, 543, 556, 639, 643, 662, 682
Total Communication method, 250
touch, 47, 49, 53, 287, 289, 337, 419, 447, 461, 472, 513, 597, 614, 628, 644, 659-660, 698, 715
Touching, 86, 104, 117, 254, 257, 259, 289, 346, 419-420, 427, 430, 456, 483, 485, 526, 531, 533, 597, 644
Tourette syndrome, 183, 728
Toys, 79, 186, 213, 225, 287, 291, 297, 318, 337, 384, 396, 484, 489, 522, 525-527, 540, 552, 568, 572, 654
Traffic patterns, 641
Training, 1, 4-6, 8-9, 11, 14, 17-18, 29, 124, 128-130, 133-134, 143, 150-152, 154, 156-157, 159, 167, 169, 183, 206, 211-215, 227, 229-231, 233-234, 236-237, 250, 252, 255-258, 260, 262, 266, 268, 271, 293, 300-301, 336-337, 356, 360-361, 367, 369-370, 393, 402-403, 405-406, 408, 411-416, 418-420, 424, 427-433, 435, 437-438, 440, 442-443, 445-453, 454-456, 460-463, 465-466, 483-484, 487, 498, 501, 505-509, 516, 519, 536, 540, 547, 550-557, 562, 572-573, 592, 594, 601-602, 608, 613-614, 622-629, 631-632, 635-642, 644, 650-654, 656, 659, 662-663, 664, 666, 669-674, 680, 684, 685-690, 692-699, 702-707, 710-716, 718-720, 722-725, 727-728
    peer tutor, 256
    script, 227, 229, 256, 706, 719
    self-instructional, 636, 689-690, 697, 703, 715
Traits, 27, 64
Transcription, 13, 18, 536, 539, 542-544, 548, 556-557
Transfer, 80, 400, 419-421, 425, 547, 551-556, 624, 650, 689, 707-708, 714, 720, 722, 724, 727
Transfers, 110, 547
Transition planning, 592
Transitions, 98, 650-651, 686
Transitivity, 14, 17-18, 408, 414-415, 425
Translation, 400, 691
Transparencies, 256
Travel, 168, 436
Treatment, 1-4, 6, 9-14, 17-20, 28, 38, 40, 63, 71, 73, 75-76, 79, 83, 89, 94-95, 102, 114, 117-118, 120-121, 124, 127, 130-131, 140, 143, 148-150, 152-153, 159, 165, 167-168, 170, 175-176, 178, 180, 182-195, 196-198, 200-202, 204-219, 220-223, 225-227, 232, 235-242, 244, 245-247, 249-251, 253-264, 270-274, 277, 290, 292, 313, 318, 320-321, 325-326, 334, 336, 340, 346, 351, 353, 356-357, 359, 362-370, 373, 376, 378, 380-384, 391, 409-411, 413, 421, 445, 457, 471-474, 477-480, 482, 484-486, 488-495, 497, 500-504, 507-509, 512-513, 515, 520, 563, 569, 581, 589, 598, 614-615, 620, 624, 626, 630, 646, 659, 665-670, 672, 674-684, 685-687, 690-699, 701-702, 704-720, 722-728
    lifestyle changes, 680
Treatment acceptability, 258-260, 370, 714
Treatment failures, 369
Treatment methods, 89
Treatment plan, 83, 381
Treatment settings, 492, 589
Treatments, 1, 11, 16, 27, 39, 94-95, 102, 120, 148, 164, 183, 186, 196-219, 249-250, 252, 255, 257, 270, 273-274, 289, 304, 318, 321, 353, 362-363, 368-370, 391, 469, 477, 499, 501-502, 516, 524, 608, 636, 668, 676-677, 682, 685, 688, 701, 704, 711, 714, 718-719, 721
Trend, 1-3, 5, 16-17, 19-20, 92, 98, 146-150, 168, 170-175, 177, 185, 188-189, 195, 197, 209, 212, 235, 294, 316, 346, 474, 727
Trend lines, 98, 171-173, 177, 727
Triangles, 151, 444, 626
Truth, 19, 23, 32, 257, 269, 631, 661
Tube feeding, 317
Tuning, 355
Turns, 203, 205, 314, 371, 398, 403, 428, 435, 461, 511, 612, 632, 651
Tutors, 653, 688-689
Type II errors, 269

U
Unconditioned stimulus, 4, 12, 15, 19, 44, 49-51, 66, 393, 395, 399, 409, 469, 540
Understanding, 7, 16, 20, 23-24, 27, 31-33, 35, 38-40, 42-43, 45, 47, 55, 58, 60-61, 63, 70, 93-94, 109, 113, 134, 154, 164, 169, 177, 179-180, 182, 194, 246-247, 249, 260, 264, 271-272, 278, 280-281, 286, 313, 342-343, 395, 403, 405-407, 409-411, 414, 427, 465-466, 473, 499-500, 513, 516, 524, 537, 542-543, 548, 551, 573, 581, 585-587, 595, 607, 616, 619, 648, 662, 677-678, 681, 688, 695, 701, 715, 719-720
Undoing, 14
United States, 36, 131, 548, 588, 671
Units, 8, 14, 18, 63, 81, 93, 97, 99, 126, 151, 158, 166, 437, 442, 452, 537, 539, 548, 550-551, 655-656, 674
Universities, 36, 671, 674
U.S. Department of Education, 94, 594
Utensils, 82, 403, 407
Utilization, 710, 728
Utterances, 87, 458, 483-484, 539

V
Vacation, 82, 168, 371
Validity, 7, 10, 13, 15-17, 19, 38, 42, 68, 79, 85, 89, 91, 122-127, 142-143, 178, 180, 182, 184, 193-194, 215, 242, 246, 250-253, 256-259, 261-267, 270-271, 273-274, 345, 370, 387, 448, 508, 520, 609, 616-617, 665, 667, 687, 689, 696, 699-700, 720, 727-728
    criterion, 10, 16, 19, 38, 142, 184, 242, 258, 616, 700
    face, 182, 262
    threats to, 123-125, 143, 180, 252, 257
Values, 1, 7, 10, 12-14, 17, 19, 24-25, 61, 98-99, 124, 126, 128, 130-133, 142-144, 149-151, 159, 162-163, 166-168, 171, 174, 176-177, 178, 185-186, 188-189, 195, 209, 211, 295, 310, 336, 519, 642, 668, 680, 721
    philosophy, 12
Variables, 2-4, 7-10, 14-16, 19, 23-24, 26, 30, 32-34, 36, 38, 40-43, 45, 47-48, 50-51, 56, 58-59, 62-66, 69-71, 75, 83, 93, 104, 120, 128, 133, 140, 147-148, 150, 152, 158, 162, 165, 169-170, 174-176, 178-189, 191, 193-195, 196-197, 199, 201-202, 205-206, 208, 215-216, 218, 220, 223, 227, 234-238, 240, 244, 245-258, 263-267, 269-274, 278, 281, 283, 341-342, 350, 356, 363-364, 366, 368-369, 371, 389, 391, 393, 395, 400, 403, 405-407, 409, 427-428, 432, 470, 472, 475, 480, 498, 512, 514-516, 520, 531, 533, 537-539, 542-544, 547-551, 553-557, 561, 568, 584-587, 603, 607-608, 619-620, 641, 643, 656, 663, 681-682, 692, 707-709, 711, 715, 725
    antecedent, 2-4, 7-9, 14-15, 19, 30, 42, 48, 51, 62, 65-66, 71, 185, 187, 191, 195, 281, 342, 366, 369, 391, 393, 405, 409, 427-428, 432, 498, 512, 514, 516, 533, 539, 542, 548, 553-557, 584-585, 619, 641, 656
    isolation of, 180
    measurement, 3, 7, 9-10, 14, 19, 24, 36, 38, 41, 43, 45, 69, 75, 93, 104, 120, 128, 133, 140, 147, 150, 158, 165, 169-170, 176, 179, 181, 185, 187, 189, 191, 193-195, 216, 218, 223, 234-238, 240, 244, 246-247, 249, 252-254, 257, 267, 271-273, 432, 475, 692
    ranked, 83
    scaled, 16, 128, 152, 169, 176, 267
Variance, 127, 698, 700
Vegetables, 290
Ventilation, 378, 571
Veracity, 291
Verbal ability, 316
Verbal behavior, 2, 9-11, 16, 20, 28, 63, 66, 70, 80-81, 90, 213, 280, 288, 405, 407, 427, 477, 483-484, 536-557, 561, 620, 690-692, 694-695, 699, 703, 707-709, 711-712, 715, 717-723, 726, 728
    content and, 556
Verbal cues, 652
Verbal praise, 23, 204, 333, 431, 433, 573-574, 616, 656
Verbal reprimand, 293, 353, 357, 364
Verbal self, 296, 692
Verbal skills, 79, 307, 531, 541, 566
Verbs, 32, 133, 147, 537, 540, 552
Vermont, 685
vicarious reinforcement, 630, 704
Victim, 345, 360, 479, 679
Video, 116-119, 129-130, 201-202, 224, 257, 259, 261-262, 297-298, 427, 594
Video games, 297-298
Videos, 552
Videotape, 104, 116-117, 132, 254, 427, 463, 695, 723
Videotapes, 117, 133, 257, 645
Videotaping, 116, 130
Vignettes, 129
Virtue, 54, 279, 281, 682
Vision, 47, 240, 417
Visual aids, 696
Visual cues, 604
Visual discrimination, 712
Visual impairment, 64, 472, 492
    causes, 64
Visual impairments, 102, 263, 715, 724
Visual prompts, 419
Visualizing, 62
Vitamins, 94
Vocabulary, 35, 78, 281-282, 548, 551, 694, 726
    sight, 694
Vocal tics, 728
Vocational training, 623, 650
Voice, 55, 104, 117, 158, 254, 287, 325, 357, 361, 429, 456-459, 530, 546, 593, 629, 641-643, 696, 710, 719
Voice volume, 104, 117, 254, 456-458, 696
Voicing, 287
Volume, 31, 104, 117, 254, 286, 293, 312, 445, 456-459, 492, 516, 538, 616-617, 645, 670, 696, 709, 727
volunteering, 259
Volunteers, 128, 478
Vomiting, 49, 445
VORT, 439
Vort Corporation, 439
Vowel sounds, 102
Vowels, 413, 707

W
Wages, 331
Walden Two, 28, 31, 721
Walls, 37, 249
wants, 57, 78, 80, 82, 95, 130, 180, 182, 284, 297, 444, 449, 459, 524, 540-541, 551, 567, 574, 581, 584-585, 596-598, 603, 606, 612, 629, 651, 665
Warmth, 287, 289, 394, 546
Warning signs, 584
Washington, 36, 293, 379, 676, 685-687, 689, 700-701, 703, 713-714, 721, 725
Watson, 27, 29-30, 42, 227, 236, 249, 585-586, 597-598, 615, 724, 726
Wealth, 381
Weed, 64, 629
Weight, 45, 48, 81, 95, 451, 559, 584, 589, 605, 617-618, 711, 722
Welfare, 78, 631, 666, 668-669, 684, 713
Well-being, 666, 673, 683
West Virginia, 36, 700
White, R., 694
Whole, 15, 18, 20, 52, 92, 97, 110-112, 114-115, 120, 130, 137, 183, 201, 248, 386, 442, 461-462, 489, 547, 566, 576-577, 587, 593, 646-647, 652, 683, 691
Whole-class instruction, 652
Windows, 119, 353, 594
Wisdom, 661
Withdrawal, 1, 18, 20, 49, 56, 58, 61, 66, 150, 192-193, 196-198, 200, 205-206, 218, 244, 278, 310, 318, 375-376, 381-383, 387-389, 476, 551, 553, 574, 588, 656-658, 663, 673, 676, 686-687, 710, 719


Withdrawal designs, 196, 719
Women, 150-151, 336, 502, 613, 636, 667, 705
Word analysis, 212-213
Word identification, 421
word problems, 562, 627, 630, 635
Word reading, 319, 694, 705, 728
Words, 8, 16, 25-26, 31, 35, 37-38, 45, 56, 58, 60, 71, 87, 94-96, 98, 102-104, 109, 116, 123-124, 126, 131-132, 140, 150, 156-158, 161, 166, 181-183, 203, 209-212, 255, 264, 267, 283, 286, 307-308, 318-319, 326, 329, 356, 358, 367, 401, 405, 414, 417-418, 421, 431, 449, 455-458, 483, 502, 524, 531, 533, 537, 539-543, 546-547, 549, 551-555, 587, 603, 606-608, 611, 616, 625, 627-628, 630, 632, 634-635, 642-643, 656, 661, 667, 677, 688, 701, 720
    base, 26, 102, 661
    classes of, 60, 109, 421
    exception, 45, 156, 367, 642
    generative, 587
    manipulation of, 26, 37, 405, 455, 502
Work, 2, 23, 25, 35-37, 39-40, 43, 50, 55-56, 59-60, 64-65, 74, 76, 80-81, 83, 87, 93, 96, 108, 118, 123, 129, 140-141, 147, 181, 197-199, 238, 246, 248, 257, 259-260, 262-263, 267-269, 274, 277-278, 281, 283, 289-290, 296-297, 302, 307-308, 313-314, 316-318, 320, 323, 327, 329, 331, 336, 340-341, 350, 355-356, 366-367, 371, 388, 400, 404, 410, 413, 419, 427, 455, 462, 465, 469, 483, 486-487, 499, 521, 524-529, 531, 533, 537-538, 543-544, 552, 556, 560-562, 564, 584-585, 588, 590-591, 596, 598, 601-611, 613, 616, 623-625, 634, 639, 641-644, 646-647, 650-652, 661, 666, 668, 671-672, 674, 678, 680, 683, 692-693, 695, 698, 703-704, 706-707, 713, 715-717, 719-720, 724-725
Work habits, 623
Workbooks, 87
Workplace, 39, 185, 371, 589, 673
Worksheets, 74, 116, 129, 134, 141, 208, 268-269, 291, 561, 605-606, 646, 650
Wrist counter, 108, 361, 610, 708
Writers, 282, 477, 598, 607
Writing, 31, 55, 59, 63, 74-76, 87, 95, 97, 100, 103, 117, 128, 156, 168, 272, 277, 281-283, 292-294, 296, 485, 522, 524, 539, 542, 545, 556, 561, 564, 566, 572, 582, 587-588, 598, 603, 608, 616, 623, 646, 648, 651, 682, 687, 689, 694, 696, 701, 709-710, 712, 718
    as punishment, 59
    explanatory, 587
    form of, 59, 95, 282-283, 293, 522, 556, 587, 616
    kid, 55, 603
    right, 55, 95, 103, 564, 598, 651
    talking and, 282
    to solve, 603
Writing skills, 566, 701
Written language, 83, 537, 561, 714, 728

X
X axis, 149, 158-160, 165, 167, 176

Y
Y axis, 16, 149, 152, 156, 158-160, 167, 171, 176, 487, 489
Yale University, 712
Yoga, 589, 699
Young children, 39-40, 109, 234, 356, 376, 418, 432, 540, 546, 604, 606-607, 646, 671, 687-688, 694, 703, 716-720, 725-726

Z
Zero, 6, 19, 99, 102, 104, 116, 149, 160-161, 171-172, 177, 190, 226, 231, 240, 314, 348, 357, 359, 363, 376, 412, 456, 469-472, 479, 487-488, 490, 492, 494, 497, 500, 608, 657, 721

Zoloft, 531

