Metacomponential development in a Logo programming environment


Journal of Educational Psychology 1990, Vol. 82, No. 1, 141-149

Copyright 1990 by the American Psychological Association, Inc. 0022-0663/90/$00.75

Metacomponential Development in a Logo Programming Environment

Douglas H. Clements
Graduate School of Education

State University of New York at Buffalo

Effects of a theoretically based Logo environment on executive-level abilities were investigated. Forty-eight third graders were tested to assess pretreatment level of achievement and were randomly assigned to one of two 26-week treatments: Logo computer programming or control. Posttesting with a dynamic interview instrument revealed that the Logo programming group scored significantly higher on the total assessment of executive processing. Features of the instructional environment, such as explicitness and completeness, help account for these effects. Structural coefficients were meaningful for three of four individual processes. The type of Logo environment used may have less effect on planning processes than on those processes that construct elaborated mental schemata for problems. Classroom tasks may provide substantial experience only with the former; in contrast, the Logo environment may have emphasized the expression of the latter in general, as well as domain-specific, terms.

Contradictory research results regarding the use of the Logo computer programming language to enhance higher-order thinking abilities appear in no small part attributable to differences in instructional environments. Unfortunately, these environments are not usually described in sufficient detail, based on a theoretical foundation, or closely linked to expected cognitive consequences. The purpose of this study was to investigate the effects of a theoretically based Logo environment on the executive-level abilities of third-grade children.

A Theoretical Foundation

A felicitous theory on which to base a Logo environment intended to promote metacognitive abilities would contain specifically identified components and a hierarchical organization that would facilitate its application and interpretation. Sternberg (1985) hypothesizes that different types of problem-solving processes are carried out by separate components of people's information-processing systems. Components are elementary processes that operate on internal representations of objects. Highest in the cognitive hierarchy are metacomponents—executive processes that control the operation of the system as a whole and plan and evaluate all information processing. They include deciding on the nature of the problem, choosing and combining performance components relevant to the solution of the problem, selecting a representation, and monitoring solution processes.

I gratefully acknowledge the cooperation of the students and staff of the Hudson Local School District and the comments of James Hiebert, Bonnie Nastasi, Steven Silvern, and two anonymous reviewers on a draft of this article.

Correspondence concerning this article should be addressed to Douglas H. Clements, State University of New York at Buffalo, 593 Baldy Hall, Buffalo, New York 14260.

Sternberg has posited that cognitive development results to a large extent from the metacomponents' ability to adjust their functioning on the basis of feedback they receive from other components. They gather information about where, how, and especially when the various components might be best applied. The cognitive monitoring metacomponent plays a central role in this process.

Logo Environments and Metacomponential Functioning

The proposal that certain Logo programming environments can strengthen metacomponential abilities is based on two complementary rationales (Clements, 1986b). The first is that Logo environments can serve as catalysts of (unconscious) componential use. The second is that the environments can encourage children's explicit reflection on their own problem-solving processes.

The first rationale is based on the assumption that there are features of particular Logo environments that educe metacomponential processing. For example, both the theory on which Logo was based (Minsky, 1986; Papert, 1980) and Sternberg's (1985) theory attribute central importance to the role of cognitive monitoring in learning. Monitoring is prevalent when children actively infer consequences of causal sequences, enact instructions, and find and fix problems (cf. Markman, 1981). Logo programming involves operations of transforming incoming information in the context of constructing, coding, and modifying such causal sequences. Although the nature of programming errors ("bugs") and their rectification are often not palpable, Logo does provide aids for such activity in its graphic depiction of errors, explicit error messages, and simple editor. Certain educational environments can be expected to facilitate the implicitly required cognitive monitoring—for example, those in which teachers model the processes of debugging, encourage children to use Logo's aids in finding and correcting errors rather than to quit, and elicit cognitive monitoring in its most general sense through questioning.

An emphasis on turtle graphics allows children to pose problems of varying levels of complexity, generate ideas for their own projects, represent these as goals, and identify the specific problems involved in reaching these goals. In addition, these problems lie between those that are clearly formulated and amenable to solution with a known algorithm (and for which there is therefore no need to decide on their nature; e.g., simple mathematics word problems) and problems that lack both a clear formulation and a known solution procedure (and which thus have almost no constraint framework). Logo problems are embedded in a context in which the range of problems and solutions is constrained, but decisions regarding specific problem formulation remain the children's responsibility. Environments most likely to support the development of the pertinent metacomponent, deciding on the nature of the problem, are those in which children create a substantial proportion of their own projects and in which they are challenged to analyze and compare problem types.

Turtle graphics problems allow representations of the problem goal, as well as partial solutions (and errors), in forms that are accessible to children, because such problems have analogues in children's noncomputer experiences, such as moving their bodies or drawing (Papert, 1980). Programming in turtle graphics also promotes representation of the solution process internally as an initial and a goal state (often expressed in pictures), as an intended semantic solution whose organization is frequently verbalized for others (e.g., when working in pairs), and as machine-executable code. Such opportunities for the development of the metacomponent of selecting a representation would be enhanced through encouragement and support for the use of a multiplicity of representations.

Programming requires the explicit selection and ordering of instructions in solving problems. Logo's modular nature allows students to combine procedures that they develop in various ways to solve graphic problems. Therefore, Logo programming may support children in choosing and combining performance components, especially if they are encouraged to use analysis and advanced planning.

The second rationale for the proposal that Logo might strengthen metacomponential processing is that these environments can foster componential cognizance. Children are normally not conscious of their own componential functioning. A unique claim is that Logo fosters explicit awareness of cognition. Papert (1980) maintains that while programming, children reflect on how they might do the task themselves and therefore on how they themselves think. It may be possible for children to learn simple notions about the components, then use that knowledge in solving problems, and finally begin to use the knowledge automatically, without conscious direction. Their use of these processes—initially unconscious and ineffective—may become first conscious and more effective (albeit slow), and ultimately, unconscious and expert. That is, metacognitive experiences fostered by learning about the metacomponents and by programming in Logo would provide declarative knowledge that would originally be interpreted by general procedures (Anderson, 1983; Minsky, 1986; Sternberg, 1985).

Characteristics of certain Logo environments may facilitate the occurrence of metacognitive experiences (Flavell, 1981): (a) children consciously solve problems using strategies unfamiliar to them; (b) they "communicate" their organization of the task and solution processes to each other (if working in pairs), to the teacher, and to a machine; (c) problems are often self-selected; children feel that they "own" Logo problems; and (d) errors are salient and frequent, but correctable (Clements, 1986a, 1986b). In addition, the isomorphism between the information-processing framework in which Sternberg's componential theory is embedded and Logo's computer science framework allows the act of procedural programming to serve as a metaphor for componential functioning. First, children's solutions in Logo have been externalized; they are now the turtle's solutions. Logo procedures can be used as metaphors for mental schemata representing solutions to problems; thus, the latter become "more obtrusive and more accessible to reflection" (Papert, 1980, p. 145) and more likely to encourage "thinking about thinking," or, in Piagetian terms, reflective abstraction. Second, this process-oriented use of procedures itself can serve as a metaphor for componential functioning; for example, "debugging" Logo procedures serves as a metaphor for cognitive monitoring. Educational environments would, of course, have to encourage such reflection, make metacomponential processing salient, and guide the construction and application of Logo programming/cognitive-processing metaphors.

Previous and Present Research

Although some research indicates that metacomponential functioning can be enhanced through programming, there is also evidence that such Logo environments may not affect all components equally. Across several studies using similar Logo environments, evidence is consistent only for enhancement of the metacomponent of cognitive monitoring (Clements, 1986a; Clements & Gullo, 1984; Lehrer & Randle, 1987; Miller & Emihovich, 1986; Silvern, Lang, McCary, & Clements, 1987). There is mixed support for the development of the metacomponents, deciding on the nature of the problem and selecting a representation (Clements, 1986a; Lehrer & Randle, 1987; Silvern et al., 1987), and little support for choosing and combining performance components (Clements, 1986a; Silvern et al., 1987). It may be that classroom tasks provide relatively more practice with the latter metacomponent. Research is needed that compares the efficacy of a single theoretically grounded treatment in developing different metacomponents.

Weaknesses of the tests used in previous studies also need to be ameliorated. For example, most assessed a constrained application (e.g., comprehension monitoring) of a particular metacomponent (cognitive monitoring). Parallels between programming and such tasks indicate another potential limitation in generalizability, in that both Logo and the comprehension monitoring tasks involved sequences of directions.

For these reasons, I investigated the effects of a Logo programming environment based on Sternberg's componential theory on the metacomponential abilities of third-grade children. It was hypothesized that these effects would be stronger for three metacomponents: deciding on the nature of the problem, selecting a representation, and monitoring solution processes. A new assessment of metacomponential functioning was used.

Method

Subjects

Subjects for the study were 48 children from a middle-class school system. From a pool of all children who returned a parental permission form, 20 boys and 28 girls in the third grade (mean age = 8 years 9 months) were randomly selected from the classrooms of seven teachers. Children were randomly assigned to one of two conditions, Logo computer programming or control.

Procedure

Scores from a standardized test administered schoolwide were used to determine pretreatment level of achievement. The computer activities were implemented over a period of 26 weeks (children came in 45-min shifts during the last period of the school day). At the end of these sessions, after a delay of 1 week, children were interviewed individually to determine their use of metacomponential processes. Interviews lasted from 60 to 90 min.

Instruments

Metacomponential assessment. Clements and Nastasi (in press) designed a dynamic interview instrument to measure metacomponential functioning of third-grade children in problem-solving situations. The basic strategy is to use problems whose successful solution depends on intensive use of a single metacomponent. Children read each problem. They are allowed to solve it with no help. If they are unsuccessful, they are provided a series of five successively more specific prompts. Their raw score is the number of prompts required; these scores are transformed so that a higher score indicates that fewer prompts were required (i.e., 6 indicates success without the aid of prompts; 0 indicates lack of success after all prompts were provided). Transformed scores are summed over items assessing each metacomponent; these raw scores are also converted to z scores to facilitate interpretation. Items measuring each metacomponent are presented to children in random order.
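The scoring transformation described above can be sketched in code. This is an illustrative reconstruction: the function names are mine, and the choice of z-score formula (population standard deviation) is an assumption the article does not specify.

```python
def transformed_score(prompts_used, solved):
    """The article's transformation of one item's raw prompt count:
    6 = success with no prompts, down to 1 = success only after all
    five prompts; 0 = no success even after all prompts."""
    if not solved:
        return 0
    return 6 - prompts_used

def z_scores(totals):
    """Convert each child's summed metacomponent total to a z score
    (population formula; an assumption, since the article does not
    say which variant was used)."""
    n = len(totals)
    mean = sum(totals) / n
    sd = (sum((t - mean) ** 2 for t in totals) / n) ** 0.5
    return [(t - mean) / sd for t in totals]
```

For example, a child who succeeds after two prompts on each of three items would total 3 × (6 − 2) = 12 on that metacomponent before conversion to a z score.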

Two basic assumptions concerning the prompts are made. First, if children are successful on an item emphasizing a certain metacomponent or if they are successful given one or more prompts, then they are using that metacomponent. Second, the number of prompts needed with a given metacomponent is inversely related to the degree of retrievability of that metacomponent. Two different measures are taken for each item: (a) utilization—the number of prompts necessary for the children to exhibit use of the metacomponent (i.e., to "get the idea")—and (b) correctness—the number of prompts necessary for the children to respond correctly.

Scoring for correctness is straightforward: The score is the prompt number at which children give the correct answer. The criteria for scoring the utilization measure are as follows. For deciding on the nature of the problem (nature), children must exhibit a sign that they are asking the right question (or questions) to solve the problem and that they understand the structure of the specific problem (or type of problem). They may subdivide the problem, redefine goals in keeping with the problem, or start a correct solution process. For choosing and combining performance components relevant to the solution of the problem (performance), children must start to choose and combine processes (i.e., two or more steps) that may lead to a correct solution. They must indicate a systematic strategy for combining selected processes. For selecting a representation (representation), children must show evidence of using a mental model related to the problem—for example, mental imagery or a drawn figure, or a semantic or arithmetic structure. For cognitive monitoring (monitoring), children must show evidence indicating their belief that something is wrong.

Criteria for selecting items to measure each component were twofold. First, a logical criterion was established for each metacomponent. Second, empirical data had to demonstrate that children presented with each item benefited most from prompts directed at the metacomponent to be measured. (This was established during a pilot study.) The logical criteria, along with example items, follow.

The criterion for nature was that items be difficult to solve "because people tend to misdefine their nature" (Sternberg, 1985, p. 44). For example, young children often use association instead of mapping relations in analogy tasks; that is, they misdefine the problem (cf. Clements, 1987; Sternberg, 1985). One analogy problem was: boy pulling wagon is to girl pushing child on swing as car pulling trailer is to (ski lift, bulldozer pushing dirt, horse pulling cart, dogs pulling sled). (Note these were presented to children in a pictorial, matrix format.) The prompts were as follows: (a) "What do you think I want you to do?" (b) "What kind of problem is this?" (c) "Look! These pictures are related, or go together, in a certain way." (d) "We give you these two pictures. You need to find what goes here so that these two (indicate bottom two, globally) go together in the same way as these two (indicate top two)." (e) "The boy pulling the wagon and the girl pushing the swing are related to each other; they are doing the opposite. The car pulling the trailer goes through in the same way with one of these (indicate answers)." The instrument included three items measuring this metacomponent.

The criterion for performance was that items demand not just the choice of performance components but also the combination of these into a workable strategy. For example, several mathematics problems were included that required children to choose both the operations to use and the order in which to execute them. One problem asked, "John wanted to know how much his cat weighed. But the cat wouldn't stay on the scale unless he was holding it. How could he figure out the cat's weight?" The prompts were as follows: (a) "What could you do to solve the problem?" (b) "What plan would you use?" (c) "Could John get on the scale with the cat and get on again alone?" (d) "How would John find out how much the cat weighed alone?" (e) "Think of the difference between his weight with the cat and his weight alone." The instrument included six items measuring this metacomponent.
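The combination of steps the prompts build toward (weigh John holding the cat, weigh John alone, subtract) reduces to a single operation once the two weighings are chosen and ordered. The numbers below are illustrative, not from the article:

```python
def cat_weight(combined, owner_alone):
    """Weight of the cat = (owner holding cat) - (owner alone)."""
    return combined - owner_alone

# e.g., if John and the cat together weigh 75 and John alone weighs 68,
# the cat weighs 7 (units arbitrary).
```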

The criterion for representation was that successful solutions be contingent on the construction of an appropriate representation. For example, syllogisms frequently are solved easily if there are but three elements. But an internal, or most probably external, representation such as a vertical or linear array is required by young children to solve those with more than three elements. One syllogism used was, "Bill is faster than Tom. Pete is slower than Tom. Jack is faster than Bill. Jack is slower than Fred. Who is fastest?" The prompts were as follows: (a) "What could you picture in your head or on paper to help solve the problem? Think of pictures or words that tell or show you which is the fastest." (b) "What picture or diagram could you make on the paper? How would you tell which one was the fastest?" (c) "Could you make a picture or a line for Bill? Could you put his name next to it to remember which child is which?" (d) "Could you make a line or picture for Tom? When someone is faster than someone else, would the picture or line be shorter or longer?" (e) "Make a picture of all the children. Each line will be a child with a name next to it. Longer lines are faster children."
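The linear-array representation the prompts build toward can be mimicked computationally: restate each sentence as a (faster, slower) pair, and the fastest child is the one who never appears on the slower side. The code is mine, a sketch of the representation rather than part of the instrument:

```python
# Each sentence restated as (faster, slower); "Pete is slower than Tom"
# becomes ("Tom", "Pete"), and so on.
facts = [("Bill", "Tom"), ("Tom", "Pete"), ("Jack", "Bill"), ("Fred", "Jack")]

def fastest(pairs):
    """The fastest child is the only one never listed as slower
    (assumes the facts determine a unique maximum, as here)."""
    everyone = {name for pair in pairs for name in pair}
    slower = {s for _, s in pairs}
    (winner,) = everyone - slower  # exactly one child remains
    return winner
```

Applied to the item's four facts, this yields Fred, the answer the linear array of lines would also reveal.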


Another example is the problem, "Four children, A, B, C, and D call each other on the telephone. Each talks to every other child on the phone. How many calls are there?" The five prompts ranged from "What could you picture in your head or on paper to help solve the problem? Or, what numbers could you use?" to "Would it help to put the children in a square? Could you show all the calls with lines? Each pair of children make one call. Keep track of what you found in a chart or table if you like." The instrument included seven items measuring this metacomponent.
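The diagram the final prompt suggests (children at the corners of a square, one line per call) is exactly the set of unordered pairs, which a short sketch can enumerate (code mine, not part of the instrument):

```python
from itertools import combinations

children = ["A", "B", "C", "D"]
calls = list(combinations(children, 2))  # one call per unordered pair
# 4 children give 4 * 3 / 2 = 6 calls: AB, AC, AD, BC, BD, CD
```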

The criterion for monitoring was that items induce errors. Children were purposely misled in some way by erroneous information. For example, "When Albert was 6 years old, his sister was 3 times as old as he. Now he is 10 years old and he figures that his sister is 30 years old. How old do you think his sister will be when Albert is 12 years old?" The prompts were: (a) "Do you have to watch out for mistakes when you do this problem?" (b) "Is there something in the problem that could trick you if you weren't careful?" (c) "Is Albert right when he figures that when he is 10 years old his sister is 30 years old?" (d) "Will his sister always be 3 times as old as Albert? Is that a mistake? Should you multiply or add years?" (e) "Don't make a mistake. When Albert was 6, his sister was 18. Twelve years older. What would his sister be 1 year later, when Albert was 7? One year later? (continue)." The instrument included five items measuring this metacomponent.
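The misleading multiplicative cue and the correct additive relation can be contrasted directly. This is a sketch of the item's arithmetic; the constant names are mine:

```python
ALBERT_THEN, SISTER_THEN = 6, 18  # sister was 3 times as old

def sister_age(albert_age):
    """The age gap is constant: the sister is 18 - 6 = 12 years older.
    Adding the gap is correct; multiplying by 3 (Albert's error,
    giving 30 when he is 10) is the trap the item induces."""
    return albert_age + (SISTER_THEN - ALBERT_THEN)
```

So when Albert is 10 his sister is actually 22, not 30, and when he is 12 she will be 24.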

Pretreatment assessment of achievement. The California Achievement Test, Level 13 (CAT; CTB/McGraw-Hill, 1979) is a series of test batteries in reading, language arts, and mathematics. The mathematics scores of the CAT were recorded (K-R 20 reliability = .95) along with the total battery scores (r = .98) because items on the metacomponential assessment tended to be logical or mathematical.

Treatments

Logo children met for three sessions per week for a total of 78 sessions. Children's absences ranged from 0 to 15 with a mean of 5.1 sessions. During each session, six pairs of children worked on six Apple computers under the guidance of one or two adults (the "teacher"—a graduate assistant experienced in teaching with Logo and other computer tools—and the author). Both adults were present for about two-thirds of the lessons; one or the other was present for the remainder. It was planned that sessions would consist of four phases: (a) An introduction included a review and discussion of the previous day's work, questions, and, about once per week, a "programmers' chair" (in which a pair of students presented a completed program); (b) a teacher-centered, whole-group presentation offered new information (e.g., a new Logo command) or a structured problem; (c) a phase in which students worked independently on either teacher-assigned problems (about 25%) or self-selected projects (including projects for which the teacher introduced "themes" but students were responsible for selecting the specific problem); and (d) a phase in which the teacher provided a summary and encouraged sharing with the whole group.

The sessions commenced with an explanation of the purpose of the treatment (to develop problem-solving abilities) as well as the substance of the treatment (programming in Logo). Children played games (first off, then on, computers) that familiarized them with basic turtle movements and estimation of the measures of these movements. Children were challenged to determine the exact length and width of the screen in turtle steps and to create as many ways as they could to get to a given location. For all challenges, students discussed their solutions and other situations in which such strategies would be useful.

Procedural thinking was introduced—first through discussions of children's experiences of learning and teaching new routines, ideas, and words, then through the notion of teaching Logo procedures. Children used a support program that allowed them to define a procedure and simultaneously watch it being executed while editing whenever necessary (Clements, 1983/84). They were challenged to create a stairway via dramatizations, then paper-and-pencil, and finally with Logo, constructing a stair procedure as a subprocedure for stairway. Thus, problem decomposition and the use of procedures were introduced from the beginning. Different solutions were compared, and children were encouraged to construct and discuss variations of their procedures (e.g., What was altered in the procedure to create what effect?). In this way, there was an attempt to help children construct mappings between components of procedures and their effect. Finally, students were asked to plan what they could make with stairway (i.e., how stairway itself could be used as a subprocedure). These programs were in turn analyzed.

At this point, children were introduced to the "homunculi," cartoon anthropomorphisms of the metacomponential processes. The homunculi were represented and introduced as follows:

1. The problem decider was a person thinking about what a problem means (via a "think cloud"). The problem decider often asked questions such as, "What am I trying to do?," "Am I doing what I really want to do?," "Have I done a similar problem before?," "How do the parts of the problem fit together?," and "What information do I have or need?"

2. The representer was an artist with her arm extended in front of her and thumb raised, looking off into the distance. She was surrounded by a piece of paper with a graph or chart, another piece of paper with writing, a drawing, and a three-dimensional model. These served as metaphors for various ways to represent a problematic situation. Specific representations were introduced when appropriate (e.g., drawing a diagram or picture).

3. The strategy planner was an intelligent-looking man with pencils and pens in his pocket, holding a notebook. Spaced over the remaining sessions, useful strategies in the strategy planner's repertoire were introduced, such as specific programming steps (described subsequently in this section), decomposing a problem, and guessing and testing (systematically).

4. The debugger was an exterminator—a metaphor for cognitive monitoring (which is more omnipresent in problem-solving than is "debugging" proper). To develop this more general cognitive monitoring, students were frequently asked, "What exactly are you doing?" ("Can you describe it?") "Why are you doing it?" ("How does it fit into the solution?") "How does it help you?" ("What will you do with it when you're done?") "Does this make sense?" (from Schoenfeld, 1985).

These homunculi were introduced as a part of the Logo-programming and problem-solving process. They aided four teaching methods: explication, modeling, scaffolding, and reflection. The goal of explication was to bring problem-solving processes to an explicit level of awareness for the children. The teacher used the homunculi to describe processes in which one had to engage to solve many types of problems. When the whole class solved problems, the teacher would use the homunculi metaphor to describe problem-solving processes and make them salient. The teacher would also model the use of the homunculi-based processes in solving actual problems. In scaffolding students' independent work with Logo, the teacher would try to ascertain the process with which the student was having difficulty and would offer prompts and hints focusing on this particular process (e.g., "Might there be a pattern you could find that would help? What could you write down to try to find it?"). If necessary, the teacher would model the use of the process directly. Finally, reflection was used in the first phase of the lesson, as teachers elicited group discussion about a pair of students' use of homunculi in solving programming problems, and in the third phase, as students were asked to reflect on their use of strategies in terms of the homunculi.

A general programming strategy for the planner's repertoire was introduced briefly to the group, then elaborated as teachers worked with pairs:

1. Make a "creative drawing"—a free-hand picture of your project. Remember to keep it simple and label its parts.


2. Make a planning drawing. Using a planning sheet (paper turned broadside with a turtle drawn at home), draw the turtle where it starts the procedure; draw and label each line, turn, or procedure; use a ruler and circular protractor for measurements; and have the turtle end in the same location and same heading at which it started (i.e., construct a "state transparent" procedure). For each new procedure, make a separate planning sheet (i.e., start at the beginning of Step 2 for each new procedure).

3. Have one partner read the instructions in order as the other records them at the right-hand side of the planning sheet to construct procedures.

4. Type these procedures into the computer.

Use of the metacomponents was reviewed and encouraged as children worked on projects that emphasized basic geometric figures (e.g., square, equilateral triangle, and rectangle), variables, and regular polygons; seasonal interests (e.g., writing valentine heart procedures using arcs); contests (e.g., writing the shortest, or most elegant, program for a "stacked rectangle pyramid" or duplicating given figures and using them as many ways as possible in the creation of a picture); list-processing projects (e.g., writing a "madlibs" generator or a simple conversationalist program); and collaborative work on a mural.
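The "state transparent" convention required in Step 2 can be made concrete with a small sketch. The Logo procedure it describes would be TO SQUARE :SIZE / REPEAT 4 [FORWARD :SIZE RIGHT 90] / END; the Python simulation below (a hypothetical illustration, not part of the study's materials) tracks turtle state and shows that the procedure ends at its starting position and heading.

```python
# Hypothetical sketch of the "state transparent" check from Step 2.
# Logo equivalent:  TO SQUARE :SIZE  REPEAT 4 [FORWARD :SIZE RIGHT 90]  END
import math

class Turtle:
    def __init__(self):
        self.x = self.y = 0.0
        self.heading = 0.0                      # degrees

    def forward(self, dist):
        rad = math.radians(self.heading)
        self.x += dist * math.cos(rad)
        self.y += dist * math.sin(rad)

    def right(self, angle):
        self.heading = (self.heading + angle) % 360

def square(t, size):
    # Four sides, four 90-degree turns: 360 degrees in all,
    # so the turtle returns to its starting state.
    for _ in range(4):
        t.forward(size)
        t.right(90)

t = Turtle()
square(t, 50)
print(abs(t.x) < 1e-9, abs(t.y) < 1e-9, t.heading == 0.0)
```

Because the four turns sum to 360 degrees, the procedure leaves turtle state unchanged, which is exactly what the planning sheets asked children to verify.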

The purpose of the comparison group was to serve as a placebo control; that is, because children were volunteers treated as special by the school, there was a threat of a Hawthorne effect. Thus, these children also received computer experience under the same conditions as the experimental group (i.e., six pairs of children working with the same teachers), with two important differences. First, the content, designed to develop creative problem solving and literacy, included composition using Milliken's Writing Workshop (an integrated package of prewriting programs, a word processor, and postwriting, or editing, programs), as well as drawing programs. A composition process model (based on Calkins, 1986), including the processes of prewriting (e.g., brainstorming), writing, revision (conferencing), and editing (e.g., checking spelling and grammar), served as a framework for instruction. Thus, several characteristics of the Logo treatment were paralleled in the control group, including self-selection of topics and interpersonal interaction; however, the integration of Logo programming and anthropomorphic instruction in metacomponential functioning was unique to the experimental group. The second difference between the groups was that the control group met only once per week for a total of 26 sessions (mean absences = 0.9). All children received minimal exposure to Logo (2 weeks at 20 min per day) as part of the regular school program. One control child acquired a computer equipped with Logo at home during the study.

Results

Pretreatment Achievement

Table 1 presents the means and standard deviations of the pretreatment scores of the two groups, Logo programming and control. Pretreatment mathematics achievement, as measured by the CAT's mathematics score, was nearly identical, t(46) = -.04, p = .96. There was also no significant difference on the CAT's total battery score, t(45) = -1.22, p = .23. Correlations between the total battery score and the total metacomponential score were moderate (r = .42, p < .01 for utilization; r = .61, p < .001 for correctness). Sharing a moderate amount of variance with a measure of academic achievement is typical of a measure of intelligence and provides evidence of criterion-related validity for the metacomponential assessment.
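The equivalence check above is an ordinary independent-samples t test. A minimal sketch follows; the scores are simulated (drawn from normal distributions using the Table 1 means and standard deviations), since the raw CAT data are not reproduced in the article.

```python
# Hypothetical illustration of the pretreatment equivalence check:
# an independent-samples t test on two groups of 24 children.
# Scores are simulated; only the group parameters come from Table 1.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
logo = rng.normal(692.46, 38.20, size=24)      # Logo group mathematics scores
control = rng.normal(692.92, 32.29, size=24)   # control group mathematics scores

t, p = stats.ttest_ind(logo, control)          # df = 24 + 24 - 2 = 46
print(f"t(46) = {t:.2f}, p = {p:.3f}")
```

With group means this close, the simulated test, like the reported one, yields a t near zero and a nonsignificant p.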

Table 1
Means and Standard Deviations for Treatment Groups on Pretreatment Achievement (CAT)

                      Logo              Control
Measure            M       SD        M       SD
Mathematics      692.46   38.20    692.92   32.29
Total Battery    696.78   26.76    706.29   26.67

Note. CAT = California Achievement Test; scores are standardized to a single, equal-interval scale from 000 to 999. For each group, n = 24.

Metacomponents

Table 2 presents the means and standard deviations of the posttest scores of the two conditions, as well as reliability estimates. Results revealed high interrater agreement for all interview scores; internal consistency was high for total test scores but lower for subtest scores (Table 2). Intercorrelations among the subtests were moderately high, indicating from 14% to 49% shared variance between pairs of subtests (Table 3).

Differential use of metacomponents on items within the subtests was validated by categorizing children's problem-solving behaviors (recorded during test administration) according to metacomponent. For each metacomponential category, the highest percentage of behaviors was elicited by the corresponding subtest (Table 4; see Clements & Nastasi, in press, for complete descriptions of these analyses). This provides evidence of the construct validity of the instrument and its potential for differentiating metacomponential processes. Moderate to low reliability on some subtests and moderate amounts of shared variance among the subtests, however, suggest one must exercise caution in interpreting results concerning individual subtests.

To test differences between the groups on the four scores of the metacomponential assessment simultaneously, a multivariate analysis of variance (MANOVA) was performed on the standardized scores for each measure, correctness and utilization. For correctness, analyses revealed a significant omnibus treatment effect, F(4, 43) = 3.15, p < .05, in favor of the Logo group. To identify specific variables on which the groups differed meaningfully and to indicate the relative contribution of subtests to the treatment effect, a stepwise discriminant analysis was performed (Pedhazur, 1982). This analysis is appropriate for mutually correlated variables in that one variable is included in the discriminant function at each step (this variable being the one that results in the most significant F value after variables already included in the model have been adjusted for it). This would indicate if a subtest becomes superfluous because of the relationship between it and subtests already in the model. Structural coefficients greater than or equal to .30 were considered meaningful (Pedhazur, 1982). Structural coefficients for the four correctness variables were monitoring, .58; representation, .43; nature, .26; and performance, -.11. On this basis, it was determined that only monitoring and representation had meaningful structural coefficients on the correctness measure.
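A structural coefficient is simply the correlation between a subtest and scores on the discriminant function separating the groups. A minimal sketch of that computation on simulated data (the study's raw scores are not reproduced here; only the four subtest names come from the article) might look like:

```python
# Hypothetical sketch: compute structural coefficients as correlations
# between each subtest and the (single) discriminant function for two groups.
# All data below are simulated.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n = 24                                       # children per group
# Simulated standardized subtest scores; the "Logo" group is shifted
# upward on nature, representation, and monitoring, but not performance.
logo = rng.normal([0.3, 0.0, 0.3, 0.3], 1.0, size=(n, 4))
control = rng.normal([-0.3, 0.0, -0.3, -0.3], 1.0, size=(n, 4))
X = np.vstack([logo, control])
y = np.array([1] * n + [0] * n)              # group membership

# With two groups there is exactly one discriminant function.
scores = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, y).ravel()

subtests = ["nature", "performance", "representation", "monitoring"]
for name, col in zip(subtests, X.T):
    r = np.corrcoef(col, scores)[0, 1]       # structural coefficient
    print(f"{name:15s} {r:+.2f}")
```

Under the .30 cutoff the article applies, subtests whose simulated coefficients fall below that magnitude would be judged not to contribute meaningfully to the group difference.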


Table 2
Means, Standard Deviations, and Reliability Estimates for Treatment Groups on Metacomponential Measures

                          Standard score                     Raw score              Total
                      Logo           Control           Logo           Control     possible  Reliability
Measure             M      SD      M       SD        M      SD      M      SD    raw score    α     %

Correctness
Nature            0.154  1.088  -0.154   0.900     6.83   4.85    5.46   4.01       18      .29    99
Performance      -0.057  0.938   0.057   1.076     9.90   5.88   10.25   6.73       36      .55    90
Representation    0.222  0.818  -0.222   1.128    21.46   7.45   17.42  10.28       42      .73    98
Monitoring        0.296  1.048  -0.296   0.874    13.33   5.61   10.17   4.68       30      .49   100

Utilization
Nature            0.365  0.915  -0.365   0.964    11.33   4.33    7.88   4.56       18      .36    88
Performance       0.106  0.943  -0.106   1.064    23.08   7.62   21.38   8.60       36      .70    88
Representation    0.343  0.625  -0.343   1.186    36.75   3.45   32.96   6.56       42      .62    88
Monitoring        0.284  0.906  -0.284   1.027    16.58   4.98   13.46   5.64       30      .44    86

Note. α = coefficient alpha; % = percentage of interrater agreement.


For utilization, analyses revealed a significant omnibus treatment effect, F(4, 43) = 3.35, p < .05, in favor of the Logo group. Structural coefficients for the four variables were nature, .71; representation, .66; monitoring, .54; and performance, .19. Thus, it was determined that nature, representation, and monitoring had meaningful structural coefficients on the utilization measure.

To validate the implementation of the treatment, informal observations of the teacher's interaction with the children were conducted no less than once per week. It was found that the sessions followed the four-phase plan with one exception: The fourth phase, summary and sharing with the whole group, was seldom realized (fewer than 10 sessions), as students left for their classrooms or homes at different times. The observations also confirmed that children were encouraged to create their own problems, think through these problems on their own, and use metacomponential processing as an aid to posing and solving problems (via the "homunculi" and the teaching methods of explication, modeling, scaffolding, and reflection). For example, the teacher used the homunculi metaphor to make metacomponential processes salient by remarking, "That drawing seems like a good way to view this problem. Your representer made an excellent choice." Additional opportunities for using the teaching methods were noted and discussed with the teacher at the end of each week.

It was also observed that students occasionally talked to each other in these terms; for instance, one said to another, "Your debugger is working overtime, 'cause your problem decider missed the point." An example of students' use of this metaphor outside of the Logo environment transpired following the completion of a metacomponential interview. The interviewer asked, "Do you think of the homunculi when you're solving problems?" The boy replied, "No . . . but you always do use them. Like when a problem pops up. But there should be another homunculi [sic]. He would get a problem from the problem decider, drop it off to the representer and to the planner. You could even be writing the answer down and this guy could start the problem decider working on a new problem." This boy had independently constructed a function of the cognitive monitoring component that had never been mentioned to him but that had been posited by Sternberg: directing the actions of the metacomponents themselves. Thus, there is supporting evidence that students used the homunculi metaphor in and out of the Logo environment.

Table 3
Intercorrelations of Metacomponential Interview Subtest Scores

Subtest            Nature   Performance   Representation

Utilization
Performance         .42*
Representation      .50**       .69**
Monitoring          .38*        .66**          .67**

Correctness
Performance         .40*
Representation      .54**       .69**
Monitoring          .39**       .52**          .53**

*p < .01. **p < .001.

Discussion

In this study, I investigated the effects of a Logo programming environment based on Sternberg's componential theory on children's metacomponential abilities. Overall, the Logo group scored significantly higher on two measures (i.e., scoring systems) of an assessment of metacomponential processing—correctness of response and utilization of individual metacomponents.

What accounted for these positive effects? Significant features of the Logo environment included explicitness and completeness. To be universally accessible, metacomponential processing must be decontextualized. To this end, these processes were articulated as explicitly and thoroughly as possible when they arose in different contexts. The homunculi instructional device served to focus attention on the processes. In addition, children were asked to verbalize their goals and solution procedures, as well as their use of metacomponential processes, before overtly attempting a solution. Such attention to explicit awareness of metacomponential processes can be contrasted with the usual pedagogical emphasis on conveying a large corpus of factual knowledge, which often obfuscates higher-level thought processes.

Table 4
Percentage of Children's Behaviors Categorized by Metacomponent

Subtest           Nature   Performance   Representation   Monitoring   Other
Nature             21.85      51.16            1.02           7.02     18.95
Performance         2.23      84.85            0.00           6.15      6.78
Representation      2.73      40.25           48.07           3.68      5.28
Monitoring          1.14      29.17            9.09          37.88     22.73

Note. Mean number of behaviors coded for each subtest = 114.

The environment was complete in that (a) children engaged in all phases of the problem-solving process, (b) both general knowledge and domain-specific knowledge were addressed, (c) a comprehensive set of pedagogical approaches was used, and (d) social and emotional aspects of learning were considered. The project approach to Logo engaged children in all aspects of problem solving, including determination of the nature of problems, representation in different modes, strategy selection, and cognitive monitoring. In addition, both the modeling and scaffolding teaching methods allowed the full task to be perceived by the student. Modeling provides children with a schema for the application of the homunculi-based processes. This schema includes information concerning when, how, and why these processes are used to solve problems. Scaffolding encourages successive approximation of the entire range of skills involved in completing the task.

Both domain-specific knowledge and general methods for operating on that knowledge are critical to problem solving (Nickerson, Perkins, & Smith, 1985; Simon, 1980). Indeed, "You can't think seriously about thinking without thinking about thinking about something" (Papert, 1980, p. 10). Substantive content in the realms of mathematics and computer programming was an essential aspect of the treatment and was inextricably interwoven with the emphasis on general metacomponential functioning.

The function of each of the four pedagogical methods—explication, modeling, scaffolding, and reflection—has already been discussed. One question might remain: Was it necessary or efficient to have children spend the majority of the time solving problems independently? Affirmation can be found in the theory that, because children must build their own schemata, direct teacher instruction is insufficient; student initiation and use of metacomponential processes is required (Simon, 1980; Wagner & Sternberg, 1984). In addition, while a greater number of specific strategies could have been taught directly, this would not have been consonant with the goal of having children learn to use general categories of processes as represented by the homunculi. In fact, the main function of these general processes may be to construct and activate appropriate task-specific processes. For example, general monitoring processes may instantiate themselves as part of a local system that also includes relevant domain-specific knowledge (cf. Sternberg, 1985). The monitoring process in the global system gains information both about weaker but generally applicable strategies (e.g., a decision to stop and assess progress and goals either periodically or when one has a feeling of being caught in a loop) and about situations in which a domain-specific instantiation is more applicable and thus should be activated (e.g., in debugging a program). This is consonant with the theory that general components (i.e., those shared across domains) are those that are at the lowest level (e.g., encoding) and at the highest level (e.g., the most general instantiation of the metacomponents, which may also have domain-specific elaborations) of information processing (Simon, 1980). Thus, the Logo treatment may have taught children not so much how to apply specific cognitive skills as how to acquire, retrieve, and apply needed metacomponential skills in specific situations. The homunculi metaphors may have served as organizational frameworks for this learning.

Finally, the Logo environment was characterized by completeness in that social and emotional aspects of learning were considered. Children worked in pairs and were encouraged to solve problems cooperatively. Moreover, the nonroutine nature of the project tended to create a sense of shared problems and processes used to solve those problems.

What, then, is the role of Logo? That is, is Logo necessary to promote metacomponential development? Not entirely, of course. It did, however, play a major part in this treatment. Arguments have already been proffered that Logo is particularly suited to implicit evocation of metacomponential processes and to the promotion of metaphorical thinking in aid of those processes. Thus, it can serve as a tool that facilitates the role of the teacher as a mediator of metacognitive experiences. However, future research is needed to ascertain whether other tools can serve a similar function. It might be argued that componential processes should be integrated throughout the curriculum rather than emphasized within Logo sessions. Transfer effects in this study may have been attenuated by the lack of infusion of the metacomponential processes into children's usual school lessons. Although such integration is advisable, infusion often leads to diffusion and thereby often severely limits the focus and impact of a program. Therefore, a program that emphasizes metacomponential development, such as the Logo environment described here, may make a significant instructional contribution.

A second hypothesis of this study was that the effects of the Logo environment would differ across the metacomponents. Nature, representation, and monitoring were determined to have meaningful coefficients for the utilization measure, and monitoring and representation were determined to have meaningful coefficients for the correctness measure. This suggests schema construction—that is, a more complete construction of a mental schema for the problem, including a critic (monitor) that assesses consonance of ongoing problem-solving processes with that schema. Thus, the hypothesis that the Logo environment may have been less efficacious in developing the performance metacomponent was supported. It may be that regular classroom tasks and tests already provide substantial experience choosing and combining performance components. On the other hand, such skills as deciding on the nature of the problem, selecting a representation, and cognitive monitoring are emphasized far less frequently. (As one example, the protocol for most standardized tests ensures that the nature of the problems is explained to children before they complete each section. Most classroom assignments share this characteristic; in addition, what is frequently paramount for children is finishing the assignment and having it corrected, rather than monitoring their own solution attempts.) This is consonant with other research that has reported a limited impact of Logo on planning skills (e.g., Pea & Kurland, 1984).

Another possibility is that the elaborations (e.g., knowledge about when and how to apply processes) created by children in using each general metacomponent differed in applicability across domains. When children were taught to choose and combine performance components in the Logo environment, specific strategies (rather than general planning skills) consistently were emphasized, such as making a "planning drawing" and writing procedures based on this drawing. The metacomponential processes may have instantiated themselves as part of this local system but, if so, would be linked closely to the specific strategies stored there and would not be available for executive functioning in the global system. This local system would have been of limited use in the solution of other types of problems, such as the tasks that made up the assessment instrument. In contrast, the teaching of the other metacomponential processes tended to be expressed in general terms and to be anchored in domain-specific applications; they thus may have been more applicable to the assessment tasks. Consider, for example, the self-questioning strategies taught for nature (e.g., "What am I trying to do?") and monitoring (e.g., questions such as "Why are you doing what you're doing?" and "Does this make sense?" always preceded questions focused on specific debugging actions). A similar argument may explain results for the representation items. Those five problems on which the Logo children performed substantially better were spatially oriented problems amenable to solution through a relatively straightforward translation of the verbal problem into a picture or diagram (e.g., "A dog walks around a rectangle-shaped fence that is 12 yards around in all. If the rectangle is two times as long as it is wide, how long is each side?"). Such direct, but nevertheless generally applicable, translations between verbal and spatial representations were emphasized in the Logo treatment. The two representation problems on which the difference between the groups was small required a less manifest translation to a more abstract representation. (For example, scores for the two groups were identical on the "how many phone calls" problem previously described.) An observational study of videotapes of these children working in the Logo environment is presently being conducted to provide additional information regarding these possibilities.
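The fence item quoted above rewards exactly such a translation; once the picture is drawn, the arithmetic is two lines. Writing w for the width and l for the length:

```latex
% Worked solution of the fence problem (perimeter 12 yd; length twice the width).
l = 2w, \qquad 2(l + w) = 12
\;\Rightarrow\; 2(2w + w) = 12
\;\Rightarrow\; 6w = 12
\;\Rightarrow\; w = 2,\ l = 4.
```

The sides are thus 4, 2, 4, and 2 yards, a result a child can read directly off a labeled rectangle sketch.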

An increase in children's cognitive monitoring following their experience with Logo is one of the more consistent findings in the literature (Clements, 1986a; Clements & Gullo, 1984; Lehrer & Randle, 1987; Miller & Emihovich, 1986; Silvern et al., 1987). Examination of the results for individual problems suggests that the present treatment developed general solution-monitoring processes rather than other aspects of problem solving, such as domain-specific knowledge or other componential processes. Detecting the misleading information in the two problems for which there was the least difference between the groups also required scientific knowledge and insight (i.e., sensitive use of knowledge-acquisition processes; Sternberg, 1985). For example, the erroneous and misleading information in one such problem was the doubling of the weight of a girl standing on one foot to determine her weight standing on two feet. In contrast, the three problems on which the Logo children scored substantially higher required less the possession of substantive scientific knowledge than the evaluation of internal consistency (e.g., the "sister who was 3 times as old as Albert" problem).

Four caveats should be noted. First, conclusions regarding relative effects on separate metacomponents are tentative because of limitations in (a) the subtests of the assessment (e.g., low reliability of some subtests) and (b) stepwise discriminant analyses conducted with such subtests on a small sample size. Second, little can be inferred regarding the efficacy of this treatment in comparison with that of other programs with similar goals (especially when one considers the greater instructional time provided the experimental group). Third, although a rationale has been provided for the multifaceted treatment, a corollary of this complexity is the indeterminacy of those aspects of the Logo environment responsible for the differences on the metacomponential measures. Similarly, it cannot yet be determined precisely what was learned—specific cognitive skills (only) or knowledge about when to use and how to learn specific metacomponential skills. Fourth, although transfer to problems not encountered within the treatment was achieved, it should not be concluded that metacomponential skills of wide generality were developed. Assessment items were largely logical or mathematical. Considering, however, (a) the accepted difficulty of attaining transfer of problem-solving processes, (b) the strong indications that executive skills are major determinants of success in mathematics problem solving, and (c) the failure of conventional schooling to address these particular skills (Schoenfeld, 1985), a moderate development of skills of (possibly) moderate generality might be considered educationally significant.

References

Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.

Calkins, L. (1986). The art of teaching writing. Portsmouth, NH: Heinemann.

Clements, D. H. (1983/84). Supporting young children's Logo programming. The Computing Teacher, 11(5), 24-30.

Clements, D. H. (1986a). Effects of Logo and CAI environments on cognition and creativity. Journal of Educational Psychology, 78, 309-318.

Clements, D. H. (1986b). Logo and cognition: A theoretical foundation. Computers in Human Behavior, 2, 95-110.

Clements, D. H. (1987). Longitudinal study of the effects of Logo programming on cognitive abilities and achievement. Journal of Educational Computing Research, 3, 73-94.

Clements, D. H., & Gullo, D. F. (1984). Effects of computer programming on young children's cognition. Journal of Educational Psychology, 76, 1051-1058.

Clements, D. H., & Nastasi, B. K. (in press). Dynamic approach to measurement of children's metacomponential functioning. Intelligence.

CTB/McGraw-Hill. (1979). California Achievement Tests, Forms C and D, Level 13. Monterey, CA: Author.

Flavell, J. H. (1981). Cognitive monitoring. In W. P. Dickson (Ed.), Children's oral communication skills (pp. 35-60). New York: Academic Press.

Lehrer, R., & Randle, L. (1987). Problem solving, metacognition and composition: The effects of interactive software for first-grade children. Journal of Educational Computing Research, 3, 409-427.

Markman, E. M. (1981). Comprehension monitoring. In W. P. Dickson (Ed.), Children's oral communication skills (pp. 61-84). New York: Academic Press.

Miller, G. E., & Emihovich, C. (1986). The effects of mediated programming instruction on preschool children's self-monitoring. Journal of Educational Computing Research, 2, 283-297.

Minsky, M. (1986). The society of mind. New York: Simon and Schuster.

Nickerson, R. S., Perkins, D. N., & Smith, E. E. (1985). The teaching of thinking. Hillsdale, NJ: Erlbaum.

Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. New York: Basic Books.

Pea, R. D., & Kurland, D. M. (1984). Logo programming and the development of planning skills (Tech. Rep. No. 16). New York: Bank Street College of Education, Center for Children and Technology.

Pedhazur, E. J. (1982). Multiple regression in behavioral research (2nd ed.). New York: Holt, Rinehart and Winston.

Schoenfeld, A. H. (1985). Metacognitive and epistemological issues in mathematical understanding. In E. A. Silver (Ed.), Teaching and learning mathematical problem solving: Multiple research perspectives (pp. 361-379). Hillsdale, NJ: Erlbaum.

Silvern, S. B., Lang, M. K., McCary, J. C., & Clements, D. H. (1987, April). Logo, teaching strategies, and computer effects on metacognition. Paper presented at the annual meeting of the American Educational Research Association, Washington, DC.

Simon, H. A. (1980). Problem solving and education. In D. T. Tuma & F. Reif (Eds.), Problem solving and education: Issues in teaching and research (pp. 81-96). Hillsdale, NJ: Erlbaum.

Sternberg, R. J. (1985). Beyond IQ: A triarchic theory of human intelligence. Cambridge, MA: Cambridge University Press.

Wagner, R. K., & Sternberg, R. J. (1984). Alternative conceptions of intelligence and their implications for education. Review of Educational Research, 54, 179-223.

Received July 14, 1988
Revision received June 23, 1989

Accepted August 29, 1989