This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution and sharing with colleagues. Other uses, including reproduction and distribution, or selling or licensing copies, or posting to personal, institutional or third party websites are prohibited. In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier’s archiving and manuscript policies are encouraged to visit: http://www.elsevier.com/authorsrights


Author's personal copy

How do teachers promote their students’ lifelong learning in class? Development and first application of the LLL Interview

Julia Klug*, Noreen Krause, Barbara Schober, Monika Finsterwald, Christiane Spiel
University of Vienna, Austria

Highlights

• We developed the LLL Interview.
• It measures teachers’ competence to promote aspects associated with lifelong learning.
• We illustrate possible results in a small first sample.
• We discuss its properties.
• It allows for the identification of aspects that need more attention.

Article info

Article history:
Received 25 March 2013
Received in revised form 13 August 2013
Accepted 18 September 2013

Keywords:
Lifelong learning
Competence
Assessment
Teacher
Interview

Abstract

Lifelong learning’s (LLL) cornerstones are laid at school. Teachers’ behaviour in particular is central. As of yet, there is no instrument to measure teachers’ efforts in promoting aspects associated with LLL within their students. We present a new interview to measure teachers’ competence concerning this matter. To illustrate possible results, we interviewed N = 40 teachers. While teachers did very well in arousing interest in a new topic, they did worse in supporting students while planning. The LLL Interview is a promising new instrument that allows for identifying which aspects related to LLL need more attention.

© 2013 Elsevier Ltd. All rights reserved.

Lifelong learning (LLL) has been a central issue in European education policy since the beginning of the century (Commission of the European Communities, 2000; European Commission, 2001). The rapid alterations in occupational and technical processes that occur in an individual’s environment necessitate permanent adjustments and further qualification (Schober et al., 2007). A central place to teach the necessary prerequisites for these adjustments is the classroom (Spiel et al., 2011), where teachers’ professional competence in particular is central (Guskey, 2002). However, research evidence is disillusioning. The longer students have remained in school, the more commonly motivational problems are diagnosed (e.g. Gottfried, Fleming, & Gottfried, 2001; Pintrich & Schunk, 1996; Spiel, Lüftenegger, Wagner, Schober, & Finsterwald, 2011). That in turn can lead to a negative attitude towards learning in later stages of life (Hargreaves, 2004). Some students, especially female ones, systematically underestimate their abilities (Ziegler & Heller, 1998) and consider their abilities to be stable and not open to influence (Ziegler, Schober, & Dresel, 2005). Consequently, it is not astonishing that interest in school and learning decreases with increasing age (Todt & Schreiber, 1998). Teachers, as the central agents for promoting LLL, consider their influence in fostering students’ LLL to be low (Spiel et al., 2011). Even if teachers do promote students’ LLL, there is no evidence yet as to how they are doing it. Furthermore, to our knowledge there is so far no instrument available for measuring teachers’ competence in promoting students’ LLL. Thus, the aim of this study was to develop an instrument which measures teachers’ competence in promoting LLL, linked to a theoretical model, in an ecologically valid way. “Ecological validity has typically been taken to refer to whether or not one can generalize from observed behaviour in the laboratory to natural behaviour in the world” (Schmuckler, 2001, p. 419), meaning in our context that behaviour recorded by the instrument should reflect behaviour that actually occurs in the natural

* Corresponding author. University of Vienna, Faculty of Psychology, Department of Applied Psychology: Work, Education and Economy, Universitätsstraße 7 (NIG), 1010 Vienna, Austria.

E-mail address: [email protected] (J. Klug).

Contents lists available at ScienceDirect

Teaching and Teacher Education

journal homepage: www.elsevier.com/locate/tate

0742-051X/$ – see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.tate.2013.09.004

Teaching and Teacher Education 37 (2014) 119–129


classroom setting. We present the instrument, its properties and underlying theory. Moreover, to illustrate possible results that can be gained by applying the instrument, we explore in a small first sample how teachers try to promote students’ LLL according to a model and how their efforts rate according to scientific standards.

After providing a theoretical framework for successful LLL and the measurement of teachers’ competences, we explain the development of our instrument and the instrument itself, and report first results for illustrative purposes.

1. Theoretical framing of successful LLL

The term LLL stems from a political rather than an academic discussion. The discourse about LLL began as early as the 1960s. In European countries, demands arose especially to rearrange the educational system: the different segments of education, such as preschool, compulsory school, secondary school, vocational education and further education, were to be linked to a higher degree. This issue was discussed under the catchphrase lifelong learning in the 1970s (Hof, 2009). In 1973, the OECD, focussing on economic aspects of the changing society, concluded that only the expansion of educational offers assures economic growth (Istance, 2003). In the 1990s, LLL was further announced as a goal of educational policy. In 1996, the EU proclaimed the year of lifelong learning (Hake, 1999). In 2000, the European Commission developed the “Lisbon strategy”, aiming at stimulating economic growth by investing in education (Dewe & Weber, 2007). At the same time, the European Commission defined LLL as “all learning activities undertaken throughout life, with the aim of improving knowledge, skills, and competence within a personal, civic, social, and/or employment-related perspective” (European Commission, 2001, p. 9). In Australia, the conception of the EU is seen as a benchmark and is borrowed as a frame of reference for LLL (Kearns, 2005). However, the key finding of a project on future directions for LLL commissioned by Adult Learning Australia (ALA) was that LLL was poorly understood in Australia, and that this acts as a barrier to concerted partnership action by all stakeholders in progressing opportunities for learning throughout life for all Australians, in many contexts (Kearns, 2005). Since UNESCO follows a similar understanding to the EU, the term lifelong learning is used very much alike in different parts of the world: e.g. the Asia-Pacific Regional Forum for Lifelong Learning (2004) writes that LLL builds upon the foundation of universal literacy, including early childhood education, formal schooling, higher education, continuing education and distance education, with the ultimate goal “to build a learning society to provide for today, while planning for tomorrow” (p. 2). Similarly, the Regional Synthesis Report about trends, issues and challenges in youth and adult education in Latin America and the Caribbean (Torres, 2009) describes the local situation concerning LLL as “a strategy for preparing the required human resources for the ‘information society’ and the ‘knowledge-based economy’” (p. 16). Nevertheless, “the lifelong learning terminology appears to be more widespread – and more embedded in recent policies and plans – in English-speaking Caribbean countries than in Latin American ones” (Torres, 2009, p. 17). The author explains this difference in popularity with the fact that LLL has not been properly and consistently translated into Spanish and Portuguese, where the equivalent is a long phrase.

In the sense of preparation for successful LLL, what happens in the classroom needs special attention. The European Commission’s terminology is not suited to describing and explaining the promotion of LLL in the classroom, because of its focus on economic aims and the importance it places on adult education. That is why we need a special approach to LLL from the perspective of educational psychology. In the international educational psychology literature, two determinants of successful LLL are consistently mentioned. These are: (1) motivation for and interest in education (= education motivation), and (2) the competence to successfully apply these in concrete learning situations (Finsterwald, Wagner, Schober, Lüftenegger, & Spiel, 2013; Pintrich & De Groot, 1990; Schober et al., 2007; Weinstein & Hume, 1998). We base our definition of LLL upon these two components. In our approach, LLL means the capacity for learning across a lifetime, which requires education motivation and competence in self-regulated learning. We consider both to be modifiable, e.g. by training, experience and reflection.

Self-regulated learning and motivation show a clear overlap in literature reviews (Lüftenegger et al., 2012). Motivational models are characterized by their process character, just as models of self-regulated learning are (e.g. Gollwitzer, 1996; Heckhausen & Gollwitzer, 1987; Schmitz & Wiese, 2006; Zimmerman, 2000). They usually consist of three phases: (1) initiating and planning a learning action (predecisional and preactional phase = forethought), (2) actual learning (actional phase = performance/volitional control), and (3) assessing the learning action and outcome (postactional phase = self-reflection). SRL models commonly contain motivational aspects in the first phase: students initiate a learning action if they consider it important and if they believe they can be successful (expectancy-value theory, Wigfield & Eccles, 2000). Motivational models contain SRL aspects as well, e.g. planning in the preactional and assessment in the postactional phase. Because motivation and self-regulated learning are both mentioned as the two core components associated with LLL, we based the development of our instrument on a modified action phase model combining aspects of motivation and self-regulation (Schober et al., 2007, modified following Heckhausen & Gollwitzer, 1987; Schmitz & Wiese, 2006). The model is illustrated in Fig. 1. It is a theoretical model to measure outcomes assumed to comprise LLL. The numbers in parentheses refer to the numbers of the corresponding questions in the interview guide, which we explain later on (see Table 1).

The model is of a cyclical nature like the aforementioned models of SRL and motivation, including a deliberating, planning, acting and reflecting phase. In the deliberating phase, expectancy and value (Eccles, 1983) are of special relevance. The learning action will only be initiated if the expectancy of reaching a goal as well as the value of the goal is high enough. A student’s interest in the learning action and his or her goal orientation, the purpose for engaging in achievement behaviour (Ames & Maehr, 1989), serve as indicators of the subjective value of the learning action. The expectancy of a student to reach his or her goal depends on his or her self-efficacy, the subjective belief in being able to reach something (Zimmerman & Kitsantas, 2007), and implicit theories, i.e. theories, e.g. about the reason for an achievement or failure, that lie within the heads of the theorists, i.e. teachers or parents (Sternberg & Davidson, 1986).

The more detailed and precise the planning in the following planning phase, the easier the implementation of the learning action (Rheinberg, 2008). After the planning phase, the challenge is to reach the goal in the acting phase (Achtziger & Gollwitzer, 2007). While acting, i.e. learning, using suitable learning strategies is important.

In the final phase of evaluating/reflecting, the student compares his or her achievement with the initial goals, leading to satisfaction or dissatisfaction (Ziegler, 1999). This comparison of intended and actual outcome influences future learning processes, especially motivational factors like self-efficacy. Important aspects in the evaluating phase are feedback, e.g. from a teacher or parent, attributions, and a frame of reference which is beneficial for a student’s motivation. Attribution refers to how we infer the causes of events or actions, i.e. achievement or failure. After failure, a flexible and variable style of attribution is beneficial (Dresel, 2010), whereas after achievement a stable and internal style is preferable. Frame of reference means the benchmark against which an achievement is evaluated (Heckhausen, 1974). A social frame of reference means, e.g., that the achievement of every student is evaluated in relation to the grade point average, whereas a criterion-related frame of reference refers to a priori defined criteria for achievement (Rheinberg, 2006), as e.g. in standardized tests. An individual frame of reference means referring to former achievements for intra-individual comparison, which is the most beneficial for students’ motivation.

2. Measuring teachers’ competences

In recent years, teachers’ competences have gained the attention of educational psychologists. Models of teachers’ competences have been developed (e.g. Kunter et al., 2011) and various components have been investigated, e.g. teachers’ diagnostic competence (e.g. Klug, Bruder, Kelava, Spiel, & Schmitz, 2013; McElvany, Schroeder, Baumert, & Schnotz, 2012), adaptive teaching competence (e.g. Vogt & Rogalla, 2009), and counselling competence (e.g. Bruder, 2011). However, teachers’ competence with regard to students’ LLL has not been systematically investigated so far. There is an existing study on a training programme to encourage LLL in school (Finsterwald et al., 2013; Schober et al., 2007), but neither a study on how and how well teachers promote students’ LLL nor a systematic approach to measuring teachers’ competence in promoting LLL exists as of yet.

To measure competences, one has to consider their situation and domain specificity as well as their close link to real action (Klieme & Leutner, 2006). That is why questionnaires cannot suffice. Their tendency to produce socially desired answers biases the

Fig. 1. The modified action phase model derived from the areas of motivation and SRL (compare Schober et al., 2007). Note: the numbers in parentheses refer to the numbers of the corresponding questions in the interview guide (see Table 1).

Table 1
The interview guide.

Segment | Duration | Instructions | Materials
Beginning | 3 min | Agreement to be recorded | Questionnaire
Demog. data | 5 min | E.g. age, gender, experience in years, current classes, school membership |
Introduction | 10 min | Definition of lifelong learning and its facets according to the action phase model | Model
Interview guide with reference to the action phase model | 30–60 min | Questions (1)–(10) below, with a self-assessment after each block | a) Interview guide including exemplary examples; b) Recorder; c) Form for teachers’ self-assessment
Closing | 1–10 min | Room for interviewee’s questions, comments and notes | Formula

(1) In the preaction phase, students’ interest is central. Please describe how you aroused your students’ interest, giving a concrete example from one of your lessons.
(2) Please report a concrete example of how you supported your students’ existing interests in class.
(3) In the preaction phase, self-efficacy is also an important component. Please report a concrete example of how you helped your students feel confident in solving tasks and to believe in their own abilities.
Self-assessment for each of the reported examples.
(4) With regard to the action phase model, we will now focus on planning. How do you teach your students how to plan their learning activities? Please give a concrete example.
(5) Next, we will address supporting students while planning their learning activities. Please describe how you supported your students while planning in class.
(6) Referring to the acting phase of the model, the use of learning strategies plays a crucial role. Please give a concrete example of how you managed to encourage your students to use learning strategies.
(7) Self-reflection, as students’ ability to monitor and evaluate their own learning process, is also central. Please describe a concrete example of how you promoted your students’ self-reflection in class.
Self-assessment for each of the reported examples.
(8) In the postaction phase of the model, students evaluate their performance. For their further motivation, it is important which reasons students assume led to their performance. Please describe a concrete example of how you encouraged your students to ascribe their performances to reasons that are beneficial for their further motivation.
(9) Please report a concrete example of how you promoted students’ awareness of their individual learning progress.
(10) Finally, please describe a concrete example of how you gave feedback to a student. What exactly did you say?
Self-assessment for each of the reported examples.



results. What is measured is not a teacher’s actual competence, but whether a teacher regards him- or herself as competent (Fahrenberg, Myrtek, Pawlik, & Perrez, 2007). Hence, competence measurement using questionnaires would not lead to meaningful results. In large-scale assessments, standardized tests are usually used to measure competences. For our purpose, however, a standardized test is not an adequate method, as we are taking the very first step in trying to measure a competence that has not yet been explored. Non-standardized tests and observations of educational processes (e.g. observing teachers in direct interaction with learners) are common ways of assessing competences as well (Koeppen, Hartig, Klieme, & Leutner, 2008). Direct observation, as the most valid way to assess real action, can be regarded as the silver bullet in competence measurement in terms of situation and domain specificity and ecological validity. Nevertheless, direct observation demands high personnel and financial resources (Oser, Salzmann, & Heinzer, 2009). Besides, in many countries, and especially in Austria, there is no culture of classroom observation. Observations can therefore provoke feelings of stress and threat in teachers. Furthermore, there is a reactive effect due to the fact of being observed, which again biases the results. For these reasons, we decided against using observations to measure teachers’ competence in promoting students’ LLL. Another approach to measuring competence is using structured interviews, in which teachers can be asked what they really do in specific situations related to a certain domain. Interviews are cheaper in terms of resources, and teachers can choose themselves which behaviours they want to report. Besides, it is possible to follow up for clarification in interviews, if follow-up questions are allowed. Thus, commitment is higher while threat is lower. However, an interview is also prone to biases. The face-to-face interaction and the interviewer’s gestures and facial expressions “may convey hidden cues as to how to respond to them” (Podsakoff, MacKenzie, Lee, & Podsakoff, 2003, p. 882). Even “interviewer characteristics” (Podsakoff et al., 2003, p. 885), the “time and location of measurement” (Podsakoff et al., 2003, p. 885), as well as the context-induced mood created by the wording of the questions (Peterson, 2000), may affect what and how much an interviewee shares. Thus, interviews could also produce biased or socially desired answers. However, if teachers report examples of their classroom behaviour and the quality of these examples is assessed, they need to have at least some knowledge of what a good example would look like in order to produce a socially desired answer, in contrast to a questionnaire item, which would only ask whether they judge themselves as competent. Besides, one can assume that the more differentiated the example a teacher can describe, the more likely it is that he or she really showed the reported behaviour. After all, to gain first insights into how teachers promote aspects linked to students’ LLL, the advantages outweigh the disadvantages due to potential biases. Consequently, in our study, we chose the structured interview approach to measure teachers’ competence in promoting aspects theoretically linked to LLL.

3. Research goals

The main aim of the present study was to develop and present an instrument which measures teachers’ competence in promoting aspects linked to LLL according to the theoretical model. In this paper, we illustrate the developmental procedure as well as the administration and scoring of the interview.

For illustrative purposes, we additionally apply the instrument in a small first sample and explore how many examples are reported per category, whether teachers have any competence in promoting the facets of LLL according to the theoretical model, and whether there are differences related to gender and age. First, we compare the frequencies of reported examples belonging to different facets of LLL in the model. Second, we follow three steps to explore whether teachers have any competence in promoting the theoretically based facets of LLL:

(a) We analyze our data on the group level: We examine the quality of all examples presented by teachers in the LLL Interview using expert ratings to see if there are systematic differences in the quality of promotion across facets. That tells us which facets teachers are able to promote best and in which they need further training.

(b) We investigate how well teachers themselves are able to assess the quality of their examples: For that purpose, we compare expert ratings of teachers’ examples to their self-assessments. In line with research results on the correlation between student, teacher and observer ratings of the quality of classroom instruction and teaching competence (Hewson, Copeland, & Fishleder, 2001, report correlations ranging from r = .13 to r = .77, and Kunter & Baumert, 2006, report correlations ranging from r = .00 to r = .64), we expect the correlation to be small. A small correlation would underline the high importance of giving teachers feedback on their competence.

(c) Finally, we analyze the data on the level of individual teachers: We analyze the distribution of teachers’ general competence level in promoting LLL linked to the theoretical model and show what competence profiles of individual teachers look like.

Third, concerning differences in the interview results for gender and age, we hypothesize that there is no difference in the general competence level between female and male teachers. Studies on other teacher competences (e.g. diagnostic competence, adaptive teaching competence) have shown that there is no difference in competence values between male and female teachers (McElvany et al., 2012; Vogt & Rogalla, 2009). Additionally, we hypothesize that a teacher’s age has a small negative correlation with his or her general competence level. In contrast to the assumptions of expertise research, studies on the correlation between teachers’ age or years of teaching experience and other teaching competences have often shown a small negative correlation (Bruder, 2011; Klug, 2011; McElvany et al., 2012).

4. Method

4.1. The development of the LLL Interview

The first version of the LLL Interview was developed from a theoretical basis built by the aforementioned modified action phase model and the respective literature (e.g., Dresel, 2010; Heckhausen & Gollwitzer, 1987; Rheinberg, 2008; Schmitz & Wiese, 2006; Wigfield & Eccles, 2000; Zimmerman, 2000). The questions were chosen deductively with respect to the theoretical model. We developed an interview guide to control for effects of the interviewer as far as possible. To assure trustworthiness of the interview content, we consulted experts in the fields of motivation and self-regulated learning to comment on the interview questions with reference to the model. Additionally, in a workshop, teachers who participated in one of our training programmes gave useful hints for improving the interview guide and the chosen language from the perspective of practitioners. The interview guide was tested and optimized in three preliminary studies.

First data from n = 9 Austrian teachers, teaching in four different school types, various subjects and different grades (5–13), with different teaching experience (4–39 years), were collected in the first preliminary study, and suggestions for improving the interview were deduced. In a second preliminary study, the optimized interview was tested with a sample of n = 34 Austrian teachers, of whom 21 had formerly participated in a training programme on LLL (Schober et al., 2007), while 13 had not received the training programme. Again, some weak points were detected, e.g. teachers reported few concrete and differentiated examples. Thus, the introduction and the phrasing when asking for examples were reworked. Additionally, based on a qualitative content analysis of the reported examples, the coding and rating process was specified in discourse with co-workers. In a third preliminary study with n = 13 teachers from one Austrian school, the interview implementation worked out well.

In each study, the examples were identified from the transcribed interviews. The content of the questions represented the deductively developed categories to which the examples needed to be assigned. In qualitative content analyses of the preliminary data, we found, for example, that there were examples of different quality concerning promoting students’ interest, some referring to promoting interest in a new topic and others referring to promoting existing interests. Thus, we inductively refined the categories and modified the interview guide by asking separately about promoting interest in a new topic and promoting existing interests. Similarly, we found qualitatively different examples for the aspect of planning, for which we also refined the category system and interview guide by asking separately about teaching students how to plan and supporting students while planning.

The final version of the LLL Interview contains ten questions related to the model. Three questions cover the deliberating phase: two questions about promoting interest (promoting interest in new topics, promoting students’ existing interests) and one about promoting self-efficacy. Two questions cover the planning phase, with one question about teaching students how to plan and one about supporting students while planning. One question about teaching learning strategies covers the acting phase, and another four questions cover the evaluating phase: one about promoting self-reflection, one about promoting a beneficial style of attribution, one about promoting an individual frame of reference and one about giving constructive feedback.

Two facets of LLL appearing in the model are not covered by a question (implicit theories and goal orientation). On a theoretical basis, we set them aside from the start for reasons of economy, choosing to cover just one aspect each of the expectancy and the value component. Since interest and goal orientation are strongly correlated (e.g. Horvath, Herleman, & McKie, 2006; Hulleman, Durik, Schweigert, & Harackiewicz, 2008) and both are aspects of the value component, we only ask about interest. The same holds for self-efficacy and implicit theories, which are both aspects of the expectancy component. In each case, we decided in favour of the one that is easier to explain to teachers. Another two facets are covered by two questions each (interest and planning), because we found qualitatively different aspects of interest and planning in the qualitative content analysis, as described before. Every other facet of LLL asked for in the interview maps onto the model. Table 2 summarizes how the chosen questions relate to the phases of the model.

4.2. Participants

The sample in which the final version of the LLL Interview was applied for the first time consisted of N = 40 teachers from twelve different schools, teaching classes from 5th up to 12th grade. 53% are female. The average age is 43.9 years (Min = 25, Max = 62). Half of the sample teaches in Austria, the other half in Germany. Cooperation from principals, who informed their teaching staff about the study, and recommendations from teachers we knew helped us compose the sample. Participation was voluntary.

5. Results

5.1. The LLL Interview

We describe the resulting LLL Interview with its interview guide and indicators in the following.

5.1.1. The interview guide

Table 1 shows the final interview guide for the LLL Interview. Before conducting the interview, all participants get a standardized introduction to LLL and its facets according to the modified action phase model (see Fig. 1). The standardized introduction involves information about the background and aims of the study, information about the two components of LLL, namely motivation for education and self-regulated learning, and a detailed description of the model explaining every phase, including each facet of LLL. Each phase is illustrated with an example of a student who engages well or badly in LLL. At the end, interviewees are allowed to ask questions about the model if they have any. To ensure standardization, the introduction is written down and read out loud word for word.

After the introduction, interviewees are presented with ten questions asking for concrete descriptions of examples of how they tried to promote certain facets of LLL according to the model in their own classes. Interviewees are asked to report concrete, detailed and precise examples, including the general set-up of the situation, regarding each question; follow-up questions by the interviewer to reveal things in more detail are not allowed.

The questions are asked in three blocks. After each block of questions, teachers are asked to self-assess the quality of the examples they reported as answers to the questions. They assess the quality of each example using scores from one to six, representing the German and Austrian grading system at school, which means teachers in these countries are very familiar with these scores. One means “very good”, two means “good”, three means “satisfying”, and so on up to six, meaning “insufficient”. The scores were recoded for our analyses so that one stands for “insufficient” and six for “very good”, to facilitate interpretation.

The first two questions in the first block are about arousing students’ interest and supporting students’ existing interests (according to the value component in the model). The third question is about promoting students’ self-efficacy (according to the expectancy component in the model). In the second block, two questions are about teaching students how to plan their learning and supporting students while planning (both according to the planning component in the model). The following questions ask for teaching

Table 2Chosen questions in relation to phases of the model.

Model phase Facet of LLL Question in the interview guide #

Deliberating Interest Promoting new interests 1Promoting existing interest 2

Self-efficacy Promoting self-efficacy 3Planning Planning Teaching how to plan 4

Supporting while planning 5Acting Learning strategies Teaching learning strategies 6Evaluating Self-Reflection Promoting self-reflection 7

Attribution Promoting a beneficial styleof attribution

8

Frame of reference Promoting an individual frameof reference

9

Feedback Giving constructive feedback 10

J. Klug et al. / Teaching and Teacher Education 37 (2014) 119e129 123

Author's personal copy

learning strategies (according to the acting component of themodel) and fostering students’ self-reflection capabilities (accord-ing to the evaluating component of the model). Finally, in the thirdblock, there are three questions about promoting a beneficial styleof attribution, promoting an individual frame of reference andgiving feedback (according to the three respective components inthe model).

The duration of one complete interview varies between 60 and 90 min, depending on how many examples a teacher reports and how differentiated the descriptions are. Teachers' answers are recorded and transcribed afterwards for analysis.

5.1.2. The indicators
What information do we receive from the LLL Interview?

5.1.2.1. The main categories. As a first step of analysis, the examples provided by teachers need to be identified and assigned to the respective facets of LLL. In our preliminary analyses, we developed a coding manual, which describes the facets of LLL according to the model and distinguishes categories to which the identified examples can be assigned.

There are ten main categories, which refer to the action phase model and correspond to the ten questions in the interview guide. The frequencies of examples in the main categories can tell us which facets are more present in teachers' minds than others. The categories are explained in detail in the complete coding manual with reference to the corresponding educational psychological theory and examples of how the several facets in the model could be promoted; e.g. arousing interest in a new topic could be implemented by creating references to everyday life, inviting external partners, using illustrations or practical examples, or working in teams; promoting self-efficacy could be implemented by developing tasks according to the ability level, supporting students' autonomy, using cooperative learning forms, or giving emotional support. We illustrate selecting and assigning an example to a category with excerpts of two transcribed interviews serving as our raw data. The excerpts represent the answers of two different teachers to the first interview question about promoting students' existing interests. Both excerpts can be found in Appendix A.

First of all, those parts of the text that are in fact examples need to be specified. In this first excerpt, it is relatively easy to identify the example. It is clearly and precisely described. However, one has to fix its beginning and its end. The beginning could be "I decided to bring living mice", but this would mean discarding the given information about the subject, grade, first lesson and topic, which is relevant for rating the example's quality later on. Therefore, it is reasonable to set the beginning of the example at the first word. The end of the example could be located after "lots of characteristics of the living were mentioned." The following text does not add anything meaningful to the example of promoting interest in a new topic; it is just explanation and remarks. Hence, many text parts of the transcribed interviews are left out and only meaningful examples are selected in the first step. Once an example is selected, it is assigned to the respective category. This second step is easier, because the question already explicitly asked for an example of a specific category. The example's quality is finally rated in a third step after all examples are identified and assigned to their respective categories. One answer to an interview question can contain two or more examples, which need to be identified and assigned separately. Similarly, an answer can contain no example at all. Excerpt 2 (see Appendix A) shows a case in which one has to decide whether an example can be identified at all.

The answer basically consists of general talk on a meta-level with no concrete, meaningful example of how this teacher promoted his or her students' interest in a specific lesson. Of course, he or she mentions creating suspense as a general strategy to promote interest. In the coding process, one has to decide whether this general strategy can serve as an example or whether it is no example at all. We recommend identifying "creating suspense" as an example in the first place so that it enters the quality rating, where it will not score very high anyway.

5.1.2.2. The expert quality rating. To obtain information about the examples' quality, experts score every selected example on two dimensions. The first is theory consistency, meaning that the example a teacher gave is in line with the educational psychological theory underlying the facet of the theoretical model. The second dimension is differentiation of the reported example. Differentiation refers to the concreteness of the narrated example and the possibility of replicating it. As already mentioned, one can assume that the more differentiated a teacher's description of an example is, the more likely it is that he or she really showed the reported behaviour.

Both dimensions are rated from zero to two points, with two being the best possible score for both theory consistency and differentiation. A zero rating says that an example is not in line with educational psychological theory at all or that it is not described in detail at all, respectively. A rating of one point says that an example contains certain aspects of the educational psychological theory asked for, or is described in some detail, but not enough to allow for replication of the behaviour. The best rating of two points stands for an example that is totally in line with educational psychological theory or described in enough detail to replicate the behaviour without any problems, respectively. If rated by more than one rater, the quality values can be averaged over raters to obtain more reliable assessments. The two quality dimensions are rated separately in consecutive steps. The correlation between theory consistency and differentiation is r = .58 (p < .01). Thus, we combined them into an overall quality value for each example, built as the sum score of theory consistency and differentiation. Consequently, it ranges from zero to four, with four denoting the highest quality for each example. To add weight to theory consistency, and to avoid that an example gets a rating of two just because it is explained in enough detail even if it has low relevance to the construct, theory consistency is rated first. Only if the score in theory consistency is higher than zero is the differentiation of the example rated additionally. If an example scores zero in theory consistency, it is rated with a total of zero in any case.
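The scoring rule described above can be summarized in a short sketch, assuming integer ratings of 0-2 on each dimension (the function name and signature are ours):

```python
def example_quality(theory_consistency: int, differentiation: int) -> int:
    """Overall quality (0-4) of one reported example: the sum of both
    ratings, except that an example scoring zero on theory consistency
    receives an overall zero, however differentiated its description is."""
    for rating in (theory_consistency, differentiation):
        if rating not in (0, 1, 2):
            raise ValueError("each dimension is rated 0, 1 or 2")
    if theory_consistency == 0:
        return 0
    return theory_consistency + differentiation
```

The early return mirrors the rating procedure: differentiation only matters once an example has shown at least some relevance to the construct.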

The example of the biology lesson we presented before would receive a high score for both theory consistency (2) and differentiation (2), resulting in the highest possible value of four. The second presented example, on creating suspense in mathematics and physics lessons, would receive two points at most for theory consistency, but zero points for differentiation, because it was not even a real example but rather a general strategy. Hence, its sum score would be two points. To illustrate the quality rating, we present another two examples belonging to other categories in the theoretical model, one with a high (value 4) and one with a low quality rating (value 0):

Example 1, category promoting students' self-efficacy, high quality rating (4): "It is an example from my English classes, grades 5 and 6: As homework, for example, instead of completing a task in the work book, I copy it on slides and distribute them to students to solve the task on their slide at home. Then I ask who is willing to be 'student teacher' the next day. The student teacher will be in charge the next day. He or she moderates the presentation of the homework and compares the solutions. This is a lot of fun for the students. They feel important […] and they do not react too negatively if they make mistakes. […] I do not necessarily choose the best students and that is ok, because the students correct each other in a friendly way." This example is theoretically consistent and differentiated. Due to being in charge and experiencing competence, students' self-efficacy can in fact be promoted. Moreover, the example is reported in enough detail to be replicated.

Example 2, category promoting students' self-reflection, low quality rating (0): "Impulses from my side are that I want them to be team players. The members of a group observe each other of their own volition. That is in a sense self-regulation. They should know that they have to produce something substantial." The example is neither theoretically consistent nor differentiated, as it addresses regulation by others rather than self-reflection. Moreover, the teacher does not support self-reflection; he or she assumes that reflection occurs by itself. Even if it were reported in more detail, it could never score more than zero points, because of the zero rating in theory consistency.

5.1.2.3. The general competence level. As a measure of teachers' general competence in promoting the aspects related to LLL, the sum score of teachers' example qualities is calculated. Because teachers have the opportunity to report more than one example per category, the best example a teacher reported for each of the ten main categories is chosen. Thus, the overall competence score can range from zero to 40 points, with 40 points being the highest possible competence value.
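Computing the general competence level from the rated examples can be sketched as follows (the data layout and function name are our assumptions, not from the article):

```python
def general_competence(examples_by_category: dict) -> int:
    """Sum, over the ten main categories, of the best example quality
    (0-4) a teacher achieved in that category; a category without any
    reported example contributes zero. The result ranges from 0 to 40."""
    return sum(max(qualities, default=0)
               for qualities in examples_by_category.values())
```

For instance, a (hypothetical) teacher with qualities [2, 4] in one category, no example in a second and [3] in a third would score 4 + 0 + 3 = 7 over those three categories.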

5.1.2.4. The competence profile. The competence profile illustrates how well the teacher performed in each of the main categories. To illustrate what a competence profile looks like, Fig. 2 shows the competence profiles for the individual teachers with the highest and lowest general competence levels in our study and one with a median competence level.

5.1.2.5. The self-assessment. In the LLL Interview, teachers are asked to self-assess their examples. As described in the interview guide section, the self-assessment ranges from one to six.

5.1.2.6. The comparison between self- and expert rating. Teachers' self-assessments can be compared to the experts' quality rating. This comparison provides information about teachers' knowledge of how to promote aspects linked to LLL most effectively according to the theoretical model. If teachers know what a good example looks like, they can judge the quality of their own examples as the experts do. If they do not know how to promote a specific facet most effectively, they may rate their own examples highly, simply because they do not know what a good example would look like. In this case, their assessments would show no correlation with the experts' ratings.
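The comparison itself amounts to a Pearson correlation between the paired self-assessments and expert ratings. A plain-Python sketch (the function name and the example data are ours):

```python
def pearson_r(xs: list, ys: list) -> float:
    """Pearson product-moment correlation between two paired score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sxy / (sx * sy)
```

A correlation near zero between the two score lists would indicate, as argued above, that teachers' judgements do not track the experts' quality criteria.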

5.2. Results of the final interview’s first application

To illustrate possible results of the LLL Interview, we report the results of the LLL Interview's first application.

5.2.1. How many examples are reported per category?
In sum, the 40 teachers reported 465 examples of how they promoted different facets of LLL according to the theoretical model. Table 3 shows the distribution of examples over the ten main categories. Most examples are reported for the categories arouse students' interest in a new topic and support students while they are planning their learning; the fewest are reported for giving feedback and fostering a beneficial style of attribution. Table 3 also shows how many teachers reported at least one example for each category. Every single teacher reports at least one example for the category support students while they are planning their learning, whereas 15% report no example for the category giving feedback. Generally, most teachers report at least one example for each category.

5.2.2. How competent are teachers?
Four independent raters (one expert in the field, one project assistant from outside the field and two psychology students to whom the theoretical background had been explained beforehand) rated the examples. 30% of the examples were rated by at least two of the four raters. We calculated intra-class coefficients (ICCs) as a measure of inter-rater reliability and found intra-class correlations ranging from ICC = .67 (p < .01) to a perfect match of ICC = 1 (p < .01). The mean intra-class correlation is ICC = .89. The example quality ranges from Min = 0 to Max = 4 with a mean quality of M = 2.39 (SD = 1.23). Table 3 shows the mean quality ratings and their standard deviations for the ten main categories, sorted from highest to lowest quality. There is a statistically significant difference in example quality depending on category (F = 3.18, df = 9, p < .01).

Fig. 2. Competence profiles for the individual teachers with the highest (teacher 6) and lowest (teacher 24) general competence levels (38 and 5 out of 40 possible points) and one with a competence level meeting the median (teacher 7, 25 points).


Examples in the categories arousing new interests, promoting existing interests and teaching learning strategies show the highest quality ratings, whereas examples in the categories promoting self-reflection, teaching planning and supporting students while planning show the lowest quality ratings.

Teachers' self-assessments of their examples and the expert ratings show a small correlation of r = .21 (p < .01). Table 3 shows the correlations between teachers' self-assessments of their examples and the expert ratings for each category. We find just one statistically significant correlation, for the examples in the category promoting self-reflection.

On the individual level, the general competence in promoting students' LLL linked to the theoretical model ranges from Min = 5 to Max = 38 points with a mean value of M = 24.53 (SD = 8.53) and a median of 25 points.

To illustrate what individual profiles look like, we singled out the profiles of the individual teachers with the highest (Teacher 6, 38 points) and lowest (Teacher 24, 5 points) general competence levels and one with a competence level meeting the median (Teacher 7, 25 points). Fig. 2 illustrates these profiles. The competence profiles show which facets according to the model a teacher is already good at and in which he or she would benefit from further training.

5.2.3. Are there differences in teachers' general competence related to gender and age?

To investigate whether there are differences concerning gender, a t-test for independent measures is used. The mean general competence in promoting students' LLL linked to the theoretical model is M = 25.29 (SD = 8.30) for the female teachers and M = 23.68 (SD = 8.23) for the male teachers. The t-test for independent measures shows no statistically significant difference (t = .588, df = 38, p > .05).

To investigate age differences, a Pearson correlation of teachers' age and general competence in promoting students' LLL linked to the theoretical model is calculated. We find a small negative correlation, which shows no statistical significance (r = −.24, p > .05).

6. Discussion

Our main aim was to develop and present an instrument which measures teachers' competence in promoting students' LLL according to a theoretical model. For illustration purposes, we additionally applied the instrument in a small first sample to explore how many examples are reported per category, whether teachers have any competence for promoting the facets of LLL according to the model and whether there are differences based on gender and age.

As to the psychometric properties of an interview, various authors (e.g. Greenwood & Mcconnell, 2011; Kuzmanic, 2009; Merrick, 1999) suggest the application of alternative criteria in the literature on qualitative methods. "The debate about truth and validity becomes especially complex in types of research where the impact of the researcher or the observer, the situation and other variables is very evasive and context dependent. Particularly interesting in this sense are interviews" (Kuzmanic, 2009, p. 43). It is suggested that validity should be addressed throughout the entire research process (e.g. Flick, 2002; Kvale, 1996). Two alternative criteria that matter throughout the whole process as well as in the presentation to the reader are "trustworthiness" and "reflexivity", as alternatives to classic validity and objectivity concepts (Merrick, 1999). Kuzmanic (2009) concludes: "validity, if it ought to retain the same name in qualitative research, refers to all steps of a research process separately [and] is constructed and reconstructed through the researcher's engagement" (p. 49) and "rather than eliminating the effects of the interviewer in a qualitative interview setting, we should try to control them" (p. 48).

In this sense, we tried to be reflexive throughout the whole development and testing process, from mapping the questions to the model, to including feedback from teachers in the preliminary analyses, reading out a standardized introduction, and developing differentiated rules for scoring. By illustrating this process, we tried to build trustworthiness in our instrument. However, we want to discuss certain aspects of classic quality criteria, and the potential biases the interview is prone to, in a little more depth.

Especially important to us was to create a measure that is ecologically valid, meaning that behaviour recorded in the interview reflects behaviour that actually occurs in the natural classroom setting. We tried to establish ecological validity by asking for examples of how teachers actually taught aspects linked to LLL in their classrooms. However, the examples we obtain cannot equal the perfect truth and reveal less real behaviour than direct classroom observation would. We cannot be sure whether some answers are biased by interviewer characteristics and expressions (Podsakoff et al., 2003). At least the wording was standardized (e.g. the introduction and the questions in the interview guide) and no follow-up questions were allowed, to counteract some sources of bias. It is possible that teachers may have stretched the truth about what they do to look better. However, if the examples were of high quality, this would merely imply that the teacher's competence, or at least knowledge, is high enough to produce such an example. Thus, he or she would probably be able to show the behaviour even if he or she has not shown it yet. Additionally, we assume that examples which really happened in class are easier to report in detail. By evaluating not only the examples' theory consistency but also their differentiation, we tried to avoid that examples which have not really been applied in class score the highest value. Having said that, using differentiation as a proxy for being real bears some critical issues. One could assume that teachers who are used to rehearsing and reporting concrete practices within their schools would probably also share more concretely with a researcher in an interview. Personality also matters when it comes to differentiation: some teachers are prone to few words whereas others talk a lot. We could not control for personality or school norms, and we could not avoid that these aspects matter in the interviewees' narration of examples. Nevertheless, one measure to keep those effects of "talking a lot vs. being tight-lipped" small is the coding process. When analyzing the transcribed interviews, the examples are first picked out and assigned to the categories; only in the second step is their quality rated. Picking out the examples means that every word that does not exactly belong to an example, like talking on a meta-level, does not enter the coding process.

Table 3
Example frequencies, qualities and correlation with self-assessments per main category, sorted by quality.

Category             Examples    Teachers(a)   M quality   SD quality   r(b)
Arouse interest      55 (12%)    39 (98%)      2.90        1.00         .01
Promote interest     43 (9%)     37 (93%)      2.62        1.24         .16
Learning strategies  48 (10%)    38 (95%)      2.58        1.13         .06
Feedback             37 (8%)     34 (85%)      2.58        1.07         .33
Self-efficacy        45 (10%)    39 (98%)      2.57        1.29         .09
Frame of reference   48 (10%)    39 (98%)      2.38        1.30         .05
Attribution          40 (9%)     37 (93%)      2.33        1.13         .02
Self-reflection      43 (9%)     35 (88%)      2.09        1.37         .51**
Teach planning       51 (11%)    37 (93%)      1.98        1.48         .18
Support planning     55 (12%)    40 (100%)     1.93        1.39         .15

a Teachers who reported at least one example for the category.
b Correlation between teachers' self-assessments of their examples and the expert rating.

To enhance trustworthiness and approximate a kind of construct and content validity, we mapped the interview questions to the model content, and experts in the field of SRL and motivation who know the model's content were consulted to comment on the questions.

To control for effects of the interviewer and to enhance comparability, we developed, tested and optimized the interview guide with its standardized introduction, prescribed questions and the rule that no follow-up questions were allowed. However, we cannot preclude that the interview is prone to some biases, like the aforementioned interviewer characteristics or expressions. Additionally, one has to bear in mind that, due to the standardized introduction, the data is shaped in a way that teachers may be able to report more examples than they would without a differentiated introduction. However, we decided to add the introduction to the interview guide to ensure a common understanding of the facets associated with LLL according to the theoretical model. Concerning follow-up questions, we decided to prohibit them because we did not want to push teachers towards desirable answers, or towards looking competent, above and beyond giving the introduction, although it is possible that some teachers could have reported examples in more detail in terms of differentiation had follow-up questions been allowed.

We also implemented some measures to control for effects of interpretation. The coding manual, as a guide for assigning the examples to the categories and for rating their quality as to differentiation and theory consistency, delivers rules for scoring. To support trustworthiness in our scoring procedure, we calculated inter-rater reliability, meaning the degree to which two or more raters are able to differentiate among subjects or objects under similar assessment conditions (Kottner et al., 2011). Ratings should be independent of the person who gave them (cf. Wirtz & Caspar, 2002) and should reflect the real values, referring to classical test theory. With a mean ICC = .89, the intra-class correlations of our raters are quite acceptable if one refers to common cut-off values, which are controversially discussed (e.g. Kottner et al., 2011). According to Kottner et al. (2011), "it depends on the purpose and consequences of test results, scores, or diagnostic results regarding how much error will be allowed […] Values of .60, .70, or .80 are often used as the minimum standards […] only sufficient for group-level comparisons or research purposes" (p. 103). Even though only research purposes are pursued at the moment, with ICC = .89 there is still some room for disagreement.

Despite the mentioned weaknesses and its proneness to some biases, the LLL Interview seems to be a helpful tool for measuring teachers' competence in promoting aspects theoretically associated with LLL, which helps to identify teachers' strengths and needs. Resulting from the combined deductive and inductive analysis of the examples, there are ten main categories of LLL in the interview.

Differences in frequency tell us which categories teachers are already familiar with and have more explicitly in mind than others. Such results can give us hints as to which aspects need more attention in teacher education and further education so that they will be anchored more explicitly in teachers' minds. However, a high frequency of reported examples does not necessarily mean that the corresponding LLL facets in the model are taught often or with high quality in classroom practice. Consequently, we need to look at the example quality for each category, which does not correspond in every case with the frequency of examples. The LLL Interview provides us with just these quality indicators in terms of the general competence level and the competence profiles. The competence profiles show us teachers' needs and strengths and give us information on the necessity of training programmes focussing on the specific aspects where the need is highest. Training programmes can be developed in an adaptive way when the participating teachers' competence profiles are known. Furthermore, teachers can benefit from getting feedback on their competence profiles. The comparison of teachers' self-assessments and expert ratings provides us with information about teachers' knowledge and understanding of specific facets. Again, feedback can help to adjust the views. However, German and Austrian school grades from one to six are used for teachers' self-assessment, so teachers in Germany and Austria are very familiar with this scoring. If the self-assessment is used in countries with different grading systems, it should be adapted to ensure that teachers are familiar with the scoring.

Concerning the results of the LLL Interview's first application, it is important to bear in mind that these results just serve as a first impression for illustration purposes. We found differences in frequency and quality of the reported examples depending on the category. For example, the facet arousing interest in a new topic is reported often relative to other facets and with high quality examples, whereas supporting students while planning is promoted just as often, but with low quality. Such results give us insight into where further support for teachers is needed.

As hypothesized, we found only a small statistically significant correlation between teachers' self-assessments and the expert ratings. That is in line with other research results, e.g. on the correlation between student, teacher and observer ratings of classroom instruction quality and teaching competences (Hewson et al., 2001; Kunter & Baumert, 2006). To test whether teachers are more aware of quality in some areas than in others, we looked at the correlations between teachers' self-assessments and the expert ratings per category. There is just one statistically significant correlation, for the category promoting self-reflection, whereas every other correlation did not reach statistical significance and is close to zero, apart from a small non-significant correlation for giving feedback. Obviously, especially concerning promoting self-reflection, teachers are aware of their examples' rather low quality in this particular category. The zero correlations for the other categories can be a hint that teachers have little knowledge about how to promote specific facets in the model. Maybe they rate their examples highly because they simply do not know what a good example would look like. Likewise, teachers could have a more open and fluid understanding of lifelong learning, and of what is involved in promoting it in the classroom, which does not correspond to the educational psychological understanding in the theoretical model. Looking at absolute values, teachers' evaluation of their own examples is slightly higher than the experts' judgement, again indicating little knowledge, a different understanding of LLL, or a self-serving bias in the self-assessment. The zero correlations emphasize the high importance of training teachers and giving teachers feedback on their competences so that their own judgement can be adapted to fit experts' views.


Concerning possible differences in the general competence level depending on gender, we hypothesized that there is no gender effect, meaning that the interview procedure is fair for female and male teachers. In line with research results on other teacher competences, we found no statistically significant difference between the competence scores of female and male teachers (McElvany et al., 2012; Vogt & Rogalla, 2009). In mean values, female teachers have a very slight advantage, which could reach statistical significance if the sample size were increased sharply.

Referring to the hypothesized small negative correlation between teachers' ages and their overall competence scores, we found a correlation of the expected size in our sample. However, the correlation does not reach statistical significance; again, if the sample size were expanded, this correlation could reach statistical significance. For testing a small correlative effect, Bortz (2005) suggests a sample size of at least 150. Thus, our research on teachers' competence linked to the theoretical model is in line with research on other teacher competences, which also shows small negative correlations with age (Bruder, 2011; Klug, 2011; McElvany et al., 2012), even if they would not be expected from the perspective of expertise research. The small negative correlation tells us that the younger the teacher, the better his or her general competence in promoting aspects theoretically associated with LLL, and vice versa. This effect could result from already implemented improvements in teacher education concerning LLL, which might have led to better performance among the younger teachers. It could as well be due to motivational effects during the interview, meaning that the younger teachers were more highly motivated to report differentiated examples. However, there is a need to foster teachers' competence in promoting aspects theoretically associated with LLL, especially in further education.

The biggest limitation of this first application of the LLL Interview clearly stems from the small voluntary sample. We chose the schools because we had easy access to their headmasters and teachers. The interview has so far only been used with Austrian and German teachers, because the current version of the interview guide exists only in German; an English translation is still outstanding. At least, we chose two different countries and various schools and grades to show that the interview can be administered in various settings. Nevertheless, the present sample gives a first impression of how well teachers promote aspects theoretically associated with LLL. But first and foremost, it shows how the LLL Interview can be applied and what kinds of information it provides us with.

Another limitation concerns the administration of the LLL Interview. However useful the information we gather with the LLL Interview is, conducting and analyzing such interviews requires a great effort in time and resources. Besides, the scoring and interpretation of the data requires knowledge of the underlying theory. Hence, the criterion of economy cannot be fulfilled. In future research, especially the economical aspect should be taken into consideration. To evaluate changes due to an intervention or to do representative research with big sample sizes, there is still a need to develop an instrument which measures the same competence in a more efficient way. The examples reported in the LLL Interview can be used when considering the development of such a more efficient instrument. Additionally, the reported examples can be used for further qualitative analyses as well as for training programmes where other teachers can learn from analyzing these examples. Another interesting consideration for future research would be to include students' perspectives on their teachers' competence in addition to teachers' self-assessment and the experts' rating. This would be similar to Roth, Kanat-Maymon, and Bibi (2011), who assessed students' perspectives on their teachers' ability to provide support.

In sum, the present study offers a promising approach to measuring teachers' competence in promoting aspects theoretically associated with LLL. It can be used across all school subjects and grades and has the potential to be used in schools' self-evaluations. The resulting competence profiles provide helpful information on which facets of the LLL model need more attention in teacher education and further education. Similarly, on the individual level, teachers' own strengths and weaknesses can be detected.

Acknowledgements

This research was financially funded by the Jubilee Fund of the Austrian National Bank (12875).

Appendix A. Supplementary data

Supplementary data related to this article can be found at http://dx.doi.org/10.1016/j.tate.2013.09.004.

References

Achtziger, A., & Gollwitzer, P. M. (2007). Motivation und Volition im Handlungsverlauf [Motivation and volition in the course of action]. In J. Heckhausen, & H. Heckhausen (Eds.), Motivation und Handeln [Motivation and action] (3rd ed.). (pp. 277–302). Heidelberg: Springer.

Ames, C., & Maehr, M. (Eds.). (1989). Advances in motivation and achievement. Greenwich, CT: JAI.

Asia-Pacific Regional Forum for Lifelong Learning. (2004). Lifelong learning in Asia and the Pacific region. Bangkok: UNESCO.

Bortz, J. (2005). Statistik für Human- und Sozialwissenschaftler [Statistics for human and social researchers] (6th ed.). Berlin: Springer.

Bruder, S. (2011). Lernberatung in der Schule. Ein zentraler Bereich professionellen Lehrerhandelns [Advising learning at school. A central element of teachers' professional action] (Doctoral dissertation). Retrieved from http://tuprints.ulb.tu-darmstadt.de/2432/1/Dissertation_Bruder_Lernberatung_070311.pdf.

Commission of the European Communities. (2000). Memorandum on lifelong learning. Brussels: Commission of the European Communities.

Dewe, B., & Weber, P. J. (2007). Wissensgesellschaft und Lebenslanges Lernen: Eine Einführung in bildungspolitische Konzeptionen der EU [Knowledge society and lifelong learning: An introduction to educational political conceptions of the EU]. Bad Heilbrunn: Julius Klinkhardt.

Dresel, M. (2010). Förderung der Lernmotivation mit attributionalem Feedback [Promoting learning motivation with attributional feedback]. In C. Spiel, B. Schober, P. Wagner, & R. Reimann (Eds.), Bildungspsychologie [Bildungspsychology] (pp. 130–135). Göttingen: Hogrefe.

Eccles, J. S. (1983). Expectancies, values and academic behaviors. In J. T. Spence (Ed.), Achievement and achievement motives: Psychological and sociological approaches (pp. 75–146). San Francisco: Freeman.

European Commission. (2001). Making a European area of lifelong learning a reality. Brussels: Commission of the European Communities.

Fahrenberg, J., Myrtek, M., Pawlik, K., & Perrez, M. (2007). Ambulatory assessment – monitoring behavior in daily life settings. European Journal of Psychological Assessment, 23(4), 206–213.

Finsterwald, M., Wagner, P., Schober, B., Lüftenegger, M., & Spiel, C. (2013). Fostering lifelong learning – evaluation of a training teacher education program for professional teachers. Teaching and Teacher Education, 29, 144–155.

Flick, U. (2002). An introduction to qualitative research. London: Sage.

Gollwitzer, P. M. (1996). The volitional benefits of planning. In P. M. Gollwitzer, & J. A. Bargh (Eds.), The psychology of action. Linking cognition and motivation to behavior (pp. 287–312). New York: Guilford Press.

Gottfried, A., Fleming, J., & Gottfried, A. (2001). Continuity of academic intrinsic motivation from childhood through late adolescence: a longitudinal study. Journal of Educational Psychology, 93, 3–13.

Greenwood, C. R., & McConnell, S. R. (2011). Guidelines for manuscripts describing the development and testing of an assessment instrument or measure. Journal of Early Intervention, 33(3), 171–185.

Guskey, T. R. (2002). Professional development and teacher change. Teachers and Teaching, 8(3), 381–391.

Hake, B. J. (1999). Lifelong learning policies in the European Union: developments and issues. Compare: A Journal of Comparative and International Education, 29(1), 53–69.

Hargreaves, D. H. (2004). Learning for life: The foundations for lifelong learning. Bristol: The Policy Press.

Heckhausen, H. (1974). Leistung und Chancengleichheit [Achievement and equal opportunities]. Verlag für Psychologie, Hogrefe.

J. Klug et al. / Teaching and Teacher Education 37 (2014) 119–129


Heckhausen, H., & Gollwitzer, P. M. (1987). Thought contents and cognitive functioning in motivational versus volitional states of mind. Motivation and Emotion, 11, 101–120.

Hewson, M. G., Copeland, H. L., & Fishleder, A. J. (2001). What's the use of faculty development? Program evaluation using retrospective self-assessments and independent performance ratings. Teaching and Learning in Medicine: An International Journal, 13(3), 153–160.

Hof, C. (2009). Lebenslanges Lernen: Eine Einführung [Lifelong learning: An introduction]. Stuttgart: W. Kohlhammer Verlag.

Horvath, M., Herleman, H. A., & McKie, R. L. (2006). Goal orientation, task difficulty, and task interest: a multilevel analysis. Motivation and Emotion, 30, 171–187.

Hulleman, C. S., Durik, A. M., Schweigert, S. A., & Harackiewicz, J. M. (2008). Task values, achievement goals, and interest: an integrative analysis. Journal of Educational Psychology, 100(2), 398–416.

Istance, D. (2003). Schooling and lifelong learning: insights from OECD analyses. European Journal of Education, 38(1), 85–98.

Kearns, P. (2005). Achieving Australia as an inclusive learning society. A report on future directions for lifelong learning in Australia. Canberra City: Adult Learning Australia Inc.

Klieme, E., & Leutner, D. (2006). Kompetenzmodelle zur Erfassung individueller Lernergebnisse und zur Bilanzierung von Bildungsprozessen. Beschreibung eines neu eingerichteten Schwerpunktprogramms der DFG [Competence models for measuring individual learning outcomes and balancing educational processes. Description of a new priority programme of the DFG]. Zeitschrift für Pädagogik, 52, 876–903.

Klug, J. (2011). Modeling and training a new concept of teachers' diagnostic competence. Dissertation, TU Darmstadt.

Klug, J., Bruder, S., Kelava, A., Spiel, C., & Schmitz, B. (2013). Diagnostic competence of teachers: a process model that accounts for diagnosing learning behavior tested by means of a case scenario. Teaching and Teacher Education, 30, 38–46.

Koeppen, K., Hartig, J., Klieme, E., & Leutner, D. (2008). Current issues in competence modeling and assessment. Journal of Psychology, 216(2), 61–73.

Kottner, J., Audige, L., Brorson, S., Donner, A., Gajewski, B. J., Hrobjartsson, A., et al. (2011). Guidelines for reporting reliability and agreement studies (GRRAS). Journal of Clinical Epidemiology, 64, 96–106.

Kunter, M., & Baumert, J. (2006). Who is the expert? Construct and criteria validity of student and teacher ratings of instruction. Learning Environments Research, 9, 231–251.

Kunter, M., Baumert, J., Blum, W., Klusmann, U., Krauss, S., & Neubrand, M. (2011). Professionelle Kompetenz von Lehrkräften – Ergebnisse des Forschungsprogramms COACTIV [Professional competence of teachers – Results of the research programme COACTIV]. Münster: Waxmann.

Kuzmanic, M. (2009). Appearance of truth through dialogue. Horizons of Psychology, 18(2), 39–50.

Kvale, S. (1996). InterViews: An introduction to qualitative research interviewing. London: Sage.

Lüftenegger, M., Schober, B., Van de Schoot, R., Wagner, P., Finsterwald, M., & Spiel, C. (2012). Lifelong learning as a goal – do autonomy and self-regulation in school result in well prepared pupils? Learning and Instruction, 22, 27–36.

McElvany, N., Schroeder, S., Baumert, J., & Schnotz, W. (2012). Cognitively demanding learning materials with texts and instructional pictures: teachers' diagnostic skills, pedagogical beliefs and motivation. European Journal of Psychology of Education, 27, 403–420.

Merrick, E. (1999). An exploration of quality in qualitative research: are 'reliability' and 'validity' relevant? In M. Kopala, & L. A. Suzuki (Eds.), Using qualitative methods in psychology (pp. 25–36). Thousand Oaks: Sage.

Oser, F., Salzmann, P., & Heinzer, S. (2009). Measuring the competence-quality of vocational teachers: an advocatory approach. Empirical Research in Vocational Education and Training, 1(1), 65–83.

Peterson, R. A. (2000). Constructing effective questionnaires. Thousand Oaks, CA: Sage.

Pintrich, P. R., & De Groot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82, 33–40.

Pintrich, P. R., & Schunk, D. H. (1996). Motivation in education: Theory, research, and applications. Ohio: Merrill.

Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: a critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903.

Rheinberg, F. (2006). Bezugsnormorientierung [Frames of reference]. In Handwörterbuch Pädagogische Psychologie [Concise dictionary educational psychology] (3rd ed.). (pp. 55–62). Weinheim/Basel/Berlin: Beltz PVU.

Rheinberg, F. (2008). Motivation (7th ed.). Stuttgart: Kohlhammer.

Roth, G., Kanat-Maymon, Y., & Bibi, U. (2011). Prevention of school bullying: the important role of autonomy-supportive teaching and internalization of pro-social values. British Journal of Educational Psychology, 81, 654–666.

Schmitz, B., & Wiese, B. (2006). New perspectives for the evaluation of training sessions in self-regulated learning: time-series analyses of diary data. Contemporary Educational Psychology, 31, 64–96.

Schmuckler, M. A. (2001). What is ecological validity? A dimensional analysis. Infancy, 2(4), 419–436.

Schober, B., Finsterwald, M., Wagner, P., Lüftenegger, M., Aysner, M., & Spiel, C. (2007). TALK – a training program to encourage lifelong learning in school. Zeitschrift für Psychologie, 215, 183–193.

Spiel, C., Lüftenegger, M., Wagner, P., Schober, B., & Finsterwald, M. (2011). Förderung von Lebenslangem Lernen – eine Aufgabe der Schule [Promoting lifelong learning – a task of schools]. In O. Zlatkin-Troitschanskaia (Ed.), Stationen Empirischer Bildungsforschung: Traditionslinien und Perspektiven [Stations of empirical educational research: Traditions and perspectives] (1st ed.). (pp. 305–319). Wiesbaden: Verlag für Sozialwissenschaften.

Sternberg, R. J., & Davidson, J. E. (Eds.). (1986). Conceptions of giftedness. Cambridge: Cambridge University Press.

Todt, E., & Schreiber, S. (1998). Development of interests. In L. Hoffmann, A. Krapp, K. Renninger, & J. Baumert (Eds.), Interest and learning (pp. 25–40).

Torres, R. M. (2009). From literacy to lifelong learning: Trends, issues and challenges in youth and adult education in Latin America and the Caribbean. Regional synthesis report. Hamburg: UNESCO.

Vogt, F., & Rogalla, M. (2009). Developing adaptive teaching competency through coaching. Teaching and Teacher Education, 25(8), 1051–1060.

Weinstein, C. E., & Hume, L. M. (1998). Study strategies for lifelong learning. Washington: American Psychological Association.

Wigfield, A., & Eccles, J. S. (2000). Expectancy-value theory of achievement motivation. Contemporary Educational Psychology, 25, 68–81.

Wirtz, M., & Caspar, F. (2002). Beurteilerübereinstimmung und Beurteilerreliabilität [Inter-rater agreement and inter-rater reliability]. Göttingen: Hogrefe.

Ziegler, A. (1999). Motivation. In C. Perleth, & A. Ziegler (Eds.), Pädagogische Psychologie: Grundlagen und Anwendungsfelder [Educational psychology: Basics and fields of application] (pp. 109–119). Bern: Hans Huber.

Ziegler, A., & Heller, K. A. (1998). Motivationsförderung mit Hilfe eines Reattributionstrainings [Fostering motivation by means of reattributional trainings]. Psychologie in Erziehung und Unterricht, 45, 216–230.

Ziegler, A., Schober, B., & Dresel, M. (2005). Primary school students' implicit theories of intelligence and maladaptive behavioral patterns. Education Science and Psychology, 6, 76–86.

Zimmerman, B. J. (2000). Attaining self-regulation: a social cognitive perspective. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 13–35). San Diego, CA: Academic Press.

Zimmerman, B., & Kitsantas, A. (2007). Reliability and validity of self-efficacy for learning form (SELF) scores of college students. Journal of Psychology, 215(3), 157–163.
