
Fergal Treanor

Structured Observation – an assessment of methodological strengths and weaknesses from two epistemological perspectives

Submitted April 2010 as part of M.A.Ed at the Open University

Abstract

Structured observation is a quantitative method of data gathering and evaluation, which uses observation schedules and predetermined behavioural categories. Quantitative researchers informed by positivist/objectivist perspectives frequently use structured observation, but are faced with a validity/relevance trade-off as well as replication problems deriving from the unsuitability of individual observation schedules across differing contexts. Qualitative researchers informed by constructionist/discourse analytical perspectives criticize structured observation’s inability to chart the unfolding process of construction of social and educational discourses. This paper argues that positivist and constructionist perspectives are at odds, but not incommensurable; in mixed-methods research, structured observation may provide valuable pointers for case-selection in an otherwise constructionist research project.

What is structured observation?

“Structured Observation” is used in educational and other interactional settings to chart different forms of behaviour. It differs from participant observation and field notes in that it follows, as strictly as possible, a preordained observation “schedule”. Examples of such schedules can be found in the Media Guide (pp.31, 33), Cohen et al (2006) [1] and Hammersley et al (pp.197, 199).

[1] http://cw.routledge.com/textbooks/9780415368780/D/CH18box.asp#1

Structured observation is a quantitative, positivistic method; the Media Guide describes observation schedules as “a means of gathering quantitative data” (Media Guide, p.30). Researchers using it must work with predetermined observation categories.

Some of these categories may be purely observational – in “Example 4.7” (Hammersley et al, p.197), for example, the technique of “mapping” is used to record the exact location of children in a nursery school classroom at given intervals. By “drawing a plan of the area showing the location of different activities or pieces of equipment … a researcher can mark on it which children are in particular locations, and what activities they are carrying out.” (Hammersley et al, p.197).
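To make the bookkeeping behind such mapping concrete, the following minimal Python sketch records which child is in which activity area at fixed intervals and then tallies where each child was observed. The children, areas and times are hypothetical and are not taken from the example cited above.

```python
from collections import Counter

# At each timed interval the observer notes, for every child, the activity
# area they are currently in (all data here are invented for illustration).
observations = {
    "10:00": {"Aisha": "sand tray", "Ben": "book corner", "Carla": "painting"},
    "10:05": {"Aisha": "sand tray", "Ben": "painting", "Carla": "painting"},
    "10:10": {"Aisha": "book corner", "Ben": "painting", "Carla": "sand tray"},
}

# Tally how often each child was recorded in each area across the intervals.
per_child = {}
for interval, locations in observations.items():
    for child, area in locations.items():
        per_child.setdefault(child, Counter())[area] += 1

for child, areas in per_child.items():
    print(child, dict(areas))
```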

When researchers are required to record “certain specified types of behaviour” (Hammersley et al, p.195), the task becomes more challenging. In Fig.1, we see “Observation schedule B” (Media Guide, p.33), which has been filled out for the DVD track “beginning of Frank’s lesson” as part of data-based exercise 5 (Media Guide, pp.30-33). As we shall see, even the relatively uncontroversial distinctions made here between types of behaviour/types of utterance may engender problems of definition, whose gravity differs depending on the priorities of the epistemological perspective of the research. We shall see that while such problems present some challenges to positivist researchers, they do not greatly affect the value of structured observation within the framework of positivist-informed thinking. In the case of constructionism, however, these same problems may lead researchers to reject this method altogether.

Structured observation schedules need not always involve predetermined categories from the outset. Example 4.8 (Hammersley et al, p.199) requires observers simply to describe behaviours briefly at regular intervals. This resembles detailed note taking, but also allows behaviours to be categorised and counted after the fact: “By coding observations you can transform qualitative information into quantitative information – you can say how many pupils behaved in a certain way” (Hammersley et al, p.198).
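To make the coding step concrete, the short Python sketch below turns interval notes of this kind into category counts. The notes and the keyword rules are purely illustrative and are not taken from the Handbook; a real schedule would define its categories far more carefully.

```python
from collections import Counter

# Brief descriptions jotted down at regular intervals (invented examples).
notes = [
    "teacher explains the task",
    "pupil asks what to do next",
    "teacher answers the pupil",
    "pupil answers a question from the teacher",
    "teacher explains the homework",
]

# A crude after-the-fact coding scheme: map each note to a behavioural category.
def code(note: str) -> str:
    if note.startswith("teacher") and "answers" in note:
        return "teacher responding"
    if note.startswith("teacher"):
        return "teacher presenting/explaining"
    if "asks" in note:
        return "pupil asking a question"
    return "pupil answering"

# Tallying the coded notes turns qualitative descriptions into countable data.
tallies = Counter(code(n) for n in notes)
print(tallies)
```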

In this assignment, the focus shall be on observation schedules in which categories have been determined before the observation begins. “Observation schedule B” shall be used throughout as the prime example.

Observation Schedule B – results of structured observation at 30-second intervals for DVD 2, Track 3a: “Start of Frank’s Lesson”

a. Teacher is talking about the lesson: IIIIII
b. Teacher presenting information as part of the lesson: I
c. Teacher asking a question: III
d. Teacher giving directions: II
e. Teacher responding to students’ questions, answers or comments: III
f. Student asking a question: I
g. Student giving a comment: I
h. Student giving an answer: IIII

Fig. 1: “Observation schedule B” for data-based exercise 5, “beginning of Frank’s lesson”

What is positivism?

Positivism is an approach to scientific knowledge defined by Comte (Prechtl and Burkhard, p.456), who drew on strands of thought from the ancient world, Islamic philosophy and early modern thinkers to formulate his methodological demands (Study Guide, p.79). Comte argued that all scientific enquiry should be restricted to the factual and the useful, to the correlation of law-like connections between facts (Study Guide, p.79, Prechtl and Burkhard, p.456). His “objectivist” stance posited the existence of one, objective reality, which scientists should strive to explain (Trochim, 2006). He also held that the resulting technical knowledge could and should be used to change the world for the better (Prechtl and Burkhard, p.457).

In Comte’s day, positivism stood for progress against superstition and magical thinking (Study Guide, p.79). The “purpose of science [was] simply to stick to what we can observe and measure. Knowledge of anything beyond that, a positivist would hold, is impossible” (Trochim, 2006). Positivism has since been criticized extensively, and subjected to numerous modifications and revisions, crucially by Karl Popper, whose “critical rationalism” disputed the scientific validity of statistical inferences and introduced the criterion of falsifiability (Prechtl and Burkhard, p.309). Another essential criterion for modern positivistic research is “procedural objectivity”. To enable hypothesis testing, the procedures used in studies and experiments must be made explicitly transparent, and should be replicable by others (Study Guide, p.79). As we shall see, it is not easy for observation schedules to fulfil this requirement.

Positivism continues to be rejected and criticized from many quarters (Study Guide, pp.81-99, Gage, pp.152-155). The central problem with positivism in educational research is its perceived unsuitability to the topic: while the materials and forces of the natural world lend themselves to measurement and correlation, humans, with their unpredictable ways, do not. Adherents of different epistemological directions reject the positivist “assumption of the uniformity of nature” and “linear causal models” (Study Guide, p.82; Gage, p.153). There have also been accusations of “scientism” – a quasi-religious belief in the primacy of objectivist science (Prechtl and Burkhard, p.458).

Though very few scientists now retain a purely “objectivist” view of nature (Trochim, 2006), today’s quantitative researchers regularly face “accusations of positivism” (Elliott, pp.66-67). Though the “antinaturalist critique” has been acknowledged (Gage, p.152), “post-positivist” quantitative research in education remains strongly influenced by positivist principles. It is in this broader sense that the word is used in this assignment.

Hargreaves’ appeal for “evidence-based practice” reveals this: “One must ask the essential question: just how much research is there which (i) demonstrates conclusively that if teachers change their practice from x to y there will be a significant and enduring improvement in teaching and learning and (ii) has developed an effective method of convincing teachers of the benefits of, and means to, changing x to y?” (Hargreaves, p.9). Hargreaves believes there is a “practice x” and a “practice y” in education, and that “conclusive” evidence can be found as to their respective benefits. Though he has relativised his stance, he does seem to rely on “naive positivistic assumptions” (Elliott, p.67). Also unclear is Hargreaves’ concept of “usefulness” in the positivist sense; Elliott presents Peters’ concept of what it means to be educated (Elliott, pp.73-77) and contrasts this with Hargreaves’ “unquestioning commitment to an outcomes-based view”.

Hargreaves’ proposals are derived from the medical profession, where “evidence-based practice” has proved successful. But in medicine, there are drugs and clinical procedures whose effects on the human body can be measured, and it is here that research findings come into play. In educational settings, where subtle and complex interpersonal interactions are of central importance, it is difficult to see a useful role for statistical correlation. Oakley argues that there is nothing “inherently wrong” with positivism (Oakley, 2001a). Indeed, it may well help define certain quantitative parameters, such as class sizes or the availability of the most frequently used equipment. As we shall see, however, different methods are better suited to understanding verbal interaction in educational settings.

Structured observation from a positivist point of view

The chart in Fig.2 provides an overview of the strengths and weaknesses of structured observation from a positivist perspective.

1. Strength: Explicit procedures enable researchers to “set up” the situation to be observed, thus maintaining some control over possibly disruptive extrinsic factors.
   Weakness: “Artificial setting” heightens participants’ awareness of being observed, leading to reactivity or “Hawthorne effect”, which can reduce validity of findings.

2. Strength: Is quantitative, therefore suitable for statistical analysis.
   Weakness: Quantitative analysis in interactional settings does not account for content or outcome of interactions.

3. Strength: Allows incidences of each category of behaviour to be correlated with each other.
   Weakness: Statistical correlations do not always lead to causal links, and are only valid insofar as the categories chosen by the researcher are valid.

4. Strength: Results can be compared directly with results of other observations carried out using the same observation schedule.
   Weakness: Observation schedules designed for one situation are rarely transferable to different situations.

5. Strength: Procedures and results may be replicated by other research teams to falsify/confirm findings.
   Weakness: As above, exact replication will sacrifice relevance. Altered procedures and categories cannot be compared, and will therefore sacrifice validity.

6. Strength: Schedules can be designed to provide very detailed information.
   Weakness: The more detailed the observations and the more frequent the observation intervals, the more difficult the observer’s task will be. This may lead to a relevance/validity trade-off.

7. Strength: Provides precise information about what is happening at a given moment.
   Weakness: A decontextualised “snapshot” does not tell us how long the given behaviour lasted, or in what context it arose.

8. Strength: Continuation or replication of research at longer intervals can track changes over time.
   Weakness: Observation of changes over time still does not chart development of the relationship between participants or the dynamics of development of jointly constructed discourses.

9. Strength: Suitable for team work, as the identity of the observer is less important than the observation procedure.
   Weakness: Individual observers still have the task of placing behaviours/utterances into categories. Interpretations will vary from person to person.

10. Strength: Provides procedural objectivity – observation has a relatively objective structural framework, so it can be carried out and evaluated according to clear, explicit rules.
    Weakness: For procedural objectivity to be maintained, observation categories cannot be altered once set, so any initial bias or weakness in categorisation will be sustained throughout the entire research project.

11. Strength: Compels researchers to express their methods and findings in clear, rational terms, which may then be easily evaluated by other researchers.
    Weakness: Imposes a normative scheme of observation which may fail to recognise important aspects of the interactive process.

Fig. 2: Strengths and weaknesses of structured observation from a positivist perspective

1. Artificiality – In highly controlled situations, the notorious “Hawthorne effect” can come into play (Cohen et al, 2006 [2]; Warburton, 2010). The advantage of a controlled setting is reduced by “reactivity” – the fact that awareness of being observed influences the behaviour of participants.

2. Quantitative research method – observation schedules explicitly require the quantification of data. Critics of positivism, as we have seen, regard the measurement of social phenomena as being of little use in the pursuit of understanding. To a positivist, however, allowing essentially non-quantitative phenomena, such as verbal interaction, to be recorded in this way is the first step in successful, objective research.

3. Statistical correlation – once data have been gathered in this numerical fashion, they can be correlated with each other and tested for statistical significance. Even within positivism, however, this is not unproblematic: confirmed statistical significance need not imply causality (Hammersley et al, p.96), and in the case of structured observation, the variables must first be clearly defined and delineated from each other – no easy task, especially when the variables are different kinds of talk (a brief numerical sketch of this point follows the list below).

4. Comparability – as long as the same observation schedule is used, results can be compared for a range of different situations. This brings positivist researchers closer to the requirement of procedural objectivity. But does it make sense to use the same schedule? Is not every interactive situation different? Should specially designed or pre-published schedules be used at all? “Some published interaction schedules are difficult to use, and a schedule that works perfectly well in one context may not in another” (Hammersley et al, p.204).

5. Replicability – here, quantitative researchers face the same dilemma. Research may be replicated with high validity and reliability, criteria emphasised by Warburton (2010), but for this to work, the observation categories may not be modified in any way. There is a trade-off here: should new observation schedules be tailored to suit the new situation – a different class size or style of teaching – then relevance could well be increased, but at the expense of validity. This relevance/validity trade-off occurs only within the positivist paradigm, where the objectivist world view and universalised research parameters lead to the danger of a “one-size-fits-all” approach.

6. Difficulty of Detailed Observation – Observation schedules can provide much detail, but the more the observer is required to record, and the more subtle the categorisation problems they face, the more difficult it will be to “stick to the game plan” (Warburton, 2010) and produce valid observations. This further relevance/validity trade-off is particularly acute for verbal interactions: “It is difficult to categorize talk in any very complex way when observing and recording on the spot.” (Hammersley et al, p.204). In producing the observations in Fig.1, I found it difficult to distinguish categories a. and d., or g. and h. By contrast, simpler observations will be more valid, but will not reflect the true complexity of the situation being observed. Audiovisual recordings will help bring in more detail, but even here, researchers must still decide which aspects are most relevant to their research.

7. The “Snapshot” problem – every observation records what is happening only at a given moment. It does not record how long a behaviour (or speaker’s turn) lasts, or say anything about its relevance to the overall development of a situation (Hammersley et al, p.201).

8. Longer Time Periods – observations may be repeated at intervals to chart changes over time, but this is also limited: “The more frequently an observer samples, the more complete a picture they obtain, but it is still ‘snapshots’ that are being collected. It is not possible to follow through connected sequences of behaviour.” (Hammersley et al, p.198). In particular when researchers are seeking to understand the process of construction of discourses, this form of observation is of no help [3].

9. Team Work – Structured observation can be carried out in teams, as the points being observed are the same for all. Observation results may still differ, however, as individual observers will interpret the given categories in different ways. This criticism itself presupposes an interpretivist view, and so may not be fully appreciated by positivist-oriented quantitative researchers.

10. Procedural Objectivity – This method fulfils the positivist requirement of procedural objectivity (Study Guide, p.79), as its terms are clear from start to finish. However, this is only true when the categories remain constant throughout a research project, which precludes constructive criticism in medias res from research team members. As mentioned, any alterations to preordained categories also conflict with the requirement of replicability. This makes structured observation a rather rigid research framework, particularly in such dynamic settings as the classroom.

11. Clear Categories – As discussed, working with clear, transparent categories has mixed merits. On the one hand, conceiving of these categories can be a healthy scientific practice, as it compels researchers to express their ideas in a clear, rational, transparent way. All scientific work must be methodical and disciplined. But does structured observation offer the best discipline for educational research? This is a question of epistemological approach.

[2] http://cw.routledge.com/textbooks/9780415368780/A/ch4doc.asp
[3] This last problem is not of concern to positivists, but worth mentioning here as it is relevant to the overall argument of this assignment.
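The numerical sketch promised under point 3 follows. It correlates tallies for two categories from the same schedule across several lessons; all figures are invented for illustration. Even a strong coefficient obtained this way would say nothing about why questions and answers co-occur, nor about whether the observer distinguished the two categories consistently.

```python
import math

# Tallies for two categories of Observation Schedule B, collected with the
# same schedule across six lessons (all counts invented for illustration).
teacher_questions = [3, 5, 2, 7, 4, 6]   # c. teacher asking a question
student_answers = [4, 6, 2, 8, 3, 7]     # h. student giving an answer

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# A high r would not show that questions cause answers, nor that the two
# categories were cleanly separated by the observer in the first place.
print(f"r = {pearson_r(teacher_questions, student_answers):.2f}")
```

Statistical significance would additionally require a test against the null hypothesis of no association, which with only six lessons would carry little weight.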

In all, one can say that positivist researchers will see this method as very useful, though even within the positivist framework, they face several uncomfortable dilemmas. As we shall see, the constructionist perspective is much more critical, though I will argue that a complete rejection of observation schedules is unnecessary.

What is constructionism?

Constructionism is an approach to educational and social research which focuses on the role of participants and observers in constructing discourses which shape our understanding of the world. It derives from constructivism, an epistemological movement encountered in psychology and in the natural sciences (Study Guide, p.93). Constructionism posits that social constructs result from complex human choices, not from deterministic systems. This view replaces the positivist “what” with a “how” (Prechtl and Burkhard, p.299). Constructionism can be seen as one of the many “subjectivist” approaches which stand in opposition to positivism (Cohen et al, 2006) [4]. It is distinct from interpretivism, in that it places as much emphasis on subconscious or semi-conscious belief systems as it does on conscious, deliberate interpretations (Study Guide, pp.96, 97). Constructionist researchers tend to work qualitatively. The prime constructionist methodology, though not the only one, is discourse analysis (Study Guide, pp.54, 96). Constructionists seek to understand how discourses are constructed, and how they shape the development of both tacit beliefs and subjective interpretations of reality. Discourses are seen as “ways of representing aspects of the world” (Fairclough, p.124). Much of discourse analysis has been based on the philosophical writing of Foucault (Study Guide, p.96, Fairclough, p.123), but the detailed, methodological analysis of linguistic data, both oral and written, is also crucial in constructionist discourse analysis (Study Guide, p.97).

In discussing constructionism, this assignment shall therefore focus on the discourse analytical methods associated with this philosophy. Such methods are strongly context-based, seeking to understand the relationships between speaker and utterance, and between participants and wider social context (Butt et al, 2000, p.4, Brown and Yule, 1983, p.27). Also crucial is the study of implicature – that which is meant, but not directly said (Brown and Yule, pp.31-32). My research project for E844 recognised the importance of verbal interaction in such processes of construction (Treanor, 2009, pp.2-7). This view is also taken by Burns, for whom “language is socially constructed and embedded in culture” (Burns, 2000, p.126) [5]. This interrelationship between linguistic interactions and constructed realities is elaborated by Fairclough: “Different discourses are different perspectives on the world, and they are associated with the different relations people have to the world, which in turn depends on their positions in the world, their social and personal identities, and the social relationships in which they stand to other people” (Fairclough, p.124). This strongly anti-objectivist epistemological foundation of constructionism has led to the view that it is “incommensurable” (in the sense of Kuhn) with positivism (Study Guide, p.93, Prechtl and Burkhard, p.261). I will argue that research methods based on both philosophical approaches may coexist, however uneasily, and complement one another in different stages of educational research.

[4] Cohen et al summarise this basic contrast very clearly at: http://cw.routledge.com/textbooks/9780415368780/A/ch1box.asp#2
[5] This is the only source which is directly “re-cited” from a previous OU assignment. I consider this worthwhile, as it is directly relevant to the connection between the method of discourse analysis and the constructionist philosophy. Some other sources have also already been used in other OU assignments and courses, but with different citations and emphases.

Structured Observation from a constructionist perspective

It is difficult to chart clear, opposing strengths and weaknesses, as was done in the previous section: constructionism and structured observation seem simply not to suit each other. This requires careful discussion, and less point-by-point comparison. Nevertheless, Fig. 3 briefly sets out some strengths and weaknesses of structured observation from the constructionist perspective. Only three “pairs” are listed, but these are then considered in depth.

10

1. Strength: May provide an initial quantitative overview of a situation, thus informing decisions about where to place the focus of the qualitative, discourse analytical component of a “mixed methods” research project.
   Weakness: May still miss out on significant or “critical” moments in the process of development of a discourse.

2. Strength: Gives a clear account of how much time each person/category of person spent talking, thus identifying key participants in the discourse.
   Weakness: Tells us nothing about the content of what was said or in which discursive context it was said. Is therefore of no use in analysing the “why” or “how” of a developing discourse.

3. Strength: Allows for analysis of the socially constructed normative discourse of categorisation used by positivist-informed quantitative researchers.
   Weakness: By applying preordained categories, structured observation prevents qualitative researchers from seeking deeper understanding of discursive practices and processes.

Fig. 3: “Structured observation” from a constructionist perspective – strengths and weaknesses

1. “Critical Moments” – In education research, constructionists “place emphasis on the role of particular occasions in shaping the way people understand their world, rather than treating the learning process as a steady accumulation of knowledge at an even pace” (Denscombe, p.204). If we recall the discussion from the positivist perspective, this negates the value of observation schedules entirely: by their very nature, they seek to make processes appear uniform which are patently not so. Potter and Wetherell argue that researchers “can and should focus on variability and even inconsistency, rather than trying to disguise variation in the hope of producing clear and stable patterns” (Jaworski and Coupland, p.19). This is a powerful criticism of positivist, quantitative methods, and as the analysis of Fig. 5 will show, there are crucial aspects of classroom conversations which simply don’t show up on observation schedules.

2. Observing the discursive construction of social reality – Observation schedules are purely quantitative, and therefore tell us nothing about the context or the construction process of a discourse. To understand how my house stays up, I do not count the bricks; I seek to understand the structure. Correspondingly, to understand socially constructed discourses, close structural examination of linguistic data is needed. In practice, qualitative analysis of linguistic materials will take longer than completing an observation schedule. This may seem uneconomical – “much more detailed transcripts are required for discourse analysis” (Media Guide, p.17) – but the payoff is a deeper understanding of discursive processes. The cost in terms of time may also be seen as an advantage: “In qualitative analysis, there is usually more time available to think about exactly what people are saying … and why they might be saying … it, keeping the ‘what’ and the ‘why’ separate as far as possible” (Media Guide, p.19).

3. Positivist research paradigms as the object of critical research – one rather subversive “strength” of structured observation is that it exemplifies all that is “wrong” with positivist, quantitative research from a constructionist perspective. In an observation schedule, one finds the prioritization of quantitative data-gathering methods, the restriction to what can be directly observed, and the use of necessarily subjective categories to impose simplified order and uniformity on highly complex interactive situations. This weakens the positivist claim to be free of ideology; constructionists argue that it is impossible to be free of ideology, and that positivists are the only ones not to acknowledge this, assuming for themselves “latent privileges of interpretation” (Prechtl and Burkhard, p.457). Critical analysis of positivist assumptions occupies much of the literature (e.g. Adorno 1993, Cameron et al, 1992). The above criticisms must of course also be presented as weaknesses, as the underlying epistemological assumptions of observation schedules make this research method wholly unsuitable for constructionist research.

Implications for research

In this assignment I have compared the merits of two radically different ways of finding things out. Fig. 4 uses examples to present these different approaches along two dimensions: their epistemological approach on the one hand, and their relevance to practice – from directly “actionable knowledge” to more abstract, theoretical knowledge – on the other. When we consider positivism and constructionism, it is fair to ask: which is the better way of doing educational research? If we accept Bassey’s definition that “[e]ducational research aims critically to inform educational judgments and decisions in order to improve educational action” (Bassey, p.147), then we must conclude that both methodologies have valuable contributions to make.

“Actionable knowledge” – of direct use to educational practitioners and policymakers:
   Informed by “positivist” theory: “Evidence-Based Practice” programme of Hargreaves; social research advocated by Oakley
   Informed by “constructionist” (or “constructivist”) theory: conversation analysis and learning theories of Mercer; discourse analysis methods of Fairclough

“Abstract knowledge” – of theoretical interest, with little direct impact on educational policy or practice:
   Informed by “positivist” theory: works of Comte, Popper, logical positivists, etc.
   Informed by “constructionist” (or “constructivist”) theory: works of Piaget, Vygotsky, Foucault, etc.

Fig. 4 – ways of finding things out (epistemological approach by relevance to practice)

Positivist-informed quantitative research is particularly valuable in highlighting connections between policy decisions and schools and students at large. The achievements of male and female students can be effectively compared, or the usefulness of expensive equipment measured. The greatest weakness of this approach is the necessarily subjective and limited nature of its terms – what do we mean by student “achievements”? Can young people’s development be measured in a numerically abstract fashion? Quantitative researchers have, however, taken on this criticism, and acknowledged the interpretivist, if not the constructionist stance (Trochim, 2006). Many authors, notably Oakley (Study Guide, p.98; Oakley, 2001b), have claimed that quantitative research is in fact a powerful tool to “improve educational action”. Furthermore, as long as competing epistemological schools of thought exist in a democracy, all different forms of research are free to compete in the public sphere.

The special strength of the constructionist approach is that it allows us to understand the ways in which discourses are constructed and maintained. For educationalists, and for action researchers with an emancipatory agenda seeking to heighten student awareness of these processes, this form of knowledge, also based on sound scientific methods, becomes a powerful educational tool. To illustrate this, and to highlight the importance of “critical moments” and the constructive interrelatedness of speakers and utterances in a given situation, let us consider Fig. 5, a transcript of just seventeen seconds of dialogue from Media-based exercise 5, between Frank, the teacher, and his student Daniel.

Transcript of Frank’s Lesson, 11:41:43 – 11:42:00

Frank: What sort of hairstyle?
Student 2: [Anything
Daniel: [Blonde
Frank: Male, female, or both?
Daniel: What, for me personally?
Frank: Well, yes.
Daniel: (surprised) Female! … (laughter)
Frank: Well, right, all right, nah I …
Daniel: What kind of a question is that?
Frank: All right, sorry Daniel, yeah, no no, I didn’t quite mean it like that, apologies, it came out the wrong way.

Fig. 5: Transcript of dialogue between Frank and Daniel

In this exchange, we observe the significance of the contexts of culture (the norms and taboos surrounding “homosexual” identities) and of situation (an exchange between an older and a younger man in front of a class of teenagers), as elucidated in Butt et al (p.4). Although Frank is specifically critical of normative, media-inspired concepts of “beauty” throughout the lesson, his emancipatory awareness-raising does not go as far as questioning the fears of a male teenager of being labelled as “gay”.

The transcript also reveals the internal dynamics of this discourse. Once the initial misunderstanding has occurred, the role of implicature increases, and the two men can assert their distance from homosexual identity without any explicit mention of it. How Frank might have reacted differently, or how educators can effectively combat taboos and unspoken assumptions, is not the central point here. What is important is that a discourse analytical method, using well-founded arguments based on constructionist epistemology, has enabled us to understand more about the ways in which prejudices and taboos are put together. This is exactly what quantitative, positivist research cannot do. Discourse analysis, I argue, fulfils every part of Bassey’s requirement.

Do we have to make a chauvinistic choice between the two? To form tribal loyalty to one of two philosophies, each of which claims for itself the status of enlightened thinking, would seem ironic. But there is a strong case to be made that these two epistemological rivals are incommensurable; they work in different ways, examine different things, and have radically different concepts of truth.

While I accept that there is no easy middle ground between positivism and constructionism, I argue, along with Gage (p.159), that it would be counterproductive to dismiss either approach out of hand. Gage’s supposed reconciliation does seem to favour positivism, with Popperian “piecemeal social engineering” coming out on top (Gage, p.158). I am more convinced by the descriptive power of discourse analysis than by the value of quantitative research, but it is a highly plausible argument that an initial phase of quantitative research can point the way to more detailed analyses. If statistical research shows, for instance, that reduced teacher talking time often but not always correlates with better student performance in foreign language classes, then a qualitative researcher might more easily decide it is worth her while analysing the more successful classes, to understand what exactly is being done right.
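To illustrate how such a quantitative first pass might feed case selection, the following sketch ranks classes so that the discourse analyst knows which transcripts to examine first. All class names and figures are hypothetical; this is only a sketch of the idea, not a method described in the sources.

```python
# Hypothetical screening data per class: share of lesson time taken up by
# teacher talk (from a structured observation schedule) and mean test-score gain.
classes = {
    "Class A": {"teacher_talk": 0.72, "score_gain": 4.1},
    "Class B": {"teacher_talk": 0.55, "score_gain": 7.3},
    "Class C": {"teacher_talk": 0.48, "score_gain": 6.9},
    "Class D": {"teacher_talk": 0.50, "score_gain": 3.2},  # low talk but low gain: bucks the trend
}

# Shortlist the strongest performers, plus any class that contradicts the
# overall pattern, as candidates for close discourse-analytical study.
ranked = sorted(classes, key=lambda name: classes[name]["score_gain"], reverse=True)
shortlist = ranked[:2]
outliers = [name for name, d in classes.items()
            if d["teacher_talk"] < 0.55 and d["score_gain"] < 4.0]

print("Transcribe and analyse lessons from:", shortlist + outliers)
```

The statistics only point to where the interesting cases might lie; the constructionist work of explaining what is being done right begins with the transcripts themselves.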

Two caveats seem appropriate here: firstly, mixed methods research is imperfect, and one could argue that it is better for different researchers to stick to what they already do best. It is difficult to share the optimism of the Study Guide (p.99). Secondly, any mixed research can only succeed in a context of fairness and mutual recognition. This is not meant sentimentally; Gage’s somewhat maudlin happy ending seems to suggest that we should all just be friends. We should not. Debates and competing views of truth are healthy. Scientists need not show each other tenderness – what we need is each other’s respect.

Word Count (including tables and citations, but without references section): 4,870

REFERENCES

Adorno, T., Dahrendorf, R., Pilot, H. (1993) “Der Positivismusstreit in der deutschen Soziologie”, Deutscher Taschenbuch Verlag, Frankfurt
Bassey, M. (1995) “On the kinds of research in educational settings”, in Hammersley, M. (ed.) Educational Research and Evidence-Based Practice, Sage Publications and The Open University, 2007, pp.141-150
Brown, G. and Yule, G. (1983) Discourse Analysis, Cambridge University Press
Burns, A. (2001) “Analysing Spoken Discourse: Implications for TESOL”, in Burns, A. and Coffin, C. (eds.) Analysing English in a Global Context, Routledge (in association with Macquarie University and the Open University), pp.123-148
Butt, D., Fahey, R., Feez, S., Spinks, S., Yallop, C. (2000) Using Functional Grammar: An Explorer’s Guide, 2nd edition, Macquarie University Press
Cameron, D., Frazer, E., Harvey, P., Rampton, B., Richardson, K. (1992) “Power/Knowledge: The Politics of Social Science”, in Jaworski, A. and Coupland, N. (eds.) The Discourse Reader, 2nd edition, Routledge, 2006, pp.132-145
Cohen, L., Manion, L., Morrison, K. (2006) Research Methods in Education Companion Website, Routledge, http://cw.routledge.com/textbooks/9780415368780/default.asp
Denscombe, M. (1999) “Critical incidents and learning about risks: the case of young people and their health”, in Hammersley, M. (ed.) Educational Research and Evidence-Based Practice, Sage Publications and The Open University, 2007, pp.204-219
Elliott, J. (2001) “Making evidence-based practice educational”, in Hammersley, M. (ed.) Educational Research and Evidence-Based Practice, Sage Publications and The Open University, 2007, pp.66-88
Fairclough, N. (2003) Analysing Discourse, Routledge
Gage, N. (1989) “The paradigm wars and their aftermath: a ‘historical’ sketch of research on teaching since 1989”, in Hammersley, M. (ed.) Educational Research and Evidence-Based Practice, Sage Publications and The Open University, 2007, pp.151-166
Hammersley, M., Faulkner, D. et al (2001) Research Methods in Education Handbook, The Open University
Hargreaves, D. (1996) “Teaching as a research-based profession: possibilities and prospects”, in Hammersley, M. (ed.) Educational Research and Evidence-Based Practice, Sage Publications and The Open University, 2007, pp.3-17
Jaworski, A. and Coupland, N. (eds.) (2006) The Discourse Reader, 2nd edition, Routledge, Introduction, pp.1-37
Oakley, A. (2001a) “Making evidence-based practice educational: a rejoinder to John Elliott”, in Hammersley, M. (ed.) Educational Research and Evidence-Based Practice, Sage Publications and The Open University, 2007, pp.89-90
Oakley, A. (2001b) “Evidence-informed policy and practice: challenges for social science”, in Hammersley, M. (ed.) Educational Research and Evidence-Based Practice, Sage Publications and The Open University, 2007, pp.91-105
Prechtl, P. and Burkhard, F. (1999) Metzler Philosophie Lexikon – Begriffe und Definitionen, 2nd edition, Stuttgart
Schofield, J. (1990) “Increasing the generalisability of qualitative research”, in Hammersley, M. (ed.) Educational Research and Evidence-Based Practice, Sage Publications and The Open University, 2007, pp.181-203
Treanor, F. (2009) “Research Project for E844 – Language and Literacy in a changing world”
Trochim, W.M.K. (2006) Research Methods Knowledge Base, www.researchmethods.net/kb
Warburton, C. (2010) Elluminate conference and PowerPoint presentation, The Open University, 30.03.2010