Bricolaging P and Q: Findings from Case Studies of Singaporean Teachers1)
Wei Shin Leong*
National Institute of Education, Nanyang Technological University
Abstract
The paper reports on a section of findings from classroom assessment research involving a mixed method, or a bricolage, of phenomenographic methods (P) and Q methodology (Q). The complementary use of P and Q attempts to study the diversity of ‘outcome space’ (P) and ‘concourse’ (Q), representing the variations in conceptions of practices of classroom assessment among case studies of ‘high-achieving’ Singaporean school teachers. As such, the emphasis is on unravelling the multiplicity and significant relationships of classroom assessment practices within the specific context of the introduction of new educational assessment policies. The different clusters of conceptions from the Q-factor analysis, in particular, have revealed underlying bases of common and different views of classroom assessment and implications for practices. This paper reports mainly on the methodological considerations, procedures and issues of the use of Q. I suggest how the concurrent use of P within case studies of individual teachers provides a critical rigour that can potentially avoid the reductionism and proceduralism of an otherwise monological and mimetic research design.
Key words: Conceptions, Classroom assessment practice, Phenomenography, Q-methodology

* Correspondence concerning this article should be addressed to: Wei Shin Leong
National Institute of Education,
Nanyang Technological University
E-mail: [email protected]
DID: +65 96977002
Received 15 April, 2013; Revised 10 June, 2013; Accepted 20 June, 2013
30∙Journal of Human Subjectivity
I. Contextual Background
Following a review of primary-school education in 2009, the Singapore government
supported a key recommendation by the Primary Education Review and Implementation
(PERI) Committee to address the over-emphasis on testing and examination, particularly
at the lower primary levels. The Committee recommended that ‘holistic assessment’ that supports student learning
would be progressively introduced in all primary-school classrooms, starting with lower
primary in 2011 (PERI, 2009). At the same time, another Assessment Review Committee
within the Ministry of Education (MOE) was convened to review and explore ways to refine
the examination and assessment landscape across all other Singaporean schools. The
recommendation for changes in assessment beyond primary schools proposed by the
Committee involved helping secondary and junior college schools and teachers to think
about the possibilities of ‘balanced assessment’ involving the judicious use of both formative
and summative assessment.
The introduction of education policies that support the use of assessment to enhance
teaching and learning in the classroom is relatively late, considering the volume of
research and policy literature on classroom assessment that has emerged across the
world since the 1990s (e.g. Black & Wiliam, 1998, 2005). One might
speculate that such an introduction has been carefully considered to gradually initiate
changes in classroom assessment in Singaporean primary and secondary schools. Before the
introduction of the assessment policies, there had been no nationwide targeted educational
initiatives on classroom assessment, although other curricular and infrastructural policies
had been introduced incrementally since the 2000s to de-emphasise an exclusive privileging
of students’ academic achievement (MOE, 2005, 2013). With the recalibration of school
evaluation towards ‘holistic education’, the ranking and banding of schools by absolute
academic results is being progressively abolished.
A logical implication of ‘holistic education’ was given further attention by the current
Minister for Education, Heng (2011), who noted the aspirations and desires of school
leaders and teachers in his speech at the 2011 annual Work Plan Seminar for Singaporean
school leaders and teachers:
Many of you [school leaders and teachers] have asked for support to be more
student-centric, to see to the total development of the person rather than to
build up just the academics…. Our schools and teachers will need time and
space, to engage in the more demanding type of educating - values and
twenty-first century competencies (Heng, 2011, p. 5, point 36).
Notwithstanding the challenges of an uncertain twenty-first century and the need for
time and space for a different vision and values of educating, the MOE has over the past
few years consistently invested in infrastructure for greater student-centricity towards
‘holistic education’ in Singaporean schools and teachers. It may therefore appear that the
issue is not whether sufficient resources have been invested in Singaporean schools to
effect changes in classroom practices. Rather, it is whether we adequately understand how
teachers negotiate the changing goals of education and enact any ‘effective’ practices
within classrooms that are true to the intent of the espoused student-centric policies and
initiatives, while the national examination system is still in place.
This paper details the exploratory use of Q-methodology (Q) for understanding the
relationships of conceptions of classroom assessment practices among case studies of
Singaporean teachers in this period of change in the Singaporean education landscape. This
work builds on an earlier qualitatively-driven pilot study (Leong & Tan, 2010) involving
phenomenographic methods (P)1) of interviewing teachers on their views of ‘holistic
assessment’ and observing their lessons. To my knowledge, this bricolage of P and Q has
not been used in educational assessment research even though, individually, they are
well-known methodologies for studying different teachers’ viewpoints (e.g., Anderson et al.,
1997; Rimm-Kaufman et al., 2006; Lim, 2010). P is selected as the main data collection and
analysis method for responding to the research question on identifying the conceptions and
practices of classroom assessment among six ‘high achieving’ case-study teachers. Q
examines the subjectivity of a ‘big picture’ way of seeing and hearing by studying the
participants’ subjective awareness at a given point in time. Q is used as a triangulation
method to inquire further about case-study teachers’ conceptions and to study the possible
relationships of the conceptions of classroom assessment among a larger number of
teachers. This paper reports mainly the methodological considerations, procedures and
issues of the use of Q.
II. Phenomenographic Methods (P)
The phenomenographic method (P) of research was developed by a research group in the
Department of Education at the University of Goteborg in Sweden during the early 1970s to
study the different conceptions of students’ learning. It is therefore a research methodology
1) The ‘P’ here is the abbreviation for ‘phenomenographic methods’ and is not related to the term ‘P set’ that is
the participant group of Q.
that has been developed within the discipline of education. The word ‘phenomenography’
was first used in 1979 and later appeared in the work of Marton (1981, 1986, 1988).
Etymologically, it derives from the Greek words phainomenon and graphein, which mean
‘appearance’ and ‘description’, and P is thus concerned with the description of things as
they appear to the research participants. Fundamental to an understanding of the
phenomenographic approach is the realisation that its ontological and epistemological
stance on knowledge is grounded in the principle of intentionality, which embodies a
non-dualist view of human cognition insofar as it depicts experience as a dialectical
relationship between the interaction of human beings and the world.
P has been credited with being “an internationally valued educational research method
since the 1970s” (Ashworth & Lucas, 2000, p. 295). As an approach to qualitative
educational research, phenomenography is the process of describing variations in people’s
experiences of different phenomena through their own discourse. Its aim is to investigate
and present “the qualitatively different ways in which something is experienced” (Säljö,
1997, p. 174). The ultimate goal of phenomenography is to describe the qualitatively
different ways in which we understand our experience of phenomena in the world around
us (Johansson et al., 1985; Barnard et al., 1999). Marton and Booth (1997) describe the basis
of phenomenography as “an interest in describing the phenomena in the world as others
see them, and in revealing and describing the variation therein, especially in an educational
context” (p. 111). Phenomenographic researchers posit that people can hold multiple and
contradictory conceptions within their frame of reference. These diverse conceptions can be
aggregated to form an ‘outcome space’ representing the variations of conceptions present
within the populations (Harris, 2010). The fundamental results of a phenomenographic
investigation are a set of categories of description by which the researcher attempts to
describe how the relevant phenomenon is experienced by others (Marton, 1986, 1988).
Just as many other qualitative researchers have no basis for characterising other people’s
experiences of the world because they have access only to their own experiences,
phenomenographers have no basis for characterising other people’s conceptions of the
world because they have access mainly to other people’s verbal accounts and justifications
of intentions. Indeed, Schutz (1966, p. 71) argued that even observations of outward
behaviour were insufficient to ascribe mental events to other people. There may be a
tension arising from a possible discrepancy between the ‘subjective’ meaning that a
research participant thinks he/she holds and the ‘objective’ meaning imputed by the
researcher-observer. This tension is most salient in the researcher-observer’s identification
of the participant’s ‘intent’ (Chew, 2009). This has been put forward as an issue in social studies
of science, where several writers have objected to the view of the social scientist as a
‘cognizing agent’ (Woolgar, 1996). It has also been raised as a general problem in theories
of cognitive psychology by Edwards and Potter (1992). The subjectivity (and uncertainty) of
deciding what is and what is not ‘critical’ to the phenomenon being experienced may be
problematic if researchers cannot be sure whether there is indeed more than one conception
based on the clear identification of intentions.
1. Q-Methodology (Q)
Q was developed by the English psychologist and physicist William Stephenson in the
1930s to explore the subjective views of various individuals. A crucial premise of Q is that
subjectivity of views is communicable and can be studied systematically. When different
views are expressed, read and discussed openly, a rich resource is created for deliberation
and understanding of various views of different individuals. Factor analysis resulting from
the analysis of data represents clusters of subjectivity or viewpoints that are currently
‘operant’ (Brown, 1980, 1993) - these are viewpoints that are currently ‘in action’ among
individuals in a community but are not absolute in any way. Rather they are contingent on
a specific time, place and purpose of ‘seeing’ an issue.
When Q is positioned within a pragmatic approach, it is a practical mixed method for
researchers to unpack the tacit decision-making processes of the wide-ranging social
interplay between people in practical situations in education, health-care, advertising and,
lately, social-networking and gaming fields. Q enables researchers to collect and study viewpoints
as knowledge that is not seen as absolute in any way, but multiple and contingent on time,
place and purpose. Crucially, Q specifically enables and expects different individuals to
express their opinions in their Q-sorting in a structured, participatory and safe environment.
The method employs a distinctive factor statistical analysis to identify groups of viewpoints
from participants who sort a pool of statements in comparable ways. Q is also a useful
research tool to name and depict the textuality of the discourses’ interplay, and, through
further interpretation, to map out their possible relations to one another. This is
particularly important for the study of controversial topics (see, for example, van Eeten,
2000), providing researchers and policy-makers with different ideas for interrogating
subjective opinions and suggesting opportunities for consensus-building. Brown (1995) goes
so far as to argue that there is no other method that matches Q’s versatility and reach, and
which comports so well with keeping up with changes in views and understanding of
contemporary research topics. I am, however, also aware that there is evidence of
controversy and peer criticism regarding Q and Stephenson’s work in the literature,
particularly up until the late 1960s (see, for example, Brown, 1997), which I am also keen
to understand.
Q, as I have understood it, is not well suited to dealing with the unfolding temporality of
individuals’ narratives. Its focus is on pursuing ‘snapshots’, or temporally frozen images,
of a connected series of viewpoints. It then examines these positions in terms of their
overall structure, function and implications for specific groups of participants. Q’s very
deliberate pursuit of constructions and representations of individual and social groupings’
viewpoints does not take into consideration the possibility that changed circumstances may
invalidate its findings or set an ‘expiry limit’ on them. In other words, when repeated on
the same person, Q does not necessarily yield the same results, which has led researchers
to question its reliability. However, social psychologists see no problem with this, as there
is no expectation that individual viewpoints stay the same on two separate
occasions (Stainton Rogers, 1991). Rather, it is about what view is present at this point in
time that could be due to various possible circumstances or positions of participants. It is
in this respect that Q provides results consistent with qualitative studies that focus on
processes of situated interaction within socially constructed data-collection fields of what
has been said and/or seen (Curt, 1994).
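The by-person correlation that underpins this kind of analysis can be sketched in a few lines. The snippet below is a hypothetical illustration, not data from this study: the participants, the nine statements and the rankings are invented solely to show that in Q, whole Q-sorts (people) are correlated with one another, so that persons rather than test items become the variables.

```python
import numpy as np

# Hypothetical illustration: in Q, whole Q-sorts (people) are correlated with
# one another, so persons rather than test items become the variables.
# Rows are invented participants' rankings of nine statements (-4 .. +4).
sorts = np.array([
    [ 3,  1, -2,  0,  4, -1, -4,  2, -3],   # participant A
    [ 3,  0, -2,  1,  4, -1, -4,  2, -3],   # participant B, sorts much like A
    [-3, -1,  2,  0, -4,  1,  4, -2,  3],   # participant C, a near-mirror view
])

# Person-by-person correlation matrix: the raw material for Q factor analysis.
r = np.corrcoef(sorts)
print(np.round(r, 2))  # A and B correlate strongly; C correlates negatively
```

Factor analysis of this person-by-person matrix then groups A and B onto one factor, with C loading negatively on it, which is the sense in which factors represent shared ‘operant’ viewpoints.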
2. Bricolage of P and Q
In Kincheloe’s conception of the research bricolage (2001, 2005), researchers who use
different frameworks and methodologies are empowered to produce more rigorous
‘praxiological insights’ into educational phenomena. Kincheloe theorizes a critical
multi-methodological epistemology and connected ontology to ground the research
bricolage. These philosophical notions provide the research bricolage with a deeper
understanding of the complexity of knowledge production and the interrelated complexity
of both researcher positionality and phenomena in the world. Focusing on webs of
relationships instead of ‘things-in-themselves’, the researcher can construct the object of
study in a more complex framework. In this process, attention is directed toward processes,
relationships, and interconnections among phenomena. Such complexity demands a more
rigorous mode of research that is capable of dealing with the complications of
socio-educational experience. Such a critical form of rigor avoids the reductionism of many
monological, mimetic research orientations (Kincheloe, 2001, 2005).
Both P and Q focus on the various ways people approach experiencing a phenomenon,
Table 1. Complementary use of P and Q in proposed case study of Singaporean teachers

Phenomenographic methods (P)
- Focus research questions:
  RQ1: What are Singaporean case-study teachers’ conception(s) and practice(s) of classroom assessment?
  RQ2: How can the case-study teachers’ conceptions and practices be understood in terms of their mediating influences or dimensions?
  RQ3: What are the relationships between the case-study teachers’ conceptions and practices of classroom assessment?
- Intention: Focus on identifying critical differences of conceptions and practices based on the minimum number of qualitative features or dimensions necessary for drawing intentions of classroom assessment.
- Data collection: Series of individual interviews and lesson observations (spanning seven months).
- Analysis: Based on aggregated responses of interview and lesson observation data across case-study teachers.

Q methodology (Q)
- Focus research questions:
  RQ1a: What are Singaporean case-study teachers’ conceptions of classroom assessment (vis-à-vis a bigger group or case of Singaporean teachers)?
  RQ4: How do the conceptions of case-study teachers relate to a bigger group or case of teachers?
- Intention: Focus on searching for and validating the logical relationships between different conceptions of classroom assessment among the case-study teachers AND a bigger group of comparable teachers.
- Data collection: Administration of Q-sort and interview.
- Analysis: Based on individual ranking of statements of the Q-set.
their ideas and their various ways of thinking and acting, with the aim of presenting how
these relate to one another. By examining the differences in the structure of awareness of
experiences through these two ways of knowing teachers’ conceptions, I hope the
phenomenon of ‘classroom assessment’ could be better understood from different teachers’
perspectives. The benefits of such methodological triangulation include “increasing
confidence in research data, creating innovative ways of understanding a phenomenon,
revealing unique findings, challenging or integrating theories, and providing a clearer
understanding of the problem” (Thurmond, 2004, p. 254). These benefits largely result from
the diversity and quality of data that can be used for analysis.
Up to now, I have not noted any research that has made use of P and Q in
complementary ways as a means of methodological triangulation for ensuring validity of
findings. Patton (2002) cautions that it is a common misconception that the goal of
triangulation is to arrive at consistency across data sources or approaches; in fact, such
inconsistencies may be very likely indeed, given the relative strengths and weaknesses of
different approaches of data collection. In Patton’s view, these inconsistencies should not be
seen as weakening the evidence, but as an opportunity to uncover deeper meaning in the
data.
I see many opportunities for this in the complementary use of P and Q. Inconsistencies
across data sets, I felt, would be powerful resources for interrogating the differences of
ways of knowing and furthering theoretical and methodological insights. My use of
triangulation would be a way to enhance the coherence of the research strategy in responding
to the different relevant research questions, while acknowledging the differences and
similarities in the approaches to studying teachers’ conceptions of classroom assessment
practices. Table 1 summarises the research questions and the complementary use of P and
Q in the proposed case study of Singaporean teachers.
Both P and Q attempt to study a sample of conceptions and practices from a universe
of ‘outcome space’ (P) and ‘concourse’ (Q) of conceptions of classroom assessment
practices. This sampling process is different from the conventional method of sampling in
either of the qualitative and quantitative research. The emphasis is on diversity of
respondents selected for study rather than on representativeness of population. There is also
an attempt to ensure that findings from the smaller number of case-study teachers are
compared with a larger group of teachers. It is not possible within the scope of this study
to study the practices of a larger group of teachers without observing all them in their
classrooms; hence Q can be said to be a proxy and triangulation method for studying the
practices of a larger group of Singaporean teachers.
III. Methodology
In the main study, six case-study teachers were selected from a pool of ‘high-achieving’
Singaporean teachers who had been successfully admitted to a well-subscribed Master of
Education course offered by the only teaching college in Singapore. The entry requirements
for admission to this Masters programme served as important criteria for selecting
case-study teachers for the research. These teachers have a good, if not exemplary, teaching
track record and are also likely to hold leadership roles in their schools. The teachers
were observed for at least 12 lessons over a period of seven months in order to ensure that
lessons prior to both the examination and non-examination season of the school terms
were observed. In addition, the case-study data collection included at least four to five
hours of individual interviews with each of the teachers (inclusive of the short pre- and
post-lesson chats) based on questions derived from a phenomenographic framework of
interview and observation. At the end of the lesson observation and interview sequence, the
teachers were asked to complete a Q-sorting of 45 statements on classroom assessment to
compare their conceptions of classroom assessment with those of 35 comparable
Singaporean teachers.
1. Q-Sort
Invitations to participate in the Q-sorting were sent out in September 2011 to 41
participants (inclusive of the six case-study teachers) of the pilot phase of the Q-sorting
earlier in February 2011. The timing of the administration of this repeat sorting
corresponded to the final phase of lesson observation and interview with the six case-study
teachers. The Q-sorting provided a final and summary check on the case-study teachers’
conceptions of classroom assessment, in relation to a larger group of teachers. It also
provided me with an opportunity to check that the concurrent analytical coding process
had contributed to a meaningful set of statements of classroom assessment that all the
teachers could look at and hopefully meaningfully sort in accordance with their current
practices. The main reason for inviting the same group of participants was that I preferred
to work with a group of teachers who were already familiar with the Q-sort procedure, and
that I could still refer to their previous sortings for clarification if necessary, especially if
there seemed to be major discrepancies in their sortings. I was also satisfied that the 41 teacher
participants were teaching different subjects and at different levels of seniority and positions
in schools (see Table 2).
The participants were invited to do the Q-sorting on a web-based version of Q-sort of 45
statements (instead of the previous 25 statements). These changes were made in accordance
with the findings of an earlier pilot, which found that the number of statements was
insufficient, that certain statements were confusingly phrased, and that some teachers were
intimidated by my presence while they were sorting. Based on Alexander’s categorisation of
classroom practices (1992) and the phenomenographical dimensions of a conception
(Marton & Booth, 1997), I generated a 5 x 9 Fisherian structural grid (Brown, 1980) to
ensure that there was a good sampling of 45 statements of classroom assessment practice
that could be used for the Q-sorting process. Extracts of Q-statements from each category
and dimension of classroom assessment are illustrated in Table 3.
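The balanced-block logic of such a grid can be sketched as below. The category and dimension labels follow Table 3; the replication count of three statements per (category, dimension) cell is my assumption, chosen only so that the cells total 45, and the snippet is an illustration of the Fisherian design principle rather than the actual statement-generation procedure.

```python
from itertools import product

# Hypothetical sketch of a Fisherian balanced-block design (after Brown, 1980):
# every (category, dimension) cell of the grid receives the same number of
# statements, so the 45-item Q-set samples the concourse evenly.
# The replication count of 3 per cell is an assumption, chosen so the cells
# total 45 (5 categories x 3 dimensions x 3 statements).
categories = ["political", "cultural", "pragmatic", "conceptual", "empirical"]
dimensions = ["direct object", "conceptualised acts", "intentions"]
replications = 3

grid = [(c, d, i) for c, d in product(categories, dimensions)
        for i in range(1, replications + 1)]
print(len(grid))  # number of statement slots in the Q-set
```

The value of the balanced structure is that no category or dimension of classroom assessment is over-represented in the concourse sample that participants sort.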
A web-based version of the Q-sorting was designed between September and November
2011 to offer teachers the privacy and convenience of sorting the statements in their own
time. This could be particularly useful as the teachers were asked to sort more cards, which
I considered to be much more difficult than the Q-sort in the pilot study. Their familiarity
with the Q-sorting process was very helpful, and I expected it to minimise any need for me
to give further assistance or guidance (as compared to inviting another group of teachers). In
designing the website for web-based Q-sorting, I have referred to a number of studies that
utilised either electronic surveys or online Q-platforms (e.g. Birnbaum, 2004; Rimm-Kaufman
Table 2. Teaching Profile of N=41 teachers by type of school, years of teaching and subject taught
Educational level taught
Primary Secondary
Total no. of teachers 18 23
Type of school Government 10 7
Aided 4 3
Autonomous NA 2
Independent NA 5
MOE HQ 2 3
University 1 4
No. of Years <9 Years 6 7
10-14 Years 8 10
15-19 Years 1 2
>19 Years 2 2
Subject taught English 11 3
Maths 10 5
Music 3 8
Science 5 4
Humanities - 3
PE/Character/Leadership - 2
Positions Teacher 7 7
Head of Department 3 4
Subject Head/Level Head 3 3
Vice Principals 1 2
MOE Officers 2 3
Lecturers 1 4
& Sawyer, 2004), and consulted several computer programmers. I was particularly assured
by the findings of Reber, Kaufman and Cropp (2000) that there was no apparent difference
in the reliability or validity of the manual and online methods of administration of Q-sort.
Of the 41 teachers, five teachers (including two case-study teachers) decided to sort twice
based on subject differences or different positions that the teachers held in the school.
There were altogether 46 Q-sorts collected by the end of November 2011.
2. Q-Analysis
The 46 Q-sorts were inter-correlated and factor-analysed using the dedicated computer
packages, PQ Method (Schmolck, 2012) and PCQ (Stricklin, 2004), rather than SPSS, as
advised by the Q-experts at the 2011 Q-conference. The reason for analysing the data using
Table 3. Developing the concourse of statements on classroom assessment practices

Each category below pairs with three dimensions of a conception: DIRECT OBJECT (What is classroom assessment?), CONCEPTUALISED ACTS (How is classroom assessment acted out?) and INTENTIONS (Why is classroom assessment acted out as such?).

POLITICAL: What classroom assessment practices should I abide by in accordance with what my school/MOE leaders advocate/do not advocate?
- Direct object: Assessment is about evaluation of my school’s curriculum/programme.
- Conceptualised acts: I give emphasis to reporting achievements and trends of student performances using the school’s evaluation instrument.
- Intentions: Assessment is for accounting progress of teaching/learning to my school leaders.

CULTURAL: What classroom assessment practices do I know are traditionally understood and favoured in my culture?
- Direct object: Assessment is all the quizzes, tests and examinations for the subject(s) I teach.
- Conceptualised acts: I believe in making use of tests and quizzes to motivate my students to work harder in my class.
- Intentions: Assessment can help my students to learn important values that will be invaluable beyond schooling.

PRAGMATIC: What classroom assessment practices should I attend to that are important to me in my classroom?
- Direct object: Assessment is all the homework/classwork that I need to take place after or during each lesson.
- Conceptualised acts: It is important that I return the marked homework in a timely fashion.
- Intentions: Assessment should be practical and implementable (e.g. it does not excessively increase my workload).

CONCEPTUAL: What should classroom assessment practices be ‘made up’ of, as I was informed or thought about?
- Direct object: Assessment is the different ways of understanding and responding to my students’ learning.
- Conceptualised acts: I try to ask questions and give feedback to students based on the learning goals of my lessons.
- Intentions: The forms or strategies of assessment would be different according to the purpose of assessment in my class.

EMPIRICAL: What classroom assessment practices should be most effective, as I was informed about by research?
- Direct object: Assessment is the different ways of researching information about teaching and learning to find ways of improvement.
- Conceptualised acts: I believe in conducting research in my class to determine the effectiveness of different approaches of assessment.
- Intentions: I prefer to practise assessment based on effective practices as reported in research findings or professional training.
two software programmes was to provide opportunities to counter-check for any
outstanding discrepancy in the proposed solutions (that was generated using slightly
different statistical algorithms within and between the two software programmes). In the
earlier pilot study, there was a concern that I received “ill-informed advice from the use of
SPSS” (S. Brown, Q Listserve posting, dated 25 June 2011). Interestingly, initial analysis
suggested that there was a discrepancy in the outputs in the extraction of factors in PQ
Method using Principal Component Analysis (PCA) and Centroid extraction method. Upon
counter-checking with the outputs in PCQ, checking with the maker of the software and
other experts through the Q Listserve, the Q Listserve community agreed that there was a
mistake in the use of Brown’s (1980) algorithm in PQ Method (Q Listserve email threads,
dated 22 February 2012). This was subsequently corrected by Schmolck (2012) when a
revised version of PQ Method was circulated on 23 May 2012. The discovery of this
discrepancy was shared in a Q-seminar at the University of London (2012), and also
prompted two experts to write an article (in-press) about the comparative study of using
PCQ and PQ Method in the factor analysis of Q-sorting (email with P. Schmolck, dated 24
February 2012). After a series of email discussions with the various Q experts on my data
set, I was satisfied that a four-factor solution could be extracted. Abductive logic (Watts &
Table 4. Summary of table parameters of 4-factor solution
Final 4-factor solution
1 2 3 4
Eigenvalue 3.38 16.51 2.91 2.35
No. of significant sortings 10 14 4 5
% of total variation explained 17 19 9 10
Stenner, 2012) also played a prominent role in helping me to decide that a 4-factor solution
was satisfactory. This was done by comparing the factor arrays of different numbers of factor
solutions (comparing 1- to 6-factor solutions) with demographic information and
participants’ comments in the post-sorting open-ended question component of the Q-sorting.
The essence of Q’s analysis is to group similar sortings together to understand the
relationships within the sorting pattern. This provides the means to see how unique or
similar the conceptions of classroom assessment are within the case-study teachers, as well
as within a larger group of teachers. The analysis yielded a total of four factors, each
containing at least three teachers’ sortings. Factor loadings of ±0.52 or above were
significant at the p < 0.01 level, and the factors accounted for 55 per cent of the sorting variation.
The statistical conditions for the isolation of factors were based on the Kaiser-Guttmann
criterion of keeping factors’ eigenvalues at 1.00 or above, and the use of Cattell’s scree test.
Of the 46 Q sorts, 33 loaded significantly on one or other of these 4 factors. Table 4
displays the factor solutions, the accompanying eigenvalues and the cumulative percentage
of total variation explained by the factor solutions.
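The Kaiser-Guttman step can be sketched roughly as follows. This is an illustrative mock-up, not the study's data nor the PQ Method/PCQ implementations: the ten Q-sorts are randomly generated purely to show how eigenvalues of the person-by-person correlation matrix are computed and how factors with eigenvalues at or above 1.00 are counted for retention.

```python
import numpy as np

# Illustrative sketch (not the study's data, nor PQ Method/PCQ): apply the
# Kaiser-Guttman criterion by computing the eigenvalues of the person-by-person
# correlation matrix and retaining factors with eigenvalue >= 1.00.
rng = np.random.default_rng(0)
sorts = rng.integers(-4, 5, size=(10, 45))  # 10 mock Q-sorts of 45 statements

r = np.corrcoef(sorts)                       # correlations between whole sorts
eigenvalues = np.linalg.eigvalsh(r)[::-1]    # sorted largest-first

retained = int(np.sum(eigenvalues >= 1.0))   # Kaiser-Guttman retention count
print(retained, np.round(eigenvalues[:4], 2))
```

In practice the criterion is tempered, as in this study, by a scree test and by abductive comparison of candidate solutions, since a purely mechanical eigenvalue cut-off can over- or under-extract factors.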
A factor array shows the merging of similar Q-sorts into an ‘ideal-typical’ Q-sort (Stenner
et al., 2003), so that the Q-sorts from all participants who had sorted similarly are
represented in a single factor. Table 5 shows an extract of the 4-factor arrays, as well as the
card statement number and the words of each statement. It shows that statements 1 and
24 did not distinguish the factors (‘consensual sorts’), as these statements were consistently
sorted in the -1/-2 (less agreeable) columns or the +1/+2 (more agreeable) columns. In
contrast, certain statements, including 18, 3, 25 and 44 (‘distinguishing sorts’), were able to
distinguish factors 1 to 4 respectively. To assist in the interpretation, I have chosen to
include as many statements as possible that are ranked higher or lower in each factor
vis-à-vis the other factors. So, for instance, statement 3 may not be very highly ranked in
Factor 2 (0), but is considered high in relation to the other factors (-1, -3, -1).
Table 5. Extract of factor arrays of the 4-factor solution

Card No. | Statement | F1 | F2 | F3 | F4
1 | Assessment is about evaluation of my school's curriculum/programme. | -1 | -1 | -1 | -2
24 | I believe in taking time to understand how my students learn and think of ways to help them learn. | +2 | +2 | +1 | +2
18 | I will focus on what is being tested/examined in CA/SA, PSLE, N or O-Level (or equivalent qualification) before attending to other matters in my class. | +3 | -4 | -1 | -4
3 | Assessment includes the homework/classwork and tests/quizzes that need to happen after each lesson or unit of lessons. | -1 | 0 | -3 | -1
25 | I believe in conducting research in my class (e.g. action research) to determine the effectiveness of different approaches of assessment. | -2 | -2 | +2 | -2
44 | There is a gap between what I believe or know about assessment and what is actually happening in my classroom. | +1 | -3 | -2 | +2

The eventual derivation of the factor arrays reflects the relationship of each Q-sorting configuration with the other Q-sortings (rather than just the atomistic relationship of each statement item with the others). In this way, the analysis seeks to gain a holistic view of the relationships among teachers' conceptions of classroom assessment.
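The distinction between consensual and distinguishing statements can be illustrated with a small sketch. The cut-off used here (a maximum spread of two ranking positions across factors) is a hypothetical heuristic chosen for illustration only; Q software such as PQMethod flags distinguishing statements with proper significance tests rather than a fixed cut-off.

```python
def classify_statements(arrays, spread=2):
    """Split statements into consensual and distinguishing sets.

    A crude heuristic for illustration: a statement whose rankings
    differ by no more than `spread` positions across all factors is
    treated as consensual; a statement that at least one factor ranks
    far from the rest is treated as distinguishing.
    """
    consensus, distinguishing = [], []
    for stmt, ranks in arrays.items():
        if max(ranks) - min(ranks) <= spread:
            consensus.append(stmt)
        else:
            distinguishing.append(stmt)
    return consensus, distinguishing

# Rankings across Factors 1-4, taken from the Table 5 extract.
arrays = {
    1:  [-1, -1, -1, -2],
    24: [+2, +2, +1, +2],
    18: [+3, -4, -1, -4],
    3:  [-1,  0, -3, -1],
    25: [-2, -2, +2, -2],
    44: [+1, -3, -2, +2],
}
consensus, distinguishing = classify_statements(arrays)
print(consensus)        # → [1, 24]
print(distinguishing)   # → [18, 3, 25, 44]
```

Even under this rough heuristic, the split reproduces the pattern reported above: statements 1 and 24 are consensual, while 18, 3, 25 and 44 distinguish the factors.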
IV. Interpretations of Factors
The four factors that emerged from the analysis are summarised in the respective quadrants of Figure 1. Each factor is identified by the statements of classroom assessment with which its group of teachers tended to agree more, or less, than the teachers in the other factors, based on their sortings. All the factors shared agreement over some statements; these are referred to as consensual statements and are positioned in the middle of Figure 1.

Figure 1. Four patterns of sorting of statements of classroom assessment revealed by factor analysis

The presence of these consensual statements could explain why some teachers' sortings could not be identified with any one factor, as in the case of one of the case-study teachers, Mei Lan. It must be noted that a considerable number of participants' sortings (13) were not statistically significant. These sortings cannot be clearly categorised under a single factor, but 'share' features of two or more factors.

For the other five case-study teachers, their sortings could each be identified with one of the factors. In the case of Pei Pei, who sorted twice, her sortings were identified with Factors 2 and 4. Data from the interviews and observations of the case-study teachers were also considered in triangulating the interpretations. There were some contradictions between the interpretation of the case-study teachers' sortings and the interview findings and lesson observations; the statements concerned were identified as problematic and are indicated in italics in Figure 1. The relatively low rankings of statements 20-23 in Factor 3, for instance, suggest that these teachers were not practising formative assessment strategies frequently in the classroom, compared with what the teachers in Factor 2 claimed.
This may reflect over-cautiousness on their part in their sorting: based on the interviews and lesson observations, I would have expected the case-study teacher Alisha, for instance, to articulate stronger support for her use of formative assessment practice in the sorting (and the contrary for Elsie, Ryan and Pei Pei in Factor 2). So, like the other teachers in Factor 3, Alisha may actually know more about formative assessment than the teachers in the other factor groups, but through the sort she was espousing that she could only practise certain aspects of classroom assessment (statement 45: -2, -1, 1, 0). It could be that 'knowing more' predisposed her to indicating that she had not adequately practised formative assessment. Like the teachers in the other factors, Alisha disagreed that many classroom assessment decisions were already decided by the school (statement 26: -2, -2, -1, -1), and that there was a limit to how much agency teachers can exercise in formative and summative assessment decisions. However, this espoused agency was not what I had observed, nor what I had heard in her reflections on how she was constrained by the school's imposition of tests and examinations. This again points to how the Q-sorting here could be a partial or even inaccurate reflection of an individual teacher's actual practice, relative to other teachers.

Figure 2. Relative position of each factor (F1-4) with respect to their leaning towards each category of statements of classroom assessment
For further comparison of the factor-array indices of each sub-group of teachers, the indices of the factors were represented as radial graphs. These visualisations of the sortings provided further insight into how the various factors can be considered comparable to, or distinct from, one another. Figure 2 illustrates how the different conceptions can be positioned visually, by approximating the position of the centroid of each factor's radial graph in relation to the five categories of assessment practices.
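The idea of positioning each factor by the centroid of its radial graph can be sketched as follows. The example scores are hypothetical and the five categories are unnamed here; the actual positions in Figure 2 were derived from the study's factor arrays across the five categories of assessment practices.

```python
import math

def centroid(scores):
    """Centroid of a radial (spider) graph.

    Each of the n categories sits on a unit circle at equal angular
    spacing; a factor's score on a category scales that point outward.
    The centroid is the mean of the resulting points: a factor scoring
    evenly across all categories lands near the origin (the 'centre'
    discussed below), while a factor dominated by one category is
    pulled towards that category's position.
    """
    n = len(scores)
    x = sum(s * math.cos(2 * math.pi * i / n) for i, s in enumerate(scores)) / n
    y = sum(s * math.sin(2 * math.pi * i / n) for i, s in enumerate(scores)) / n
    return x, y

# An evenly spread profile sits at the centre of the graph...
print(centroid([1, 1, 1, 1, 1]))   # → approximately (0.0, 0.0)
# ...while a profile dominated by the first category is pulled towards it.
print(centroid([5, 1, 1, 1, 1]))   # → approximately (0.8, 0.0)
```

On this reading, a centroid near the origin corresponds to a conception pulled roughly equally by all five categories, which is the situation described for Factors 1 and 2 in the paragraph that follows.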
The nearer a conception group is to a particular category, the more the teachers subscribing to that group may be influenced by that specific category, or way of thinking about assessment. I propose that the conceptions positioned nearer the centre (i.e. Factors 1 and 2) will be the most challenging for teachers, as they have to address more conceptualisations of assessment, possibly resulting in the 'cognitive overload' of responding to multiple assessment purposes and requirements. On the other hand, teachers holding such conceptions of classroom assessment may have successfully negotiated the ambivalence and uncertainties of responding to the various demands of assessment, and may represent what many teachers experience in their actual practice within the context of Singaporean classrooms. Their conceptions and practices relating to assessment would be of great interest to the teaching fraternity and policy-makers.
The Q-analysis also showed that the various conceptions of classroom assessment were not dichotomous and, in fact, suggested a high degree of normative overlap in the teachers' views. For instance, despite the differences in sorting, the majority of the teachers seemed to agree strongly that all forms and strategies of classroom assessment should help their students to learn; the issue is what kind of learning they privilege. This suggests that what a statement of classroom assessment is supposed to signify may vary considerably depending on the different priorities in operation, rather than on a single category of influence (e.g., policy directives or professional development). Overall, the Q findings underscore the conceptual complexity of teachers' conceptions of classroom assessment, in a way that suggests that different teachers' beliefs and values relating to teaching and learning undergird their conceptions and practices of classroom assessment. Notably, while a theoretical distinction between formative and summative assessment can be made, in practice classroom assessment activities are already so integrated, or entangled, in a teacher's day-to-day work that insisting teachers learn and practise each according to either formative or summative purposes and principles may remain just an ideal. This range of conceptions of practices of classroom assessment suggests that formative and summative assessment may not be distinguishable in practice, particularly if most teachers tend to use a variation or combination of the two in class. What may be of greater interest for teachers is how the two types can interact more productively: how a confluence of formative and summative assessment within the continuum of conceptions and practices of classroom assessment can be addressed on a day-to-day basis.
V. Discussion
Within the context of this study, the epistemological and ontological positioning,
methodology and choice of methods were all informed by the need to know the
conceptions and practices of a small group of Singaporean teachers. By locating classroom
assessment in teachers’ everyday classroom worlds of ‘practical reasoning’, I am
problematising the obviousness and ‘normalcy’ of policy and research recommendations of
a singular kind of ‘formal knowledge’ in classroom assessment that teachers need. I do not
assume teachers’ conceptions and practices of classroom assessment would be
unproblematic or straightforward. By pursuing the research questions through a very
purposeful use of descriptive case studies, and utilising a complementary data-collection process involving P and Q, I hope to understand the particular discursive space that the Singaporean case-study teachers occupied in their classrooms. The focus of the research design was on working with a small group of teacher-participants to develop credible accounts of their conceptions and practices, with the aim of developing tentative theory for future research in this area. In emphasising an interest in classroom assessment as subjectively problematic, I am assuming that there are methodological challenges in using a singular, monological lens of research (Kincheloe, 2001, 2005). Such a way of knowing also highlights the need to bring the worlds of policy-makers, researchers and teachers together to create genuinely new thinking across researcher and practitioner communities. When policy-makers and school leaders in particular are aware of the different conceptions and practices of classroom assessment, they can develop and adapt policies and research findings that are more responsive to specific inconsistencies between their teachers' practices and conceptions, and this awareness can lead them to define more clearly the different possibilities of 'balancing' formative and summative assessment.
The Q-sort analysis provided evidence of the different conceptions of classroom assessment; it is important to note, however, that the interpretation of the sorting patterns may not necessarily cohere with the earlier findings from the extended interviews and lesson observations of the case-study teachers using P. The limitation, and strength, of Q is that it pursues 'snapshots', or temporarily frozen images, of a connected series of reasonings behind the sortings, without knowing the actual practices. It is therefore possible that an individual's sorting is not consistent with his or her practice, or that it changes with changing circumstances. For the case-study teachers, I am more sensitive to such possible changes or discrepancies between their sortings and their conceptions and practices, after spending an extended period of time interviewing them and observing their lessons. Considering that the viewpoints of a group of individuals should remain relatively stable and consistent compared with an individual's view (Thomas & Rhoads, personal communication, 2011), it is less significant that an individual's sorting may be inconsistent.
What is decisive from this Q-sorting is less 'Who said what about classroom assessment?' or 'How do we know that what is being said is a fair representation of their classroom assessment practice?' Rather, it is simply 'What is currently being said about classroom assessment?' by different teachers at a particular point. It is not the intention of Q to generalise about why teachers may have similar or different viewpoints. It is about what viewpoints are present, which could be due to various possible circumstances or positions of the participants, and which can only be discovered through more extended interactions, such as the use of phenomenographic methods in the case studies. At the organisational level, collective awareness of dissonance between the varying conceptions and actual practices of teachers can become a very powerful catalyst for school self-evaluation, organisational learning and change. The knowledge and values of formative assessment, for instance, can produce a change-provoking disequilibrium (Woolfolk Hoy et al., 2009) that stimulates both teacher and organisational learning. However, this can only happen if there is sufficient awareness of teachers' conceptions and practices of classroom assessment, which the current bricolage of P and Q attempted to develop in an exploratory and tentative way. I conclude that a critical form of knowing teachers' conceptions of practices through the use of both P and Q can potentially avoid the reductionism and proceduralism of an otherwise monological and mimetic research design.
References
Alexander, R. J. (1992). Policy and practice in primary education: Local initiative, national
agenda. London: Routledge.
Anderson, C., Avery, P. G., Pederson, P. V., Smith, E. S., & Sullivan, J. L. (1997). Divergent
perspectives on citizenship education: A Q-method study and survey of social
studies teachers. American Educational Research Journal, 34(2), 333-364.
Ashworth, P., & Lucas, U. (2000). Achieving empathy and engagement: A practical approach
to the design, conduct and reporting of phenomenographic research. Studies in
Higher Education, 25(3), 295-308.
Birnbaum, M. H. (2004). Human research and data collection via the Internet. Annual Review
of Psychology, 55, 803-832.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education,
5(1), 7-73.
___________________ (2005). Lessons from around the world: How policies, politics and
cultures constrain and afford assessment practices. The Curriculum Journal, 16,
249-261.
Brown, S. R. (1980). Political subjectivity: Applications of Q methodology in political science.
New Haven, CT: Yale University Press.
___________ (1993). A primer on Q methodology. Operant Subjectivity, 16(3/4), 91-138.
___________ (1995). Q methodology as the foundation for a science of subjectivity. Operant
Subjectivity, 18, 1-16.
___________ (1997). The history and principles of Q methodology in psychology and social
sciences. Kent, OH: Department of Political Science, Kent State University.
Chew, M. M. (2009). The theoretical quandary of subjectivity: An intellectual historical note
on the action theories of Talcott Parsons and Alfred Schutz. Review of European
Studies, 1(1), 23-34.
Curt, B. C. (1994). Textuality and tectonics: Troubling social and psychological science.
Buckingham: Open University Press.
Edwards, D., & Potter, J. (1992). Discursive psychology. London: Sage Publications Limited.
Harris, L. (2010). Delivering, modifying or collaborating? Examining three teacher conceptions
of how to facilitate student engagement. Teachers and Teaching, 16(1), 131-151.
Heng, S. K. (2011). Opening address by Mr Heng Swee Keat, Minister for Education. Presented
at the Ministry of Education (MOE) Work Plan Seminar, Ngee Ann Polytechnic
Convention Centre.
Kincheloe, J. L. (2001). Describing the bricolage: Conceptualizing a new rigor in qualitative
research. Qualitative Inquiry, 7(6), 679-692.
___________ (2005). On to the next level: Continuing the conceptualization of the bricolage.
Qualitative Inquiry, 11(3), 323-350.
Leong, W. S., & Tan, M. M. G. (2010, September). ‘Holistic assessment’ in Singapore primary
schools: snapshots of Singapore primary school teachers’ conceptions and practices
of classroom assessment. Paper presented at the International Association of
Educational Assessment Conference, Bangkok, Thailand.
Lim, C. (2010). Understanding Singaporean preschool teachers’ beliefs about literacy
development: Four different perspectives. Teaching and Teacher Education, 26(2),
215-224.
Marton, F. (1981). Phenomenography―Describing conceptions of the world around us.
Instructional Science, 10(2), 177-200.
________ (1986). Phenomenography: A research approach to investigating different
understandings of reality. Journal of Thought, 21(3), 28-49.
_______ (1988). Describing and improving learning. In R. R. Schmeck (Ed.), Learning strategies
and learning styles (Vol. 53, pp. 53-82). New York: Plenum Press.
________ (1992). Phenomenography and “the art of teaching all things to all men’’.
International Journal of Qualitative Studies in Education, 5(3), 253-267.
_______ & Booth, S. (1997). Learning and awareness. New York: Lawrence Erlbaum.
MOE. (2005). Greater support for teachers and school leaders. Singapore Government Press
Release, 22 September 2005.
_____ (2013). Towards learner-centred and balanced assessment. [Brochure]. Ministry of
Education Internal Circulation Document.
Rimm-Kaufman, S. E., & Sawyer, B. E. (2004). Primary-grade teachers’ self-efficacy beliefs,
attitudes toward teaching, and discipline and teaching practice priorities in relation
to the “responsive classroom” approach. The Elementary School Journal, 321-341.
__________________, Storm, M. D., Sawyer, B. E., Pianta, R. C., & LaParo, K. M. (2006). The
teacher belief Q-sort: A measure of teachers’ priorities in relation to disciplinary
practices, teaching practices, and beliefs about children. Journal of School
Psychology, 44(2), 141-165.
Patton, M. Q. (1988). Paradigms and pragmatism. In D.M. Fetterman (Ed.), Qualitative
approaches to evaluation in education: The silent scientific revolution (pp. 89-115).
New York: Praeger.
Reber, B. H., Kaufman, S. E. & Cropp, F. (2000). Assessing Q-Assessor: A validation study
of computer-based Q sorts versus paper sorts. Operant Subjectivity, 23(4), 192-209.
Säljö, R. (1997). Talk as data and practice―a critical look at phenomenographic inquiry and
the appeal to experience. Higher Education Research & Development, 16(2),
173-190.
Schmolck, P. (2012). PQ Method. Retrieved from http://schmolck.userweb.mwn.de/qmethod/
Schutz, A. (1966). The problem of transcendental intersubjectivity in Husserl. In Collected
papers: Studies in phenomenological philosophy (Vol. 2, pp. 51-84). The Hague:
Martinus Nijhoff. Retrieved from http://secure.pdcnet.org/schutz/content/
schutz_2010_0002_0013_0043.
Shavelson, R. J., Webb, N. M., & Burstein, L. (1986). Measurement of teaching. In M. Wittrock
(Ed.), Handbook of research on teaching (pp. 50-91). New York: Macmillan.
Stainton Rogers, W. (1991). Explaining health and illness. UK: Prentice-Hall.
Stainton Rogers, R. (1995). Q methodology. In J. Smith, R. Harre & L. Van Langenhove (Eds.),
Rethinking methods in psychology (pp. 178-192). New York: Sage.
Stenner, P. H., Cooper, D. & Skevington, S. M. (2003). Putting the Q into quality of life; the
identification of subjective constructions of health-related quality of life using Q
methodology. Social Science & Medicine, 57(11), 2161-2172.
_______ & Stainton Rogers, R. S. (2004). Q methodology and qualiquantology. In Z. Todd,
B. Nerlich, S. Mckeown, & D. Clark (Eds.), Mixing methods in psychology (pp.
101-117). East Sussex, Great Britain: Psychology Press.
Stricklin, M. (2004). PCQ. Retrieved from http://www.pcqsoft.com/
Van Eeten, M. (2000). Recasting environmental controversies: A Q study of the expansion of
Amsterdam Airport. In H. Addams, & J. Proops (Eds.), Social discourse and
environmental policy: an application of Q methodology (pp. 41-70). Cheltenham,
UK: Edward Elgar.
Watts, S. & Stenner, P. (2005). Doing Q methodology: Theory, method and interpretation.
Qualitative Research in Psychology, 2(1), 67-91.
________ & ________ (2012). Doing Q methodological research: Theory, method &
interpretation. London: Sage Publications Limited.
Woolfolk Hoy, A., Hoy, W. K., & Davis, H. A. (2009). Teachers’ self-efficacy beliefs. In K.
Wentzel & A. Wigfield (Eds.), Handbook of motivation in school (pp. 627-655).
Mahwah, NJ: Lawrence Erlbaum.
Woolgar, S. (1996). Psychology, qualitative methods and the ideas of science. In J. T. E.
Richardson (Ed.), Handbook of qualitative research methods for psychology and the
social sciences (Vol. 1, pp. 11-24). Leicester UK: BPS Books.