
Jl. of Computers in Mathematics and Science Teaching (2012) 31(4), 433-465

Peer Assessment Among Secondary School Students: Introducing a Peer Feedback Tool in the Context of a

Computer Supported Inquiry Learning Environment in Science

OLIA TSIVITANIDOU, ZACHARIAS C. ZACHARIA, TASOS HOVARDAS, AND APHRODITE NICOLAOU

University of Cyprus, Cyprus

In this study we introduced a peer feedback tool to secondary school students while aiming at investigating whether this tool leads to a feedback dialogue when using a computer supported inquiry learning environment in science. Moreover, we aimed at examining what type of feedback students ask for and receive and whether the students use the feedback they receive to improve their science related work. The participants of the study were 38 eighth graders, who used a web-based learning platform, namely the SCY-Lab platform along with its SCYFeedback tool, as well as one of its learning missions, titled the “Healthy pizza” mission. In doing so, students were assigned to create a healthy pizza while considering the nutritional value of the ingredients, diet-related health issues and the human digestive system, and daily exercise. The findings of the study revealed that whenever students requested feedback from peers, there was a great possibility of receiving it. Additionally, significant correlations were found between changes requested by peers concerning their learner products and changes proposed by peers for revisions. Overall, it appears that the beginnings of a fruitful feedback dialogue were there, but it seems that they were not enough to support a thorough dialogue throughout the intervention that could lead to having students revise their work.


Introduction

Despite the increasing view of learning as a participative activity (e.g., Barab, Hay, Barnett, & Squire, 2001; Kollar & Fischer, 2010), educators have been slow to react to the emergence of this new participatory culture (Jenkins, Clinton, Purushotma, Robison, & Weigel, 2006). In a participatory culture of learning students are expected to be actively engaged in the learning experience (Bosco, 2009), as well as in the assessment/feedback process (Tsivitanidou, Zacharia, & Hovardas, 2011). Kollar and Fischer (2010) have characterized peer assessment/feedback as an important component of this participatory culture of learning and an important component in the design of learning environments, such as computer supported inquiry learning environments, implementing this contemporary culture of learning.

In our work on peer feedback (the peer assessment outcome) we are interested in whether students interact and give each other feedback in participatory environments (e.g., computer supported inquiry learning environments) when provided the opportunity. In this study, we introduced a peer feedback tool, namely the SCYFeedback tool, to secondary school students while aiming at investigating whether this tool leads to a feedback dialogue in the context of a SCY (Science Created by You) mission. In SCY (de Jong et al., 2010) students are offered a participatory learning environment (Barab et al., 2000), SCY-Lab, which is populated with resources and tools required to carry out a SCY Mission. During a SCY Mission learners address general socio-scientific problems (e.g., how do we create a healthy pizza?) through collaborative and inquiry learning. They engage in processes of active learning, based on inquiry, knowledge building, and learning by design. Through these activities students learn by creating and exchanging what we refer to as ELOs (Emerging Learning Objects) (Chen, 2004; Hoppe et al., 2005). SCY ELOs include models (e.g., system dynamics models), concept maps, designed artifacts, data sets, hypotheses, tables, summaries, reports, and other types of artifacts (for details see de Jong et al., 2010). The ELOs are the vehicles through which a student can ask for feedback during the learning process, and from which the teacher can gain an understanding of the general science skills, social and presentation skills, and domain concepts the student has developed. Ronen and Langley (2004) pointed to the benefits of peer assessment when students are provided the opportunity to learn from artifacts (ELOs) created by their peers, and Falchikov (2003) showed how peer assessment assists students to create higher quality artifacts. Thus, in our approaches to assessment (Wasson, Vold, & de Jong, 2012) these ELOs are central. In particular, formative assessment is given during the Mission in the form of peer feedback on ELOs.


The purpose of this study was to investigate whether the SCYFeedback tool leads to a feedback dialogue among students, when working in the context of the “Healthy pizza” SCY mission, which requires students to create a healthy pizza while considering the nutritional value of the ingredients, diet-related health issues and the human digestive system, and daily exercise. The overall idea was to examine whether students engage, through a peer feedback tool, in a process of giving and receiving feedback for improving their own work and that of their peers.

THEORETICAL BACKGROUND

Peer Assessment

Peer assessment concerns the involvement of learners in making judgments about their peers’ learning products by using grades and written or oral feedback (Topping, 1998; Sung, Chang, Chiou & Hou, 2004). In other words, peer assessment can be either summative, thus concentrating on judging learning results to be correct or incorrect or assigning a quantitative grade, or formative, if it concentrates on in-depth qualitative assessment of different kinds of learning results (Topping, 2003). Formative peer assessment refers to any type of assessment conducted by a student, which provides instructive feedback to be used by the student in order to enhance his or her learning (Black & Wiliam, 1998; Xiao & Lucking, 2008). Formative peer assessment focuses on cognitive, social, affective and meta-cognitive aspects of learning, and it often applies a multi-method approach that leads to a more holistic profile instead of a single score (Strijbos & Sluijsmans, 2009). It aims at constructing a comprehensive picture of learners’ competencies, it is an integral part of the learning process and it takes place several times during a course rather than only at the end of it as in summative peer assessment (Xiao & Lucking, 2008).

The outcome of formative peer assessment is peer feedback, which is given during the learning process and aims at impacting the learning process as it develops (Sluijsmans, Brand-Gruwel & van Merriënboer, 2002; van Gennip, Segers, & Tillema, 2010; Xiao & Lucking, 2008). Peer feedback could be, for example, an opinion, a suggestion for improvements, or an idea. Giving and receiving feedback helps students to realize not only what they have achieved, but also how their work could be further developed (Tsivitanidou et al., 2011); thus, the assessment becomes part of the learning process (Frost & Turner, 2005). Frost and Turner (2005) explained that peer feedback is valuable because feedback is provided in ‘student-speak’ rather than ‘teacher-speak’ or ‘science-speak,’ and students may be more open to accepting it from peers.

Peer assessment and feedback have been investigated at primary (Harlen, 2007; Harrison & Harlen, 2006; Lindsay & Clarke, 2001), secondary (e.g., Noonan & Duncan, 2005; Tsai, Lin, & Yuan, 2002; Tsivitanidou et al., 2011) and higher education levels (e.g., Crane & Winterbottom, 2008; Davies, 2006; Gehringer, 2001; Lindblom-Ylanne, Pihlajamaki & Kotkas, 2006; Lorenzo & Ittelson, 2005; Purchase, 2000; Van Dyke, 2008). Researchers have shown that peer assessment and feedback enhance students’ learning across all these educational levels (Black & Wiliam, 1998; Kennedy, Chan, Fok, & Yu, 2008; Pellegrino, Chudowsky, & Glaser, 2001; Dysthe, Lillejord, Wasson, & Vines, 2009; Hattie & Timperley, 2007; Shute, 2008; Sluijsmans et al., 2002; van Gennip et al., 2010; Xiao & Lucking, 2008). The primary reason behind the positive effects of peer assessment is the fact that students are positioned in a participatory culture of learning during the assessment process (Dysthe, 2004; Swan, Shen, & Hiltz, 2006), in which students reflect not only on what they have achieved, but also on how it compares to that of their peers, as well as on how their work could be further developed. Moreover, according to Falchikov (1995), such a participatory learning experience could enable students to develop meta-cognitive awareness that leads to the development of skills required for professional responsibility, judgment and autonomy. Hence, it could reasonably be understood why several researchers and educators are in favour of such a practice.

However, it should be noted that peer assessment is not an easy procedure to implement. It requires exposing students to substantial training and practice (Birenbaum, 1996; Fallows & Chandramohan, 2001; Hanrahan & Isaacs, 2001; Sluijsmans, 2002; van Steendam, Rijlaarsdam, Sercu, & van den Bergh, 2010). This complexity comes as a result of the complex nature of assessment in general, which requires understanding the content of the material to be assessed, the assessment criteria to be used, and the most effective way of providing suggestions for improving one’s work without giving him/her ready-made answers/solutions.

Peer Assessment and Science Education

Peer assessment and its effects on students’ learning have also been examined in various science education studies, mainly at the university level (Crane & Winterbottom, 2008). For example, researchers found that peer assessment had a positive effect on undergraduate students’ learning and critical thinking skills (Tsai, Liu, Lin, & Yuan, 2001), as well as on their willingness to revise their science related work (Prins, Sluijsmans, Kirschner, & Strijbos, 2005; Tsai et al., 2002).

Positive influences of peer assessment on student learning have also been reported in secondary education (e.g., Black & Harrison, 2001). Tsivitanidou et al. (2011) have found that secondary school science students have the beginnings of the skills necessary for enacting peer assessment, even in the absence of support. More specifically, they found that secondary school science students were able to provide written comments (positive or negative comments and suggested changes) in their feedback. Many studies, across several subject domains, have shown that providing such comments to peers promotes one’s learning (Chen, Wei, Wu, & Uden, 2009; Paré & Joordens, 2008; Ploegh, Tillema, & Segers, 2009; Sluijsmans et al., 2002; Tseng & Tsai, 2007). In particular, it was found that the provision of reinforcing peer feedback (positive feedback on a peer’s work) was the factor that most positively impacted the quality of students’ work (Tseng & Tsai, 2007). On the other hand, research findings have indicated that peer feedback is constructive if it includes structural components such as suggestions for improvements and positive and negative judgments (Chen et al., 2009; Paré & Joordens, 2008; Ploegh et al., 2009; Sluijsmans et al., 2002; Tseng & Tsai, 2007).

Types of Peer Assessment

Peer assessment can be one-sided or reciprocal. In one-sided peer assessment the student undertakes either the role of the assessor or the role of the assessee, whereas in the case of reciprocal peer assessment, the student undertakes both roles. In the context of this study we used reciprocal peer assessment because we consider it a better context for students to learn, since they undertake both the role of the assessor and the assessee (for an example of a successful implementation of reciprocal peer assessment see Tsivitanidou et al., 2011). In this way students get the opportunity to reap the benefits of both roles.

Reciprocal Peer Assessment

Reciprocal peer assessment is the only type of peer assessment in which students undertake both the role of the assessor and the assessee. The role of the assessor requires students to assess their peers’ work/products and, thus, to produce feedback that often includes qualitative comments in addition to, or instead of, grades. After all participants have acted as assessors, the next stage involves the review of peer feedback and the revision of learner products. When participants switch roles and become assessees, they need to critically review the peer feedback they have just received, decide which revisions are necessary for the improvement of their work and proceed with making the corresponding changes. Of course, learning benefits can arise when receiving feedback from peers, but also during the phase of giving feedback to peers, since students could be introduced to alternative examples and approaches (Gielen, Peeters, Dochy, Onghena & Struyven, 2010).

Reciprocal peer assessment, and peer assessment in general, could be enacted both in a paper-and-pencil and a computer supported (e.g., web-based) context. However, numerous studies have shown that the computer supported context is the most effective (Davies, 2000; Sung et al., 2004; Tsai & Liang, 2009; Tsai et al., 2002). First, the timeliness needed in formative assessment can be enhanced significantly by online technology (Davies, 2000; Sung et al., 2004; Tsai & Liang, 2009; Tsai et al., 2002). Second, computer supported peer assessment systems can offer support/scaffolds during the assessment process. For instance, they can provide the students with pre-determined rubrics containing assessment criteria whenever needed. Third, computer supported peer assessment can also ensure the anonymity of participants, facilitate willingness to critique peer work (Wen & Tsai, 2006; Lu & Bol, 2007; Tseng & Tsai, 2007; Xiao & Lucking, 2008), and promote the effectiveness of peer assessment (Kollar & Fischer, 2010). Fourth, such systems can offer immense storage space, high processing speed, multimedia appeal, learner control, instant and personalized feedback, and multiple-branching capabilities (Heinich, Molenda, Russell, & Smaldino, 2002). Fifth, they could be provided to the students through the internet and thus take advantage of the benefits carried by the internet (e.g., worldwide connectivity and collaboration due to the absence of time and place restrictions, communicating and sharing their work with other students) (Yu, Liu, & Chan, 2005). Sixth, computer supported peer assessment systems could offer students better possibilities for organizing, searching and accessing information (Zhang, Cooley, & Ni, 2001).


This Study

Reciprocal peer assessment is usually enacted within formal constraints, meaning that there are specified points in a learning activity sequence at which students are prompted to implement peer assessment. In the context of this study we decided to remove these constraints and provided students with the freedom to enact peer assessment whenever they wanted. In this context, students are able to give and receive feedback from other students about their work, without any intervention from the teacher, while they are working on the same exercise or a different one. In this respect, peer assessment is reciprocal in nature but it does not necessarily require giving and receiving feedback from the same peer or group of peers. Students can give and get ideas from different peers and this feedback could be about the content, organization, appearance, grammar and other aspects of a learner’s product. Needless to say, students are free to discuss anything relative to their work. This way, they can improve their work and learn from each other. Finally, in such a context, neither assessment criteria are given to the students, nor are time restrictions placed on when to exchange feedback. Students are free to request feedback from peers and to give feedback whenever they see fit. In other words, the peer assessment implemented in this study was unsupported (e.g., no assessment criteria are provided), unstructured and unspecified time wise (students can initiate a “feedback dialogue” with their peers for learning purposes for the learning products that they wish to and whenever they feel the need to do so). The rationale behind this mode of peer assessment implementation was to identify what secondary school students really could do on their own when enacting an unsupported and unstructured peer assessment.

More specifically, the present study aimed at answering the following questions:

1. Does an unsupported and unstructured peer assessment, provided through the use of the SCYFeedback tool, lead to a feedback dialogue among students? When and how?

2. What type of feedback do the students ask for and receive?

3. Do students use the feedback they receive to improve their science related work (ELOs, e.g., reports, concept maps, models, tables)?

These questions were examined in the context of a science investigation. Specifically, we asked our participants to use the “Healthy pizza” SCY mission, which requires students to create a healthy pizza while considering the nutritional value of the ingredients, diet-related health issues and the human digestive system, and daily exercise. For providing peer feedback students were introduced to the SCYFeedback tool (for details on the SCYFeedback tool see the Methods part).

Such research is significant for the science education community in several ways. It could shed light on what kind of support students are requesting from their peers and what feedback their peers are capable of offering them in a science learning context. Moreover, it provides an insight into when students’ assessment capabilities fall short and, thus, the instances in which the teacher could intervene. Finally, it provides insights to web-platform developers as far as when the students are in need of scaffolding from the web platform.

METHODS

Participants

The participants were 38 eighth graders (14 year-olds), coming from two classes of a public school (Gymnasium) in Nicosia, Cyprus. The sample of the study was kept relatively small intentionally, because of the type and amount of data we needed to collect for answering the research questions.

Participation in the study guaranteed anonymity and that it would not contribute to students’ final grade. Researchers strongly argue that such an arrangement is important in order to allow students to express themselves freely about their peers’ work (Topping, 2009). However, in order to avoid a possible lack of motivation, since this study was disassociated from students’ final grade for methodological purposes, students who sincerely completed all the requirements of the study were promised a book of their preference and participation in an excursion.

None of the students had prior experience with peer assessment before the implementation of this study. Finally, the mode of work was primarily collaborative in nature. Students worked individually when ELOs that involved information of a personal nature were requested (e.g., daily calorie intake ELO, health passport ELO). The study’s curriculum defined the instances when students had to switch to an individual mode of work (see Appendix A).


Material

Throughout the course, students used learning material developed for the SCY (Science Created by You) project (de Jong et al., 2010). In SCY students are offered a participatory learning environment, SCY-Lab, which is populated with resources and tools required to carry out a SCY Mission. During a SCY Mission, learners address general socio-scientific problems through collaborative and inquiry learning. Through these activities students learn by creating and exchanging what we refer to as Emerging Learning Objects (ELOs) (Chen, 2004; Hoppe et al., 2005). SCY ELOs include models (e.g., system dynamics models), concept maps, designed artifacts, data sets, hypotheses, tables, summaries, reports, and other types of artifacts. The ELOs are the vehicles through which a student can ask for feedback during the learning process, and from which the teacher can gain an understanding of the general science skills, social and presentation skills, and domain concepts the student has developed. Thus, ELOs are central in our approaches to assessment (Wasson et al., 2012).

Under the “Healthy pizza” mission, the learning material carried by SCY-Lab required students to create 31 ELOs throughout the teaching intervention (for a list of all activities and ELOs produced during the “Healthy pizza” mission see Appendix A). The mission aimed at actively engaging students in the right choice of food products offered by their school’s canteen or cafeteria. In doing so, students were assigned to create a healthy pizza while considering the nutritional value of the ingredients, diet-related health issues and the human digestive system, and daily exercise. All ELOs were created using SCY-Lab’s tools (e.g., students used SCYMapper for creating concept maps) and were stored in the SCY-Lab platform. The ELO feedback was given using the SCYFeedback tool. The activity sequence of the learning material required students to pass through several steps. The purpose of this mission was to create a healthy pizza.

The mission was carried out by two science teachers, one per class, who previously attended preparatory meetings designed for the purposes of this study. These meetings focused on familiarizing the teachers with the content of the study, the procedures and methods, and the tools of the web-based platform. The meetings ran throughout the teaching intervention. Specifically, the teachers attended two three-hour meetings prior to this study and a one-hour meeting prior to each classroom meeting.


The SCYFeedback Tool

The SCYFeedback tool is a peer feedback tool, whose design and development was inspired by both theoretical and empirical underpinnings (for details see Wasson & Vold, in press). Using this feedback tool, students can (a) ask for feedback on their own ELO, (b) receive feedback on their own ELO, (c) browse a gallery of ELOs submitted for feedback, and (d) provide feedback on any ELO in the ELO gallery. While working on an ELO in SCY-Lab, a student can ask a question related to the ELO directly in the tool with which they are creating the ELO and submit it for feedback. Figure 1 shows the SCY-Lab environment where the student is creating a note taking table with personal information regarding his/her daily calorie intake during the Healthy pizza Mission.

Figure 1. Requesting feedback for a specific ELO.

Once the ELO has received feedback, the student receives notice and he/she can view the formative peer feedback. Similarly, students receive notice when another student has asked for feedback on an ELO. The student can then open the SCYFeedback tool, find the ELO in the ELO Gallery, and provide feedback. All feedback questions asked in SCY-Lab result in the ELO and its question being added to the SCYFeedback tool’s ELO Gallery (see Figure 2) of the most recently posted ELOs (i.e., those that are awaiting feedback). Figure 3 shows the ELO Feedback screen, where students can give or receive feedback on an ELO (for more details see Wasson & Vold, in press).


Figure 2. Logging in to the SCYFeedback tool to view peers’ ELOs in the ELO Gallery.

Figure 3. Giving feedback for a specific ELO of a peer.


Procedure

Before starting the study’s intervention, an introductory lesson about the SCY-Lab environment and its tools, including the SCYFeedback tool, was implemented. Right after, each student used the SCY-Lab platform on a computer in order to access the learning material, follow the activity sequence and complete the accompanying assignments/tasks. The mission unfolded in several steps. At the beginning the students were informed about the purpose of the mission and created their favorite pizza by selecting the ingredients of their choice. Following this, they were introduced to the concept of nutrients (carbohydrates, fats, proteins, vitamins, minerals, and water) and learned how to read and interpret nutritional value labels found on most food products. They familiarized themselves with versions of the food pyramid and compared their own diet with the daily nutritional needs of the human body. They then focused on exercise behaviour, the balance between the amount of calories ingested and the amount of energy spent based on the activities carried out throughout the day, in order to put together a health passport based on their own eating and exercise habits. Further on, they learnt about the digestive system and its function, and looked at the consequences of an unhealthy diet. Students used the optimization strategy in order to select the healthiest pizza ingredients and created several (virtual) pizzas, so as to find the healthiest option. Finally, they compared their pizzas with those of their peers and wrote a letter of advice to their school canteen or cafeteria providing a healthier alternative.

Throughout this procedure students were free to engage in an unstructured and unsupported peer assessment and participate in any form of feedback dialogue with their peers via the SCYFeedback tool.

The duration of the mission was approximately 18 hours. During the mission, students alternated individual and collaborative activities in SCY-Lab with whole-class discussions and authentic hands-on experiences. During the individual mode of work students were working on ELOs that required data of a personal nature, whereas during the collaborative mode of work students worked in dyads and on ELOs that required synthesis of the personal information coming from the two members of each group. The members of a dyad were sitting next to each other, but working on separate computers.

Data Collection

The data collection process involved two sources, namely screen and video captured data and interviews. The interviews involved 30 out of the 38 study participants.


The screen and video captured data were collected through computer screen capture plus video-audio software (River Past Screen Recorder Pro) throughout the study. Screen recording allowed the collection of a rich record of actual computer work activity (e.g., actions, sounds, movements that took place on the computer monitor) in its natural work setting that portrays the user’s mobility among various parts of the web-based material. With the assistance of a microphone and a camera the software also allowed videotaping the students in conjunction with what was taking place on the screen. Screen captured data along with video data were collected for all class meetings.

The interviews involved 30 participants (from 10 different groups). Each participant was interviewed separately through the use of a structured protocol (see Appendix B) which consisted of nine open-ended questions. Other questions besides the ones of the protocol were used only for clarification purposes. The purpose of the interview was to examine what students thought of their experience with the use of the SCYFeedback tool in SCY-Lab.

Data Analysis

For the purposes of answering the first and second research questions we used data derived from the interviews and the screen and audio recordings. In particular, we isolated the data about whether students used the SCYFeedback tool, when and how. From the screen and video captured data we isolated the episodes during which the students were requesting feedback and giving feedback. In the case of the requesting feedback episodes we coded for the number of students who requested feedback, the ELOs for which feedback was requested, and the type of feedback requested. The latter resulted in the following categories: question about science content, request for changes, request for clarification, request for help, and request for an opinion. Additionally, in an effort to understand the characteristics of the ELOs that the students requested feedback for and to check for any possible commonalities across these ELOs, we coded data concerning the type of an ELO (text, table etc.), the learning activities that were required for an ELO to be produced and the time needed to produce each ELO (see Appendix A for information on all of the mission’s ELOs).

In the case of giving feedback we coded for the number of students who provided feedback, the ELOs for which feedback was provided, and the type of feedback provided. The latter resulted in two main categories: feedback in the form of an answer and feedback in the form of a question. The latter category was not that prevalent (n=5). In contrast, the category of answers was the most popular type of response (n=47). The category of answers was further coded into sub-categories: positive judgments,1 negative judgments,2 changes proposed,3 neutral comments and clarification/information. It should be mentioned that in both cases (requesting and giving feedback), the categories were not mutually exclusive (e.g., a specific feedback could include positive and negative judgments).

These data were treated quantitatively by using non-parametric Kendall’s tau-b correlations. For the purposes of the Kendall’s tau-b correlation tests we used the following variables: requesting feedback, giving feedback, the average time spent on each ELO’s production, along with its corresponding standard deviation, the time needed for the production of each ELO for each peer group, the number of ELOs for which feedback was requested, the number of feedback requests, the number of students who requested feedback, the various categories that emerged from the analysis of the ELO content, the number of ELOs for which feedback was finally given, the number of feedback texts received, and the various categories that emerged from the analysis of the given feedback’s content.
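As an illustration of this type of analysis, the sketch below computes a Kendall's tau-b correlation between the number of feedback requests per ELO (the counts reported in Table 1) and a set of per-ELO response counts. The response counts are invented placeholders, since the study does not report them per ELO, and the scipy-based snippet is our own illustration rather than the study's actual analysis script.

```python
# Illustrative sketch (not the study's script): Kendall's tau-b between
# per-ELO feedback requests and per-ELO feedback responses.
from scipy.stats import kendalltau

# Feedback requests per ELO, taken from Table 1 of the paper.
requests_per_elo = [9, 6, 2, 2, 9, 5, 3, 1, 2, 2, 1]

# Hypothetical per-ELO response counts (placeholders; not reported in the paper).
responses_per_elo = [10, 7, 2, 3, 11, 6, 3, 1, 2, 3, 1]

# scipy's kendalltau computes the tau-b variant by default (it handles ties).
tau_b, p_value = kendalltau(requests_per_elo, responses_per_elo)
print(f"Kendall's tau-b = {tau_b:.3f}, p = {p_value:.3f}")
```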

From the interviews we used the data collected through the questions that focused on: whether students used the SCYFeedback tool (we also asked, if not, why, and, if yes, what their impressions about it were), how they used the tool, whether they were satisfied with their experience with the tool, what improvements they would recommend if any, whether there was a specific feature that they liked in the SCYFeedback tool, how they would like a feedback tool to be, the type of feedback the students actually asked for and received from peers, and students’ expectations on what type of feedback they would like to ask for and receive from peers. All interviews were transcribed and then analyzed qualitatively. We used open coding to analyse the interview transcripts. After coding the data of each question of the protocol we used axial coding to create categories.

1 Positive judgments concern encouraging remarks and proper or correct handling of aspects related to the work/products (e.g., ELO) produced by a group of students (e.g., inclusion of proper material, inclusion of scientifically accurate information) (for more details see Chen et al., 2009; Tseng & Tsai, 2007). 2 Negative judgments concern incorrect or incomplete handling of aspects related to a student group’s work/products and discouraging remarks (for more details see Chen et al., 2009; Tseng & Tsai, 2007). 3 Changes proposed to assessee groups concern assessors’ comments about the revision of assessees’ artifacts.


For the third research question we used data derived from the interviews and the screen and audio recordings. In particular, we isolated the data about whether the students responded to the feedback they received and what the content of this response was, whether they revised their ELOs based on the received feedback and, if yes, what kind of changes were actually made. From the screen and video captured data we isolated the episodes during which the students were receiving feedback. More specifically, we coded for whether students revised their ELOs based on the feedback received, whether students sent comments or requests to their peers concerning the feedback received and what kind of interaction they had after these comments or requests.

From the interview data, we used the data derived from the questions that focused both on students who received feedback and students who had not received any feedback. In the case that they received feedback from peers, we used the data coming from a question checking whether they used the feedback and whether it helped them to improve their work. In the case that they did not receive any feedback, we used the data coming from the questions focusing on whether they would have liked to receive some and whether they believe it would have helped them. These data were also analyzed qualitatively through open and axial coding.

Internal-reliability Data Analysis

Internal-reliability data were collected for each coding process separately. Specifically, a second rater (fourth author) scored about 40% of the study’s data (random sample), independently from the first author who coded all of the study’s data, and then a Cohen’s Kappa was calculated for the two raters for each coding process separately. The second rater did not have access to the study’s data or coding process until she was called to get involved in the inter-rater reliability process. Cohen’s Kappa was found to be above 0.89 in all cases.
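For readers less familiar with the statistic, the sketch below shows how such an agreement check can be computed with scikit-learn's cohen_kappa_score. The category labels in the example are invented placeholders for illustration and do not come from the study's data.

```python
# Illustrative sketch: Cohen's Kappa between two raters' codes for the same
# feedback texts (labels are invented placeholders, not the study's data).
from sklearn.metrics import cohen_kappa_score

rater_1 = ["positive", "negative", "changes", "neutral", "positive", "clarification"]
rater_2 = ["positive", "negative", "changes", "positive", "positive", "clarification"]

# Kappa corrects raw percent agreement for agreement expected by chance.
kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's Kappa = {kappa:.2f}")
```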

RESULTS

Does an unsupported and unstructured peer assessment, provided through the use of the SCYFeedback tool, lead to a feedback dialogue among students? When and how?

Students engaged in a feedback dialogue for the purposes of 11 out of the 31 ELOs of the “Healthy pizza” mission (Table 1). This indicates a rather confined use of the SCYFeedback tool. Figure 4 also reveals that the frequency of requested feedback declined as time progressed.

Table 1
Number of feedback requests per ELO

ELOa Frequency of requested feedback

My favorite pizza 9

Notes on unhealthy diet 6

Questions about pizza benefits 2

Nutrient and energy calculations 2

Questions about the food pyramid 9

Daily calorie intake 5

Energy fact sheet 3

Estimated Energy Requirements 1

Map of the digestive system 2

Fact sheet of one organ 2

My first healthy pizza 1

Total number of feedback requests 42

a The ELOs are presented in the order in which they appear in the learning material.

Figure 4. Number of feedback requests per ELO.


In an effort to understand the characteristics of the ELOs that the students requested feedback for and to check for any possible commonalities across these ELOs, we coded data concerning the type of an ELO (text, table etc.), the learning activities that were required for an ELO to be produced and the time needed to produce each ELO (see Table 2; see also Appendix A for information on all of the mission’s ELOs), and then proceeded with contrasting the data with the use of non-parametric Kendall’s tau-b correlations.

Table 2
Description, mean time needed to produce, and frequency of feedback requested for Emerging Learning Objects (ELOs)

ELOs with their serial number | ELO Description | Activities needed to produce ELOs | Mean time needed to produce ELOsa | Frequency of feedback requested
ELO1 (My favorite pizza) | Animation | Record pre-selected choices | 12.27 (5.82) | 9
ELO2 (Notes on unhealthy diet) | Text | Watch a video, note taking | 34.85 (15.66) | 6
ELO3 (Questions about pizza benefits) | Text | Read an article, note taking | 65.23 (34.12) | 2
ELO6 (Nutrient and energy calculations) | Table | Web quest, note taking | 18.53 (0.04) | 2
ELO8 (Questions about the food pyramid) | Text | Watch a video, note taking | 29.49 (17.90) | 9
ELO10 (Daily calorie intake) | Table | Note taking | 63.68 (16.63) | 5
ELO12 (Energy fact sheet) | Cognitive map | Watch a video, note taking | 24.30 (15.04) | 3
ELO14 (Estimated Energy Requirements) | Table | Mathematical calculations | 36.45 (10.05) | 1
ELO15 (Map of the digestive system) | Correspondence task | Read guidelines | 11.55 (10.61) | 2
ELO16 (Fact sheet of one organ) | Fact sheet | Web quest, note taking | 86.55 (28.50) | 2
ELO21 (My first healthy pizza) | Animation | Record pre-selected choices | 7.12 (2.73) | 1

a Mean time needed to produce ELOs is given in minutes; standard deviation is given in parentheses.


For the purposes of the Kendall’s tau-b correlation test we used the following variables: requesting feedback, giving feedback, the average time spent on each ELO’s production and the standard deviation of the time for each ELO’s production. Kendall’s tau-b correlations (see Table 3) revealed that requesting feedback was positively correlated with giving feedback across ELOs (Kendall’s tau_b = 0.600; p < 0.05), which implies that whenever students requested feedback from peers, there was a great possibility of receiving it. Moreover, the greater the average time for the preparation of an ELO, the greater the standard deviation (Kendall’s tau_b = 0.667; p < 0.05). This means that there was heterogeneity among students in the time needed to produce ELOs; that is to say, some students needed much more time than others to produce the same ELO.

Table 3
Kendall’s tau-b correlations among the variables requesting and giving feedback and the time needed for ELO production

 | Giving feedback | Average time | Standard deviation of time
Requesting feedback | 0.60* | ns | ns
Giving feedback | | ns | ns
Average time | | | 0.67*

Note: ns = not significant; * = p < 0.05

It should be noted that 21 students out of the 38 participants requested feedback, while there were in total 42 requests for feedback across all ELOs (i.e., the sum of the last column in Table 1). As a response to these feedback requests, students responded with 52 different feedback texts. The students who responded to their peers’ requests were 16 (out of the 38 participants) and each one of them sent approximately three different feedback texts through the SCYFeedback tool. However, it should be clarified that not all of the 21 peers who requested feedback received it. Eight students out of the 21 who requested feedback did not receive any feedback from their peers. Therefore, only 13 students received feedback based on their initial requests. Six of them received feedback concerning all their requests, whereas seven received feedback for part of their requests. Ten students out of the 38 participants both requested and gave feedback.


Interview data revealed that 22 students (out of the 30 interviewees) used the tool while eight of them did not use it. Students who did not use the SCYFeedback tool mentioned that they did not have much time because they had to finish their own work first (n=2), and that they did not like the tool and did not find it useful (n=2). One said that he thought it was not important to use it and another one said that he did not need it because he had help from the teacher and his group mates.

Students who used the SCYFeedback tool were asked about their impressions of the tool. Three replied that it seemed a good experience and that they liked it, while another three mentioned that they liked it because they could talk with others and get feedback on their work. Two students mentioned that the SCYFeedback tool was helpful for improving one’s work. Two students mentioned that the tool allowed them to interact with peers and avoid waiting for the teacher to come. The following quotations are particularly revealing:

“It’s nice because you can interact with others and get your peers’ opinion. This way you don’t need so often teacher’s opinion about your work.” (Interviewee S3G9)

“I liked it because we could see our peers’ answers and also we could send our work simultaneously to all. You could see other’s work, get some ideas and then you could fill your work with something new, also you could communicate with your team” (Interviewee S2G10)

What type of feedback do students ask for and receive?

The most frequent type of feedback requested was help (mostly about technical issues) or a peer’s opinion about an ELO (see Table 4). In 13 cases students asked for clarification/information from their peers. Only six students requested suggestions for changes from their peers.

Table 4
Type of feedback requested from peers

Type of requested feedback | Frequency | Example (actual quotes)
Requesting help | 39 | “How can I save my ELO?”
Requesting an opinion | 26 | “What do you think of my table?”
Asking for clarification/information | 13 | “What is the tofu?”
Asking for possible changes for improvement | 6 | “What can I add or correct to make it better?”


In the case of feedback provided to peers, the two main categories of feedback were answers/statements (n=47) and questions (n=5). Answers/statements were further coded into the following sub-categories: positive judgments, negative judgments, changes proposed, neutral comments and clarification/information. As shown in Table 5, the positive judgments prevailed. In eight cases, the students proposed changes to their peers and in five cases students provided neutral comments. Finally, in five out of the 52 cases, students replied to the initial requests giving clarifications and additional information to peers. It should be mentioned that the categories were not mutually exclusive (e.g., a specific feedback could include positive and negative judgments: “Your answer is very good, you included all information needed but without many details. You also forgot to mention that in the new pyramid we eat from all the food groups”).

Table 5
Type of feedback received from peers

Type of feedback receiveda | Frequency | Example (actual quotes)
Answers/Statements | 47 |
  Positive judgments | 29 | “Your response is complete! Excellent!”
  Negative judgments | 8 | “I think that you are missing some information which I neither could find”
  Neutral comments | 5 | “It is your choice to select the ingredients for your pizza”
  Changes to be made | 8 | “Your work is good but you could also describe the experiment with the nuggets that we watched in the video”
  Clarification/information | 5 | “Pepperoni is a spicy sausage”
Questions | 5 | “What are you talking about here?”

a Feedback categories were not mutually exclusive.

Given the type of feedback that students requested and offered, we ran a non-parametric Kendall’s tau-b correlation test to investigate possible correlations among the aforementioned variables. Kendall’s tau-b correlations among types of feedback requested and offered revealed that changes requested were significantly correlated with changes proposed (Kendall’s tau_b = 0.708; p < 0.05) and clarification/information requested was significantly correlated with clarification/information provided (Kendall’s tau_b = 0.637; p < 0.05) (Table 6). That is to say, whenever students were asking for changes to be proposed and additional information, their peers were willing to provide the requested type of feedback to them.


Table 6
Kendall’s tau_b correlations among structural components of requested and given feedback

 | Requesting changes | Requesting information | Requesting help | Requesting opinion | Given positive comment | Given information | Feedback given in a form of an answer/statement | Given (proposed) changes
Requesting questions | ns | 0.630* | 0.936*** | 0.547* | ns | ns | 0.565* | 0.584*
Requesting information | ns | 1.00 | 0.698** | ns | ns | 0.637* | ns | ns
Requesting opinion | ns | ns | 0.505* | 1.00 | ns | ns | ns | 0.653*
Requesting changes | 1.00 | ns | ns | 0.587* | 0.566* | ns | ns | 0.708*
Requesting help | ns | ns | 1.00 | 0.505* | ns | 0.636* | ns | ns

Note: ns = not significant; * p < 0.05; ** p < 0.01; *** p < 0.001


For triangulation purposes, we also posed questions to students during the interviews about the kind of feedback that they gave to peers, in order to complement the quantitative analysis. The interview data analysis revealed that 16 out of the 30 students stated that they did not give feedback, four of them mentioned that they gave positive comments, while four others mentioned that they gave positive feedback and some tips for work improvement. Furthermore, three students mentioned that they gave positive and negative comments and proposed changes for the improvement of their peers’ ELOs. Two other students stated that they offered their peers ideas that could improve their work; another stated that she offered only general comments to her peers, and finally one student mentioned that he did not remember if he had given feedback.

Similarly, students were asked what kind of feedback they received. Nineteen students mentioned that they did not receive any feedback at all, four students mentioned that they received only positive comments and no proposed changes, three students reported that they did not look at the SCYFeedback tool to see whether they had received any feedback from their peers, two students mentioned that they had received just a neutral comment, one student mentioned that he received only negative comments and no changes were proposed, and finally one student mentioned that her peers suggested some changes concerning the improvement of her ELOs.

Apart from what kind of feedback the students actually asked for and received from peers, they were further asked during the interviews about what kind of feedback they would like to ask for and receive from peers. The analysis revealed a wide range of responses. There were students who: (a) wanted to know if their work was scientifically accurate (n=14), (b) wanted to know what they could add/change in their ELO in order to make it better (n=12), (c) wanted to ask for clarifications when a task was not understandable (n=8), (d) wanted to ask for new ideas to be included in their ELO (n=5), and (e) wanted to ask for an assessment of a completed ELO (n=6). Only one student did not have an opinion regarding this issue.

When the students were asked what kind of feedback they would like to receive, a wide range of categories of responses emerged through the analysis. In particular, the students wanted: (a) to receive both positive and negative comments (n=12), (b) to be informed about the scientific accuracy of their ELO (n=8), (c) to receive specific comments that point to changes to certain aspects of their ELOs (n=6), (d) to get honest answers to their initial requests/questions posed through the SCYFeedback tool (n=4), (e) to get feedback related to the content of their ELO (n=3), (f) to get a score/grade for their work (n=3), (g) to receive comments on what their peers consider incorrect in their work (n=2), (h) to receive positive comments (n=2), and (i) to get new ideas to improve their work (n=2). The following quotation from interviewee S1G11 is particularly revealing regarding these categories:

“I would like to receive feedback with positive comments about my work, but I would also like to receive some advice of how I could improve it. It would be nice if I would also get a grade from my classmates. This way, I would know whether it is correct and complete and whether it needs to get better and how much my classmate liked my work” (Interviewee S1G11)

Do students use the feedback they receive to improve their science related work?

Interestingly, none of the students who received feedback proceeded with revising their ELOs based on the suggestions included in the feedback received. For investigating the reasons that discouraged students from proceeding to any revision after receiving peer feedback, we turned to our interview data. Most of the students who received feedback stated that they felt that the feedback was not of a high standard, while they highlighted that their level of reference for this judgment was the quality of the ELO they themselves had produced during the mission. In other words, they felt more confident about the quality of their ELO than the quality of the feedback. Moreover, they mentioned that peer feedback was not as detailed as the feedback one usually gets from a teacher. Along these lines, they mentioned that general comments or comments without critical remarks, negative judgments or suggestions for changes were not helpful and therefore not worth considering for any kind of revision. The following quotation is particularly revealing:

“I did not change my ELO because the feedback was not pointing to something that was problematic with my ELO. For example, I got a comment that the way I organized my information in the Energy fact sheet (ELO) was not good. I could not understand what was ‘not good’. It looked fine to me.” (Interviewee S1G3)


DISCUSSION

The objective of this study was to investigate whether the SCYFeedback tool leads to a feedback dialogue among students in the context of a science investigation. In particular, we aimed at examining what type of feedback students ask for and receive and whether the students use the feedback they receive to improve their science related work.

Regarding when and how students use unstructured and unsupported peer assessment in the context of the “Healthy pizza” mission (first research question), the results showed that students engaged in a peer assessment process when they felt that they needed help, an opinion on something they had created, or an answer to a question concerning clarifications or information about science content. Furthermore, our findings showed that whenever students requested feedback from peers, there was a great possibility of receiving it. However, student engagement in peer assessment was found to be conditional, since students felt the need for feedback only for certain ELOs. This calls for further research in order to identify the reasons students felt this need for certain ELOs and not others. Could it be that these ELOs have some commonalities that cause this need? On a surface level, we checked the type of these ELOs (e.g., text, table) and found that there were different types of ELOs, which means that the type was not the factor we were looking for. Needless to say, more in-depth analysis is needed to reach solid conclusions. On the other hand, it appears that the beginnings of a fruitful feedback dialogue were there, but they were not enough to support a thorough dialogue that could lead to having students revise their ELOs. Obviously, the type of support needed to reach sustainable dialogues remains to be investigated.

Concerning the type of feedback requested and provided (second research question), positive comments were much more numerous than negative comments in peer feedback, which confirms analogous findings of previous studies (Cho & MacArthur, 2010; Cho, Schunn, & Charney, 2006). This result appears to indicate that the students wanted to encourage their peers. However, the presence of positive judgments does not seem to have promoted the revision of ELOs. In the study’s interviews students stated that positive comments would not assist them to revise their ELOs. In fact, in one of our previous studies we found that positive judgments might act as a barrier and prevent assessees from revising their work (Tsivitanidou et al., 2011). Indeed, it seems that students would get involved in critically reviewing peer feedback only in the case of receiving negative comments and/or suggestions for changes. The quantitative analysis in the present study revealed significant correlations between changes requested and changes proposed. What is left now is to encourage students to increase the number of negative/critical comments and/or suggestions for changes in their feedback in order to stimulate their peers’ interest to engage in peer assessment and revise their ELOs, which is in line with previous research (Davies, 2006; Tseng & Tsai, 2007; Tsivitanidou et al., 2011).

Another noteworthy finding was that students’ use of peer assessment declined as time progressed. Since students did not find peer comments justifiable enough to support their work, they might have felt that there was no need to request more of this feedback. This aspect of appreciation could also be the answer to why our participants did not proceed with revising their ELOs after receiving peer feedback, which relates to our final research question. We believe that appreciation could contribute to confronting students’ hesitancy to accept peers as legitimate or capable assessors. Prior studies revealed that students tend to regard expert feedback as more valuable than peer feedback (Bryant & Carless, 2010; Peterson & Irving, 2008). Hence, we need to create the right circumstances in which students could create feedback of good quality (e.g., scientifically accurate) and appreciate their peers as legitimate and trustworthy assessors. In any case, the fact that students did not make any changes to their ELOs does not imply that students did not benefit from the whole procedure. Indeed, students emphasised that through the use of the SCYFeedback tool they could interact with their classmates and get new ideas from them, either when receiving peers’ feedback or when reviewing peers’ ELOs. It could be that students improved their ELOs when examining the work of the other students and producing feedback for them, or it could be that the ideas gained from receiving or providing feedback will be used by students in future learning endeavours (Strijbos & Sluijsmans, 2009). Both of these conjectures sound reasonable, but further research is definitely needed in order to reach solid conclusions.

Since high-quality peer feedback processes are not likely to show up spontaneously, earlier studies have emphasized training as a crucial prerequisite of an effective peer assessment procedure (Gielen et al., 2010; van Zundert, Sluijsmans, & van Merriënboer, 2010; Xiao & Lucking, 2008). Dochy, Segers and Sluijsmans (1999) highlighted the need to develop a shared understanding of the assessment/feedback procedure. Training is said to support the role of the peer who produces feedback, by improving the quality of peer feedback (van Steendam et al., 2010), as well as the role of the peer who receives feedback, by fostering appreciation of peer feedback (Gielen et al., 2010). To improve the quality of peer feedback, a training session prior to any implementation of unsupported and unstructured peer assessment is needed, in which students should be prompted to both request and provide feedback that includes (a) structural components that correlate with revisions and improvements of ELOs (e.g., detailed negative/critical comments), (b) scientifically accurate content, and (c) proper justification of why something needs to be changed/revised. Training should also focus on the sustainability of the unsupported and unstructured peer feedback dialogue and on having students accept their peers as legitimate or capable assessors. Prior studies have already pointed towards the need to address these two issues, but no framework has been provided yet as to what needs to be done (Brindley & Scoffield, 1998; Orsmond, Merry, & Reiling, 1996; Smith, Cooper, & Lancaster, 2002; van Gennip et al., 2010; Walker, 2001).
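To make concrete what such a training prompt could check for, the sketch below flags whichever of the three components (a)-(c) is missing from a draft comment before a student submits it. This is an illustrative sketch only, written under our own assumptions; the names (FeedbackComment, training_prompts) are hypothetical and are not part of SCY-Lab or the SCYFeedback tool.

```python
# Illustrative sketch only: a hypothetical checklist that a training session
# (or a feedback tool) could use to prompt students before they submit peer
# feedback. None of these names come from SCY-Lab or SCYFeedback.
from dataclasses import dataclass


@dataclass
class FeedbackComment:
    text: str
    is_critical: bool            # (a) contains a negative/critical remark
    cites_science_content: bool  # (b) refers to relevant science content
    gives_justification: bool    # (c) explains why a change/revision is needed


def training_prompts(comment: FeedbackComment) -> list[str]:
    """Return reminders for whichever of the three components is missing."""
    prompts = []
    if not comment.is_critical:
        prompts.append("Point out at least one thing that should be changed.")
    if not comment.cites_science_content:
        prompts.append("Back up your comment with the relevant science content.")
    if not comment.gives_justification:
        prompts.append("Explain WHY the change would improve the ELO.")
    return prompts


# Example: a purely positive comment triggers all three reminders.
print(training_prompts(FeedbackComment("Nice pizza!", False, False, False)))
```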

Another interesting research direction would be how computer technology could further support unsupported and unstructured peer assessment. Of course, by providing support the unsupported aspect of peer assessment is "violated". However, we could have a fading mechanism that initially provides support and later starts fading out until a completely unsupported stage is reached (if unsupported peer assessment is the goal). The idea is to have the students understand first the added value of providing and receiving feedback and then leave them on their own. In this context, the students need to understand when to provide feedback, why to provide it, what to include in it, how to present it, and what to do with the feedback received (e.g., examine the quality of the feedback received). Gielen et al. (2010) argued that the provision of scaffolding could help in this respect. For instance, the computer-based scaffolding could (a) prompt students to provide more thorough, well-documented critical comments, (b) encourage students to communicate for clarifications, (c) encourage students to collaborate, (d) encourage students to communicate when the feedback dialogue begins to fade out, and (e) bring together students who could offer valuable feedback to each other according to the development and quality of their work. All of this, in the context of a web-based platform, could become feasible through the use of data mining (e.g., the use of agents). Overall, further research on defining how to support students in enacting peer assessment in a science-related, computer-based learning environment is definitely needed, specifically in an unstructured and unsupported peer assessment context such as the one in this study. Given the benefits that peer assessment brings into a science learning environment, such research becomes an essential need.
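As one way of picturing such a mechanism, the sketch below shows fewer scaffolding prompts as a student's average feedback quality rises (fading) and pairs students whose products currently differ most in quality (simple matchmaking). It is a minimal sketch under our own assumptions: the function names, quality scores, and pairing heuristic are hypothetical and do not describe an actual SCY-Lab implementation or the specific scaffolds proposed by Gielen et al. (2010).

```python
# Minimal, hypothetical sketch of (1) a fading scaffold that shows fewer
# prompts as a student's feedback quality improves, and (2) a simple pairing
# heuristic that matches students whose ELOs differ in quality, so each pair
# can offer the other useful feedback. All names and values are illustrative.

SCAFFOLD_PROMPTS = [
    "Give a detailed critical comment, not only praise.",
    "Ask your peer for clarification if something is unclear.",
    "Say how you would revise the ELO and why.",
    "Check the quality of the feedback you received before using it.",
]


def prompts_to_show(avg_feedback_quality: float) -> list[str]:
    """Fade out support: the higher the student's average feedback quality
    (0.0-1.0), the fewer scaffolding prompts are displayed."""
    n = max(0, round(len(SCAFFOLD_PROMPTS) * (1.0 - avg_feedback_quality)))
    return SCAFFOLD_PROMPTS[:n]


def pair_reviewers(elo_quality: dict[str, float]) -> list[tuple[str, str]]:
    """Pair the currently strongest and weakest ELOs so peers at different
    stages of development review each other's work."""
    ranked = sorted(elo_quality, key=elo_quality.get)  # ascending quality
    pairs = []
    while len(ranked) >= 2:
        pairs.append((ranked.pop(0), ranked.pop(-1)))
    return pairs


# Example usage
print(prompts_to_show(0.25))  # early on: most prompts still shown
print(prompts_to_show(0.9))   # later: support has largely faded out
print(pair_reviewers({"Ann": 0.4, "Ben": 0.9, "Cat": 0.6, "Dan": 0.2}))
```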


References

Barab, S. A., Hay, K. E., Barnett, M., & Squire, K. (2001). Constructing virtual worlds: Tracing the historical development of learner practices. Cognition and Instruction, 19(1), 47-94.

Barab, S. A., Hay, K. E., Squire, K., Barnett, M., Schmidt, R., Karrigan, K., et al. (2000). Virtual solar system project: Learning through a technology-rich, inquiry-based, participatory learning environment. Journal of Science Education and Technology, 9(1), 7-25.

Birenbaum, M. (1996). Assessment 2000: Towards a pluralistic approach to assessment. In M. Birenbaum & F. Dochy (Eds.), Alternatives in assessment of achievements, learning processes and prior knowledge (pp. 3-29). Boston, MA: Kluwer.

Black, P., & Harrison, C. (2001). Feedback in questioning and marking: The science teacher's role in formative assessment. School Science Review, 82(301), 55-61.

Black, P. J., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy and Practice, 5, 7-74.

Bosco, J. (2009). Participatory culture and schools: Can we get there from here? Threshold, 2009, 12-15.

Brindley, C., & Scoffield, S. (1998). Peer assessment in undergraduate programs. Teaching in Higher Education, 3(1), 79-89.

Bryant, D. A., & Carless, D. R. (2010). Peer assessment in a test-dominated setting: Empowering, boring or facilitating examination preparation? Educational Research for Policy and Practice, 9(1), 3-15.

Chen, W. (2004). Reuse of collaborative knowledge in discussion forums. Lecture Notes in Computer Science, Vol. 3220 (pp. 800-802). Berlin/Heidelberg: Springer-Verlag.

Chen, N.-S., Wei, C.-W., Wu, K.-T., & Uden, L. (2009). Effects of high level prompts and peer assessment on online learners' reflection levels. Computers & Education, 52, 283-291.

Cho, K., & MacArthur, C. (2010). Student revision with peer and expert reviewing. Learning and Instruction, 20(4), 328-338.

Cho, K., Schunn, C. D., & Charney, D. (2006). Commenting on writing: Typology and perceived helpfulness of comments from novice peer reviewers and subject matter experts. Written Communication, 23, 260-294.

Crane, L., & Winterbottom, M. (2008). Plants and photosynthesis: Peer assessment to help students learn. Educational Research, 42(4), 150-156.

Davies, P. (2000). Computerized peer assessment. Innovations in Education and Training International, 37, 346-355.

Davies, P. (2006). Peer assessment: Judging the quality of students' work by comments rather than marks. Innovations in Education and Teaching International, 43(1), 69-82.


de Jong, T., van Joolingen, W., Giemza, A., Girault, I., Hoppe, U., Kindermann, J., Kluge, A., Lazonder, A., Vold, V., Weinberger, A., Weinbrenner, S., Wichmann, A., Anjewierden, A., Bodin, M., Bollen, L., d'Ham, C., Dolonen, J., Engler, J., Geraedts, C., Grosskreutz, H., Hovardas, T., Julien, R., Lechner, J., Ludvigsen, S., Matteman, Y., Meistadt, Ø., Næss, B., Ney, M., Pedaste, M., Perritano, A., Rinket, M., von Schlanbusch, H., Sarapuu, T., Schulz, F., Sikken, J., Slotta, J., Toussaint, J., Verkade, A., Wajeman, C., Wasson, B., Zacharia, Z., & van der Zanden, M. (2010). Learning by creating and exchanging objects: The SCY experience. British Journal of Educational Technology, 41(6), 909-921.

Dochy, F., Segers, M., & Sluijsmans, D. (1999). The use of self-, peer and co-assessment in higher education: A review. Studies in Higher Education, 24, 331-350.

Dysthe, O. (2004). The challenges of assessment in a new learning culture. The 32nd International NERA/NFPF Conference, Reykjavik, Iceland.

Dysthe, O., Lillejord, S., Wasson, B., & Vines, A. (2009). Productive e-feedback in higher education: Two models and some critical issues. In S. Ludvigsen & R. Säljö (Eds.), Learning across sites. Oxon: Routledge.

Fadel, C., Honey, M., & Pasnik, S. (2007). Assessment in the age of innovation. Education Week. Retrieved 15 May 2012 from http://www.edweek.org/login.html

Falchikov, N. (1995). Peer feedback marking: Developing peer assessment. Innovations in Education and Training International, 32, 175-187.

Falchikov, N. (2003). Involving students in assessment. Psychology Learning and Teaching, 3, 102-108.

Fallows, S., & Chandramohan, B. (2001). Multiple approaches to assessment: Reflections on use of tutor, peer and self-assessment. Teaching in Higher Education, 6(2), 229-246.

Frost, J., & Turner, T. (2005). Learning to teach science in the secondary school (2nd ed.). London: RoutledgeFalmer.

Gehringer, E. F. (2001). Electronic peer review and peer grading in computer-science courses. SIGCSE, 139-143.

Gielen, S., Peeters, E., Dochy, F., Onghena, P., & Struyven, K. (2010). Improving the effectiveness of peer feedback for learning. Learning and Instruction, 20, 304-315.

Hanrahan, S. J., & Isaacs, G. (2001). Assessing self- and peer-assessment: The students' views. Higher Education Research and Development, 20, 53-70.

Harlen, W. (2007). Holding up a mirror to classroom practice. Primary Science Review, 100, 29-31.

Harrison, C., & Harlen, W. (2006). Children's self- and peer-assessment. In W. Harlen (Ed.), ASE guide to primary science education (pp. 183-190). Hatfield: Association for Science Education.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112.


Heinich, R., Molenda, M., Russell, J. D., & Smaldino, S. E. (2002). Instructional media and technologies for learning (7th ed.). Upper Saddle River, NJ: Merrill.

Hoppe, H. U., Pinkwart, N., Oelinger, M., Zeini, S., Verdejo, F., Barros, B., et al. (2005). Building bridges within learning communities through ontologies and "thematic objects". Proceedings of the 2005 Conference on Computer Support for Collaborative Learning (pp. 211-220). Mahwah, NJ: Lawrence Erlbaum.

Jenkins, H., Clinton, K., Purushotma, R., Robison, A. J., & Weigel, M. (2006). Confronting the challenges of participatory culture: Media education of the 21st century. Chicago: The MacArthur Foundation.

Kennedy, K. J., Chan, J. K. S., Fok, P. K., & Yu, W. M. (2008). Forms of assessment and their potential for enhancing learning: Conceptual and cultural issues. Educational Research for Policy and Practice, 7, 197-207.

Kollar, I., & Fischer, F. (2010). Peer assessment as collaborative learning: A cognitive perspective. Learning and Instruction, 20(4), 344-348.

Lindblom-Ylänne, S., Pihlajamäki, H., & Kotkas, T. (2006). Self-, peer- and teacher-assessment of student essays. Learning in Higher Education, 7(1), 51-62.

Lindsay, C., & Clarke, S. (2001). Enhancing primary science through self- and paired-assessment. Primary Science Review, 68, 15-18.

Lorenzo, G., & Ittelson, J. (2005). Demonstrating and assessing student learning with e-portfolios. EDUCAUSE Learning Initiative. Retrieved April 10, 2011, from http://net.educause.edu/ir/library/pdf/eli3003.pdf

Lu, R., & Bol, L. (2007). A comparison of anonymous versus identifiable e-peer review on college student writing performance and the extent of critical feedback. Journal of Interactive Online Learning, 6(2), 100-115.

Noonan, B., & Duncan, R. (2005). Peer and self-assessment in high schools. Practical Assessment, Research & Evaluation, 10(17).

Orsmond, P., Merry, S., & Reiling, K. (1996). The importance of marking criteria in the use of peer assessment. Assessment and Evaluation in Higher Education, 21, 239-249.

Paré, D. E., & Joordens, S. (2008). Peering into large lectures: Examining peer and expert mark agreement using peerScholar, an online peer assessment tool. Journal of Computer Assisted Learning, 24, 526-540.

Pellegrino, J. W., Chudowsky, N., & Glaser, R. (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy Press.

Peterson, E. R., & Irving, S. E. (2008). Secondary school students' conceptions of assessment and feedback. Learning and Instruction, 18, 238-250.

Ploegh, K., Tillema, H. H., & Segers, M. S. R. (2009). In search of quality criteria in peer assessment practices. Studies in Educational Evaluation, 5, 102-109.

Prins, F. J., Sluijsmans, D. M. A., Kirschner, P. A., & Strijbos, J. W. (2005). Formative peer assessment in a CSCL environment: A case study. Assessment & Evaluation in Higher Education, 30, 417-444.


Purchase, H. C. (2000). Learning about interface through peer assessment. Assessment & Evaluation in Higher Education, 25(4), 341-352.

Ronen, M., & Langley, D. (2004). Scaffolding complex tasks by open online submission: Emerging patterns and profiles. Journal of Asynchronous Learning Networks, 8, 39-61.

Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78, 153-189.

Sluijsmans, D. (2002). Establishing learning effects with integrated peer assessment tasks. The Higher Education Academy. Retrieved 15 May 2011 from http://www.palatine.ac.uk/files/930.pdf

Sluijsmans, D., Brand-Gruwel, S., & van Merriënboer, J. J. G. (2002). Peer assessment training in teacher education: Effects on performance and perceptions. Assessment and Evaluation in Higher Education, 27(5), 443-454.

Smith, H., Cooper, A., & Lancaster, L. (2002). Improving the quality of undergraduate peer assessment: A case for student and staff development. Innovations in Education and Teaching International, 39, 71-81.

Strijbos, J. W., & Sluijsmans, D. (2009). Unravelling peer assessment: Methodological, functional and conceptual developments. Learning and Instruction, 20, 265-269.

Sung, Y. T., Chang, K. E., Chiou, S. K., & Hou, H. T. (2004). The design and application of a web-based self- and peer-assessment system. Computers & Education, 45, 187-202.

Swan, K., Shen, J., & Hiltz, S. (2006). Assessment and collaboration in online learning. Journal of Asynchronous Learning Networks, 10(1), 44-61.

Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68, 249-276.

Topping, K. (2003). Self and peer assessment in school and university: Reliability, validity and utility. In M. Segers, F. Dochy, & E. Cascallar (Eds.), Optimising new modes of assessment: In search of qualities and standards (pp. 55-87). The Netherlands: Kluwer Academic Publishers.

Topping, K. (2009). Peer assessment. Theory Into Practice, 48, 20-27.

Tsai, C. C., & Liang, J. C. (2009). The development of science activities via online peer assessment: The role of scientific epistemological views. Instructional Science, 37, 293-310.

Tsai, C.-C., Lin, S. S. J., & Yuan, S.-M. (2002). Developing science activities through a network peer assessment system. Computers & Education, 38(1-3), 241-252.

Tsai, C. C., Liu, E. Z. F., Lin, S. S. J., & Yuan, S. M. (2001). A networked peer assessment system based on a Vee heuristic. Innovations in Education and Teaching International, 38, 220-230.

Tseng, S. C., & Tsai, C. C. (2007). On-line peer assessment and the role of the peer feedback: A study of high school computer course. Computers & Education, 49, 1161-1174.


Tsivitanidou, O. E., Zacharia, Z. C., & Hovardas, T. (2011). Investigating secondary school students' unmediated peer assessment skills. Learning and Instruction, 21(4), 506-519.

van Dyke, N. (2008). Self- and peer-assessment disparities in university ranking schemes. Higher Education in Europe, 33(2-3), 285-293.

van Gennip, N. A. E., Segers, M. S. R., & Tillema, H. H. (2010). Peer assessment as a collaborative learning activity: The role of interpersonal variables and conceptions. Learning and Instruction, 20(4), 280-290.

van Steendam, E., Rijlaarsdam, G., Sercu, L., & van den Bergh, H. (2010). The effect of instruction type and dyadic or individual emulation on the quality of higher-order peer feedback in EFL. Learning and Instruction, 20(4), 316-327.

van Zundert, M., Sluijsmans, D. M. A., & van Merriënboer, J. J. G. (2010). Effective peer assessment processes: Research findings and future directions. Learning and Instruction, 20, 270-279.

Walker, A. (2001). British psychology students' perceptions of group-work and peer assessment. Psychology Learning and Teaching, 1, 28-36.

Wasson, B., & Vold, V. (in press). Leveraging new media skills for peer feedback. The Internet and Higher Education. http://dx.doi.org/10.1016/j.bbr.2011.03.031

Wasson, B., Vold, V., & de Jong, T. (2012). Orchestrating assessment: Assessing emerging learning objects. In K. Littleton, E. Scanlon, & M. Sharples (Eds.), Orchestrating inquiry learning: Contemporary perspectives on supporting scientific inquiry learning (pp. 175-192). London: Routledge.

Wen, M. L., & Tsai, C. C. (2006). University students' perceptions of and attitudes toward (online) peer assessment. Higher Education, 51, 27-44.

Xiao, Y., & Lucking, R. (2008). The impact of two types of peer assessment on students' performance and satisfaction within a Wiki environment. Internet and Higher Education, 11, 186-193.

Yu, F., Liu, Y., & Chan, T. (2005). A web-based learning system for question-posing and peer assessment. Innovations in Education and Teaching International, 42(4), 337-348.

Zhang, J., Cooley, D. H., & Ni, Y. (2001). NetTest: An integrated web-based test tool. International Journal of Educational Telecommunications, 7(1), 33-35.


APPENDIX A

The activity sequence of the "Healthy pizza" mission and the corresponding ELOs and mode of work.

Number | Activity | ELO | Mode of work
1 | Design a virtual artefact | My Favourite Pizza (Pizza 1) | Individual
2 | Watch video | Notes On Unhealthy Diet | Group
3 | Read article | Questions About Pizza Benefits | Group
4 | Organize data | Food And Exercise Diary | Individual
5 | Define | Nutrition Table | Individual
6 | Give examples | Nutrient And Energy Calculations | Individual
7 | Give examples | Pizza Ingredient Table | Group
8 | Identify relevant concepts and criteria | Questions About The Food Pyramid | Group
9 | Build a model | Construction Of The Food Pyramid | Individual
10 | Organize data | Daily Calorie Intake | Individual
11 | Interpret data | Evaluate Diet (Food Pyramid) | Individual
12 | Identify relevant concepts and criteria & define | Energy Fact Sheet | Individual
13 | Organize and interpret data | Basal Metabolic Rate (Health Passport) | Individual
14 | Organize and interpret data | Estimated Energy Requirements (Health Passport) | Individual
15 | Identify relevant concepts | Map Of The Digestive System | Individual
16 | Define | Fact Sheet Of One Organ | Individual
17 | Identify relevant concepts | Personal Comments | Individual
18 | Draw conclusions | Body Mass Index (Health Passport) | Group
19 | Draw conclusions | Heart Rate (Health Passport) | Individual
20 | Organize and interpret data | Health Passport | Individual
21 | Design a virtual artefact | My First Healthy Pizza (Pizza 2) | Individual
22 | Reflect on group processes | Methodology Steps | Group
23 | Draw conclusions | Reflection On Importance Of Criteria | Group
24 | Reflect on individual processes | Criteria Table | Individual
25 | Identify (prior) knowledge? | Criteria Weight Table | Individual
26 | Identify (prior) knowledge? | Criteria Final Table | Individual
27 | Design a virtual artefact | My Optimized Healthy Pizza (Pizza 3) | Individual
28 | Evaluate processes & ELO | Individual Report | Group
29 | Evaluate ELO | Group Report | Group
30 | Build an artefact | Taste Scores | Group
31 | Evaluate processes and ELO | Letter To School Canteen | Individual


APPENDIX B

The interview protocol

1. a. Did you use the SCYFeedback tool in the SCY-Lab environment?
   b. If not, why didn't you use it?
   c. If yes, what were your impressions after using the SCYFeedback tool?

2. How did you use the SCYFeedback tool?

3. a. If you have given feedback, what kind of feedback did you give?
   b. If you have received feedback, what kind of feedback did you receive?
   c. If you have received feedback, did it help you to improve your work?
   d. If you have not received any feedback, would you like to receive one? Do you think that receiving feedback from your classmates would have helped you improve your work? If no, why not and, if so, how?

4. a. Given your experience with SCY-Lab, were you satisfied with the SCYFeedback tool? If no, how would you prefer the SCYFeedback tool to be?
   b. What improvements/changes would you recommend for the SCYFeedback tool?

5. Is there something that you liked in the SCYFeedback tool? If so, what is it?

6. What kind of feedback would you ask for through this tool when working in SCY-Lab?

7. What kind of feedback would you like to receive back from your classmates through the SCYFeedback tool?

8. Have you received feedback when working in the SCY-Lab environment? If yes, what do you think of the quality of this feedback? Have you used it in any way? If yes, how? If no, why?

9. What kind of feedback would you prefer to receive?