Digital Scaffolds for Reading Multiple Online Sources and Writing an Argumentative Text

Julie Coiro, Carita Kiili, Jari Hämäläinen, Leonardo Cedillo, Rachel Naylor, Ryan O’Connell, & David Quinn

Introduction

It is clear that today’s secondary students have difficulty writing argumentative essays. Students appear unfamiliar with the conventions of written arguments (Beach et al., 2012), and they have difficulty considering counterarguments in their writing (Persky, Daane, & Jin, 2003). Using the Internet as a source for these essays creates additional challenges. On the Internet, answers to open-ended problems are rarely found in a single source. To make sense of controversial issues, learners require skills in organizing, evaluating, comparing, and contrasting information from multiple sources (Britt & Rouet, 2012) and in moving beyond their own perspective (Barzilai & Zohar, 2012). Unfortunately, many students engage with online sources in an uncritical manner and fail to see connections across multiple sources (Kiili, 2013). These students in particular may benefit from digital scaffolds that make the steps of online inquiry more explicit (Stadtler & Bromme, 2008). In this paper, we introduce and explore the potential of a newly developed Digital Online Inquiry Tool specifically designed to support online exploration of controversial issues.

Theoretical and Conceptual Frameworks

Two theories inform our study. First, a new literacies perspective of online reading comprehension (Leu, Kinzer, Coiro, Castek, & Henry, 2013) underlies the rationale for a digital tool that supports online inquiry. This perspective defines reading on the Internet as a process of problem-based inquiry involving at least five sets of important practices. Online readers, motivated by a problem to solve, should be skilled at generating effective search queries; locating relevant information; critically evaluating information for accuracy, reliability, and perspective; synthesizing information across multiple sources; and communicating their findings to others. The proposed project examines how students use a newly developed Digital Online Inquiry Tool (see Figures 1 and 2) to support the location, evaluation, synthesis, and writing of argumentative online texts, a task with which online readers have been found to struggle (Kiili, Laurinen, Marttunen, & Leu, 2012).

This study is also informed by cognitive load theory (Chipperfield, 2006), which holds that working memory is limited in the amount of information it can hold and in the number of operations it can perform on that information (van Gerven et al., 2003). Because online reading itself already imposes a heavy cognitive load on learners (Brand-Gruwel et al., 2005), the interface of our Digital Online Inquiry Tool is designed to optimize germane load, or the effort associated with constructing a cohesive synthesis, and to minimize extraneous cognitive load while supporting key processes of online reading (see Figure 3).

This study also makes use of elements of Toulmin’s (1958) Argument Model to explore how students construct arguments within the digital tool space. Toulmin defines an argument as a logical appeal that includes claims, evidence, warrants, backings, and rebuttals. For the purposes of our work, we focus our attention on how students use the digital tool to generate and organize arguments and counterarguments to support a claim related to a controversial issue.

 

Figure 1. A partially completed graph by a student exploring the claim “Energy drinks are not safe for teenagers.”  

Figure 2. Features embedded into the Digital Online Inquiry Tool for Controversial Issues to scaffold a student’s use of several complex cognitive processes during online reading

Component | Embedded scaffolds in the Digital Online Inquiry Tool
Planning | Prompts readers to start the task by pondering perspectives from which to approach the issue at hand
Locating information | Asks readers to formulate guiding questions, which may help them recognize effective search terms. Helps readers structure their information search by concentrating on one perspective at a time
Evaluating sources | Prompts readers to rate the trustworthiness of each source with traffic lights and provides a pop-up box to justify their evaluations
Identifying arguments | Helps readers focus on identifying arguments in source texts while encouraging them to search for both supportive arguments and counterarguments
Synthesizing information | Allows readers to build a synthesis one perspective at a time and helps them include arguments both for and against the issue within each perspective
Composing an argumentative text | Helps readers develop the structure for their essay and move beyond their own perspective in their writing

 

Figure 3. Explanation and location of each feature embedded into the Digital Online Inquiry Tool for Controversial Issues to scaffold a student’s use of several complex cognitive processes during online reading

Previous Research

Today’s learners spend close to three hours per day online, engaged in a variety of tasks, including solving problems related to school projects (Lenhart & Madden, 2007). On the Internet, answers to open-ended problems are rarely found in a single source. Students encounter diverse sources with different purposes and quality of information (Kuiper et al., 2005). To effectively integrate and reconcile competing points of view while making sense of controversial issues, learners require skills in organizing, evaluating, comparing, and contrasting information drawn from multiple sources (Britt & Rouet, 2012). Thus, the ability to recall and summarize single texts is not enough. To truly understand complicated issues in society, students need to move beyond their own perspective and form a representation that reflects multiple perspectives (Barzilai & Zohar, 2012). Unfortunately, recent research has shown that many students engage with online sources in a superficial and uncritical manner (see Walraven et al., 2008) and fail to see the connections within and across different types of sources (Barzilai & Zohar, 2012). These students in particular may benefit from digital scaffolds that support them through these complex cognitive processes.

Notably, although there is an apparent need to support students as they negotiate complex, multifaceted online environments, there have been only a few efforts to design scaffolds that support readers’ engagement with online sources. Digital scaffolds can make the steps of online inquiry more explicit by providing prompts and separate spaces for online readers to report the results of their inquiry (Stadtler & Bromme, 2008). While digital tools are available to support the reading of scientific resources (Zhang & Quintana, 2012) and medical information (Stadtler & Bromme, 2008), the proposed project will test the efficacy of a digital scaffolding system designed to support students’ online reading processes while they are engaged in synthesizing argumentative online texts either individually or collaboratively.

The interface of our digital scaffolding tool (described more fully in the Methods section) has been kept as simple as possible in order to minimize extraneous cognitive load (Van Merriënboer & Kirschner, 2007). This is because online reading itself already imposes a heavy cognitive load on learners: online readers are expected to negotiate and organize multiple complex cognitive processes (Brand-Gruwel et al., 2005), and many features of online hypertext structures have been found to increase cognitive load demands (DeStefano & LeFevre, 2007). Consequently, this online inquiry tool (see screenshot in Figure 1) is specifically designed to optimize germane load, or the effort associated with processing new schema to construct a cohesive synthesis (Chipperfield, 2006). First, the synthesis process is sequenced so that students can concentrate on creating their synthesis of one perspective, using one limited set of source documents, at a time. Then, when students compose their final, concluding synthesis across multiple perspectives, the tool enables them to build on their earlier, smaller-scale syntheses without having to hold in memory the set of documents they encountered at each point in their search. Second, because source evaluation is a crucial part of generating a synthesis of multiple documents (Britt & Rouet, 2012; Wiley et al., 2009), traffic lights serve as a quick visual indicator that reminds readers of their previous credibility evaluations to better inform their selection of arguments to include in their final synthesis of each perspective. In essence, our digital online inquiry tool provides students with a carefully sequenced but flexible set of opportunities to monitor and control their cognitive steps toward deeper knowledge construction for the purpose of composing an argumentation essay.
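The paper describes the argument graph only at the interface level. As a rough illustration of the structure it implies (a claim, perspective-specific argument boxes, traffic-light source ratings, and per-perspective syntheses), the following Python sketch models one hypothetical data representation; all class and field names are our own invention, not the tool’s actual implementation.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Credibility(Enum):
    """Traffic-light ratings readers assign to each source (hypothetical encoding)."""
    RELIABLE = "green"
    SOMEWHAT_RELIABLE = "yellow"
    QUESTIONABLE = "red"

@dataclass
class Argument:
    text: str                  # reader's note, copied or reworded from a source
    supports_claim: bool       # True = argument for the claim, False = counterargument
    source_url: str
    credibility: Credibility
    justification: str = ""    # optional pop-up box text explaining the rating

@dataclass
class Perspective:
    label: str                 # e.g., a health or economics perspective
    guiding_question: str
    arguments: List[Argument] = field(default_factory=list)
    synthesis: str = ""        # per-perspective synthesis written by the reader

@dataclass
class ArgumentGraph:
    claim: str                 # e.g., "Energy drinks are not safe for teenagers"
    perspectives: List[Perspective] = field(default_factory=list)
    final_synthesis: str = ""  # concluding synthesis across all perspectives
```

Sequencing the work this way, one Perspective (and its limited set of sources) at a time, is what the text above credits with reducing the amount of material readers must hold in working memory.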

Methodology

An exploratory qualitative design was applied to address three questions:

1. What types of curriculum-based tasks can be used to engage high school students with this digital tool designed to support online inquiry and the reading and writing of argumentative texts?

2. After being shown a short video about how to use the digital tool, how do students engage with the tool to organize and evaluate conflicting information across multiple sources and compose argumentation essays?

3. How do students and teachers perceive the utility of the tool and its ability to support the processes required to read online and write argumentative texts?

 

Participants

Participants in this exploratory study included six teachers and 175 high school students in Grades 10-12 from five classes in the Northeastern United States, and 63 students (Grades 10-11) from four classes in Finland. Students took part in either English Language Arts (ELA) or Social Studies (SS) classes. Participants represented a diverse range of ability levels and socioeconomic and cultural backgrounds across different curricula. [NOTE: Only a portion of the student graphs and essays were analyzed for this exploratory study; the exact numbers of each are reported later in the paper.]

Procedures

Participating teachers engaged one or more classes in using the digital online inquiry tool to help write an essay about a controversial issue based on online sources. Teachers defined the specific task to align with their curriculum. One example of a controversial issue in Language Arts is whether social media increases people’s quality of life.

In the United States, argumentation tasks took place over three 45-minute periods. After a brief training in how to use the digital online inquiry tool, students worked in two phases. In the search and close reading phase (Days 1 and 2), students read online sources and sometimes searched for additional information online to identify arguments related to the issue. They were then asked to organize their notes in the digital graph, evaluate the quality of each source they read, and justify their evaluations. In the writing phase (Day 3), students composed their essays by drawing on their graphs.

The teaching experiment arranged in Finland followed a slightly different pattern. Students first engaged in a 90-minute preparation session during which they were introduced to the basic concepts of the Online Inquiry Tool. The teacher modeled the use of the tool, and students then completed a practice task guided by the teacher. The preparation session was followed by an exam task, in which students worked either individually or in pairs. They had 40 to 70 minutes to search the Internet for information on a given topic (students chose from two alternatives) and fill in the graph, after which they had 1.5 to 2.5 hours to write their essay.

Finally, students completed a short survey with three open-ended questions designed to determine what they liked and disliked about the argument graph as a tool to support their online inquiry. Teachers also shared reactions to the digital tool in a follow-up interview.

Data sources analyzed for this paper included a portion of the completed argument graphs from the United States (102 of 175 total) and completed essays (102 of 175 total), as well as a brief review of students’ open-ended survey responses. From the Finnish data, we analyzed student argument graphs only for the criteria students used to evaluate the quality of online sources.

Analysis

Examining Curriculum-Based Argumentation Tasks (United States & Finnish Data)

To better understand how teachers might use the digital tool to support online reading and argumentation writing in their curriculum (Research Question 1), we created a descriptive summary of each task, including the task focus, the types of texts students were asked to include in their research, the level of scaffolding provided in the directions and the task itself, and how students were graded on their work as part of their course grade.

 

Participating teachers in the United States were also asked to complete a short five-item questionnaire indicating their students’ level of familiarity with the task topic and with terms associated with argumentation (e.g., claims, evidence, warrants); their students’ prior experience with writing argumentation essays; and their own comfort level and training in teaching about reading and writing argumentation texts.

Analyzing Content in Argument Graphs (United States Data)

To inform the design of a future study with this tool, we also sought more information about how students use the tool and about general trends in how typical high school students read online texts and write about topics that include conflicting information (Research Question 2). To make sense of the data across a wide range of tasks and student populations, we conducted two phases of content analyses, looking first at the quality of information in each student’s argument graph and then examining the quality of information in each student’s final essay.

In our analysis of the argument graphs, we analyzed the quality of content students provided in each argument box of the graph. The two researchers and four graduate students were each assigned a different class/task, and the quality of each completed argument box in each student graph was coded as one of the following:

• Clear/Inferred Argument: Argument box contained students’ own words, or words copied and pasted from a source, that represented a reasonable argument supporting or refuting the given claim. Because the boxes represented informal notes, arguments could be stated in the form of a clear argument or an inferred argument.

• Relevant Information: Argument box contained topically relevant information that aligned with the correct side of the argument, but it did not directly or indirectly support or refute the main claim.

• Partially Relevant Information: Argument box contained information that was partially relevant to the topic of the claim, but it did not directly or indirectly support or refute the main claim and did not align with the correct side of the argument.

• Irrelevant Information: Argument box contained information that was not relevant to the claim it was intended to support or refute.

• Other/Unclear Information: Argument box contained information that was difficult to understand and/or the information did not appear to connect with the topic or the student’s main claim.

Tallies of each type of content in the argument boxes were totaled across students within each task to explore general trends associated with that task.
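To make this tallying step concrete, here is a minimal Python sketch of how coded argument boxes could be totaled per task; the code labels and counts are illustrative placeholders, not the study’s data.

```python
from collections import Counter

# Illustrative placeholder codes for each argument box, grouped by task
codes_by_task = {
    "Video Games": ["argument", "argument", "relevant", "partially_relevant"],
    "Country Conflicts": ["relevant", "argument", "irrelevant", "other"],
}

for task, codes in codes_by_task.items():
    tally = Counter(codes)                 # count each content code
    total = sum(tally.values())            # all argument boxes in this task
    shares = {code: f"{100 * n / total:.0f}%" for code, n in tally.items()}
    print(task, shares)                    # per-task percentages, as in Table 1
```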

The content in each student’s graph was also analyzed on two other dimensions:

a) Copy/pasted or modified: Did the content in each argument box appear to be mostly copied and pasted from the digital sources, or was the wording of the student’s notes modified to suggest an attempt to use his/her own words?

b) One-sided or fairly balanced: Was the overall set of filled-in argument boxes primarily one-sided (mostly an attempt to support either the claim or the counterclaim) or somewhat balanced (attempting to offer at least two ideas intended to support and refute the main claim)?

Judgments about the first dimension were made from a visual analysis of the argument boxes in each student’s graph compared to the digital sources provided. Judgments about the second dimension were made from a visual analysis of the number of argument boxes for and against the main claim. A summary statement was written for each student about the quality of content on each dimension. Then, tallies were totaled across students in each task to explore general trends associated with each task.

Analyzing Quality of Content in Evaluation Boxes (Finnish Data)

From the Finnish data, which were collected very recently, we analyzed only the different criteria students used for evaluating online sources. We had intended to analyze data from the evaluation boxes of all students in the United States as well, but very few students in our U.S. sample filled in the evaluation boxes because teachers told them doing so was optional rather than required.

In the digital tool interface, students could click on one of three “traffic lights” to indicate the quality of each source they used as reliable, somewhat reliable, or questionable. They then had the option to justify their choice in an evaluation box with one or more reasons. To analyze content in the evaluation boxes, we first separated all of the utterances that indicated a justification for a source evaluation. Codes were then generated to reflect patterns in each type of justification provided and applied to each evaluation box. The coding scheme for evaluating the reliability of a source included the following justifications: 1) quality of the author of the text or the person interviewed; 2) organization affiliated with the web page (e.g., university, political party, newspaper); 3) practices of an internet forum; 4) author or publisher information provided; 5) content of the web page; 6) other; 7) irrelevant evaluation; 8) no justification provided. The fifth category, justifications related to the content of the web page, included 10 sub-categories (e.g., objectivity, quality of argumentation, informativeness, use of sources).
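A small sketch of how this coding scheme might be encoded and applied when counting justification types; the category labels are paraphrased from the scheme above, and the example utterance codes are hypothetical.

```python
from collections import Counter

# Coding scheme paraphrased from the paper (category 5 has ~10 sub-categories,
# e.g., objectivity, quality of argumentation, informativeness, use of sources)
JUSTIFICATION_CODES = {
    1: "quality of the author or person interviewed",
    2: "organization affiliated with the web page",
    3: "practices of an internet forum",
    4: "author or publisher information provided",
    5: "content of the web page",
    6: "other",
    7: "irrelevant evaluation",
    8: "no justification provided",
}

# Hypothetical codes assigned to the utterances in one student's evaluation boxes
coded_utterances = [5, 5, 2, 1, 5]
counts = Counter(JUSTIFICATION_CODES[c] for c in coded_utterances)
print(counts.most_common())  # most frequent justification types first
```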

Analyzing Quality of Student Essays

To analyze the quality of student essays, we piloted a rubric with six criteria: 1) breadth of perspectives; 2) balance of argumentation within the perspectives; 3) relevancy of reasons provided; 4) use of multiple online sources; 5) integration of multiple sources; and 6) essay coherence. Each element was assigned a score from 1-3 points, for a total of 18 possible points (see Appendix A for the complete rubric). Item scores were totaled within and across students, and we then calculated a mean score for each item within and across tasks to report observed trends in student performance on each criterion.
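As a worked illustration of this scoring arithmetic (with made-up scores, not the study’s data), the sketch below totals the six 1-3 point items per essay and computes a per-item mean across essays in a task:

```python
from statistics import mean

# Hypothetical rubric scores for two essays in one task (six items, 1-3 points each)
essays = [
    {"breadth": 2, "balance": 1, "relevance": 3, "sources": 2, "integration": 1, "coherence": 2},
    {"breadth": 3, "balance": 2, "relevance": 2, "sources": 3, "integration": 2, "coherence": 2},
]

totals = [sum(e.values()) for e in essays]                           # per-essay totals (max 18)
item_means = {item: mean(e[item] for e in essays) for item in essays[0]}
print(totals)       # e.g., [11, 14]
print(item_means)   # mean score per criterion across essays, as reported in Table 3
```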

Analyzing Student Survey Responses

To better understand student perceptions about the tool, we looked for preliminary patterns across their open-ended responses. For this paper, we report illustrative comments and emerging themes representing typical responses to three open-ended questions:

1. What did you like about the argument graph?
2. What did you dislike about the argument graph?
3. What features would you like to see in the argument graph if you were to use it again in the future?

Findings

RQ1: What types of curriculum-based tasks can be used to engage high school students with a digital online inquiry tool that scaffolds reading and writing of argumentative texts?

After talking with teachers and exploring the range of tasks they developed for this three-day assignment, the data suggest the tool and related tasks are flexible enough to fulfill the varied curricular needs of high school language arts and history teachers. We recommended that each teacher upload and publish their assignments in some type of digital space so students could easily access the task and related texts to conduct their inquiry. One history teacher published her task using Google Sites with assistance from the researcher (see hyperlink in Example 1 below). The other four teachers, who worked at the same school, used the Weebly webpage maker to organize a common starting place for all five classes to learn more about the tool prior to completing the task (see http://www.reedahs.com/uri-writing-project.html). Then, each teacher created their own companion website with specific task directions and related texts, as described below in Examples 2-5. The Finnish task, described in Example 6, was designed jointly by the Finnish teachers and the researchers.

• Example 1: Grade 10: Controversy around the use of the Atomic Bomb: https://sites.google.com/site/ushistorydigitoolsp2014/ This task was designed for tenth graders of average to low-level overall academic performance at the end of a history unit about World War II. The task asked students to conduct research and write an essay that explored the controversy around the use of the atomic bomb in Hiroshima and Nagasaki. Specifically, the claim given to students was “The decision to drop the atomic bombs on the cities of Hiroshima and Nagasaki was necessary to end the war with Japan.” Because students in this group had struggled with previous argumentation assignments, task directions were designed with a great deal of scaffolding to support students as they organized evidence informed by three specific perspectives (e.g., perspectives of politicians, military leaders, and scientists). Each perspective box provided a specific question to guide students’ thinking. Eight websites of varying difficulty and varying amounts of embedded links to additional online sources were included in the text set. These websites were part of the open Internet rather than being contained in a library database of isolated texts, which was the case for the remaining example tasks. Students were given an essay grade for their work using a common writing rubric their school developed for assessing argumentation essays and the completion of this task replaced their normal school-wide assessment for writing argumentative essays for that marking period.

• Example 2: Grade 9: Controversial Issues in World History (Teams): http://www.reedahs.com/argumentative-writing-project.html This task engaged small groups of 3-4 students in a World History class with information about one of seven different conflicts from around the world. Students were 9th graders of average to low-level overall academic performance, and the topics included conflicts occurring in Israel/Palestine, Cuba, Nicaragua, El Salvador, Afghanistan, Korea, and Egypt. In each case, students were asked to gather evidence to support the claim (and counterclaim) that their assigned country was moving toward a stable democracy or working toward some type of reunification and/or efforts for peace. Students had no prior instruction about any of these topics before they were asked to complete the task. Each team was directed to a different Google Docs document that included a specific claim to support and linked to three specific informational articles about their topic, selected from within their library database, so students did not have to search for information on the Internet. Each article was intended to give students background about the issue and ideas for crafting their arguments. Students were given an essay grade for their work using a simple rubric developed by the teacher for the task, but a large portion of their grade was based on class participation.

• Example 3: Grade 10 Honors: Who Killed Romeo & Juliet? http://mrsjthibodeau.weebly.com/romeo-and-juliet.html This task asked students in two tenth grade Honors English classes to conduct research and write an essay explaining who was ultimately responsible for the deaths of Romeo and Juliet. The task took place at the end of a unit on Shakespeare and the play Romeo and Juliet. It differed from the other examples in that it involved looking for evidence in a narrative play (as opposed to an informational article), although students were also directed to three critical essays about the play to use for additional support. Each essay was available only within the school’s library database, and none of the essays contained hyperlinks to outside sources (although students were able to explore additional online sources if they chose). Most students generated claims that a certain character was ultimately responsible and crafted their arguments with evidence pointing to reasons why that character was responsible and why other characters were not. Students were given a class participation grade for their work but were not graded formally on this assignment.

• Example 4: Grade 9 (Alternative): Should Minors Be Allowed to Play Violent Video Games? http://swsahs.weebly.com/9th-grade-assignments This task was assigned to two groups of ninth grade students who attended an alternative “school within a school” program for students at risk of dropping out. Students were asked to conduct research and write an argumentation essay answering the question, “Should minors be allowed to play violent games?” Students were directed to six websites organized by their teacher under one of three headings that classified the information as supporting one of three positions on the issue (i.e., there is a link between video games and real-life violence; there might be a link; and there is not a link between the two). Students worked for three class periods on the task (like students in the other groups), but there were some technical problems with the computers as well as several absences, so the teacher allowed students two additional class periods to conduct their research, fill in their argument graphs, and write their final essays. Students were given an essay grade for their work using a simple rubric developed by the teacher for the task.

• Example 5: Grade 10: Controversy around Genetic Engineering http://mrwaltonenglish.weebly.com/ This task asked two classes of average to above-average tenth grade students to conduct research on the Internet and write an argumentation essay about whether or not the genetic manipulation of human embryos should be allowed. Students were directed to two online websites and two short documentary videos to inform their research. Several students also took advantage of the teacher’s invitation to search for other websites they thought could be useful for their inquiry. Students had no prior instruction about the topic, but according to their teacher, they were familiar with procedures for analyzing literary texts and writing persuasive essays. Students were also asked to construct their own claim, but the directions suggested some specific questions that could be used to guide their thinking. Students were given a class participation grade for their work and told they would receive full credit for trying their hardest to complete the task.

• Example 6: Grades 10-11: Exam Task. This task was designed as an exam task for Finnish students. In the exam, students were given a scenario and were then able to choose one of two topics, each stated as a claim: 1) social media increases people’s quality of life, and 2) genetic engineering of plants and animals should be allowed. Students worked either individually or in pairs. No additional elements of the task were given, nor were students pointed to specific online texts. They were asked to search for relevant sources on the Internet and fill in the graph with arguments to support and refute the main claim (40-70 minutes). After the searching phase, students had 1.5 to 2.5 hours to write their essay. Before the exam, students had a 90-minute class in which they were introduced to the basic concepts included in the Online Inquiry Tool (claim, argument, counterargument, perspective, synthesis). During the class, the teacher modeled use of the argument graph, and students completed a practice task guided by the teacher. The teachers also explained expectations for the graph and essay.

After data were collected in the United States, participating teachers completed the five-item questionnaire described earlier about their students’ familiarity with the task topic and argumentation terms, their students’ prior experience writing argumentation essays, and their own comfort level and training in teaching argumentation reading and writing. Their responses indicated most students had some exposure to argumentation concepts and practice writing argumentation essays in English Language Arts classes, but most teachers indicated their students were more accustomed to analyzing literature (as opposed to informational texts) and to writing persuasive essays, as opposed to essays that considered both arguments and counterarguments as part of their response. The class that completed the task about the atomic bomb was more used to regular school-wide argumentation tasks, but students in this class typically struggled to complete these types of assignments.

With respect to their own preparation to teach argumentation writing skills, three of the six teachers responded to this question. Each of these teachers ranked his/her own comfort level teaching argumentation tasks as relatively high (7 or 8 out of 10). The history teacher with the atomic bomb task reportedly received five professional development (PD) sessions on the topic and completed several background readings about argumentation writing, while the history teacher of the global issues task had no formal training, and the two ELA teachers were familiar with persuasive writing with literary texts but did not typically work with students on how to read or write informational texts.

 


RQ2: After being shown a short video about how to use the digital tool, how do students engage with the tool to organize and integrate conflicting information across multiple sources and compose argumentation essays?

Overall, the data we collected suggested that regardless of content area or level of typical academic performance, many students struggled with many aspects of these types of tasks.

Quality of Content in Student Graphs

Across the tasks, students varied greatly in their ability to use reasonable arguments to support or refute the main claim (see Table 1). For the most part, students were able to locate ideas from the text that were topically relevant. Less often, however, were students able to craft these ideas as arguments that supported or refuted the main claim. Note that because students were using the graph to take informal notes, it was not expected that the argument boxes would contain fully constructed arguments. Thus, the researchers inferred from each student’s notes whether the content could be used as a reasonable argument connected to the claim written at the top of the graph.

Table 1. Quality of content in argument boxes

Task | # of Argument Boxes | Inferred Argument | Relevant Information | Partially Relevant Information | Irrelevant, Unclear, or Other
ELA Gr. 9: Video Games | 44 | 61% | 23% | 9% | 7%
History Gr. 9: Atomic Bomb | 74 | 36% | 31% | 24% | 9%
History Gr. 9: Country Conflicts | 79 | 24% | 14% | 23% | 39%
ELA Gr. 10 Honors: Rom & Juliet | 157 | 73% | 22% | 4% | 1%
ELA Gr. 10: Genetics | 327 | 14% | 30% | 1% | 55%

We believe these different trends could be explained by differences in each task’s structure, differences in the readability and familiarity of texts and topics, differences in how students were graded for the quality of their work, and differences in the typical academic performance among students in each class.

 


For example, although ninth grade students in the violent video games task belonged to a special program for at-risk, lower performing students, they were very familiar with the topic because the teacher wanted to lessen the possibility of behavior problems by selecting a topic students would likely be interested in. In addition, the teacher highly structured the task by selecting only a few sources (that were relatively easy to read) and organizing them under headings that indicated whether each text was taking a stance for or against the topic. Finally, the teacher required that all students complete the assignment to an acceptable quality or they would not be able to attend the end-of-year field trip. Thus, a relatively high percentage (61%) of the argument boxes for this class represented inferred arguments (many of which were copied and pasted from the sources), and a relatively low percentage of responses (16%) was classified as partially relevant, irrelevant, or otherwise unclear. The high percentage of arguments in this class reflects the students’ ability to locate argumentative statements in the text and copy/paste them into the graph, as opposed to generating arguments in their own words.

In contrast, typically low performing ninth graders who completed the U.S. History task about country conflicts had a relatively low percentage (24%) of content in the argument boxes classified as arguments and a relatively high percentage (63%) of content classified as partially relevant, irrelevant, or otherwise unclear. Notably, this task was structured similarly to the video games task, as students in each group were directed to only three sources that were organized under claims they appeared to support. However, these findings could be related to a combination of challenging factors, including students’ very low familiarity with any of the seven global conflicts, the very high readability levels of the texts, and the complexity of the tasks (e.g., one group was asked to decide whether Israel should offer a two-state solution; another group was asked to determine if Latin America was moving toward a stable democracy). In addition, the teacher graded student work with a simple rubric, but students were told they would be graded primarily for their willing participation in the project.

In another example, tenth grade students in the genetics task represented a large range of ability levels across the three classes. While some students did provide reasonable arguments to support or refute the claim, many students struggled to make sense of the key ideas in the texts they read or to line them up on the correct side of the graph to accurately support or refute the claim. In addition, the topic of genetic manipulation was new to many students, and two of the sources were videos as opposed to written texts, which could have made it much more difficult to locate and record information in the form of an argument to copy/paste or reword in the argument box. Also important was that students were not graded on the quality of their graphs and essays; they were told simply that if they attended class both days and made an attempt to complete the tasks, they would get the full 40 points for participation. Thus, many students may not have put their full effort into completing the tasks or may have found it difficult to generate ideas in the form of supporting arguments, which may account for the low percentage (14%) of content in the argument boxes classified as arguments and the relatively high percentage (60%) of content classified as partially relevant, irrelevant, or otherwise unclear.

The content of the students’ graphs was also analyzed on two other dimensions: a) whether the content in the argument boxes was mostly copied and pasted from sources or modified and written in students’ own words, and b) whether the filled-in argument boxes were primarily one-sided or somewhat balanced. In three of the tasks (Video Games, Atomic Bomb, and Country Conflicts), more than half of the argument boxes included copy-pasted information, whereas in the other two tasks (Romeo & Juliet and Genetics), the majority of the argument boxes included information written in the students’ own words (see Table 2). Furthermore, in three tasks (Video Games, Romeo & Juliet, and Genetics), students attempted to consider different sides of the issue in a fairly balanced way, whereas in the other two tasks, the focus was largely one-sided.

Table 2. Type of content in argument boxes

Task | Most content copy/pasted | Most content modified | Content mostly one-sided | Content fairly balanced
Gr. 9: Video Games (N=16) | 56% | 44% | 31% | 69%
Gr. 9: Atomic Bomb (N=17) | 65% | 35% | 59% | 41%
Gr. 9: Country Conflicts (N=19) | 74% | 26% | 84% | 16%
Gr. 10: Romeo & Juliet (N=26) | 23% | 77% | 15% | 85%
Gr. 10: Genetics (N=25) | 32% | 68% | 12% | 88%

How do students justify their source evaluations in the argument graph? (Finnish data only)

Students most commonly justified their evaluations by attending to the quality of content (50% of all justifications). These justifications were most often related to objectivity, use of sources, and research-based information. Judgments of objectivity included both positive and negative indicators of reliability, depending on whether students had evaluated the source as reliable or only somewhat reliable. In particular, when students had some doubts about reliability, they felt the information was too biased or based too much on opinion.

Students also justified their evaluations by considering the trustworthiness of the organization (e.g., university, company, newspaper) that sponsored the web page (18.7% of all justifications) or the level of expertise, education, reputation, or position of the author or a person interviewed (10.2%).

When students worked in pairs (a situation unique to the Finnish tasks), they provided slightly more justifications (M = 6.18) than students who worked individually (M = 4.32); however, this difference was not statistically significant. The pairs (M = 4.50) and individual readers (M = 2.58) did differ significantly in how many different kinds of justifications they provided for their evaluations (t = -3.65, p < .01).
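For readers who want to reproduce this kind of comparison, here is a minimal sketch using an independent-samples t-test from SciPy. The counts are invented stand-ins, since the study’s raw data are not reported here.

```python
from scipy import stats

# Placeholder counts of distinct justification types per argument graph;
# the paper reports M = 2.58 for individuals vs. M = 4.50 for pairs
individual_counts = [2, 3, 2, 3, 3, 2, 2, 3, 4, 2]   # invented example data
pair_counts = [4, 5, 4, 5, 4, 5]                     # invented example data

# Independent-samples t-test comparing the two groups' means
t_stat, p_value = stats.ttest_ind(individual_counts, pair_counts)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```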

 


Although research suggests that many students do not spontaneously evaluate online sources, these findings suggest they may be capable of providing appropriate justifications for their evaluations when they are prompted and motivated to do so. These findings also suggest that collaborative work may provide students with opportunities to learn from each other about different ways to critically evaluate online sources.

Quality of Student Essays

Across the tasks, students also varied greatly in their ability to write argumentation essays (see Table 3). Trends in the data suggest students, on average, scored more points for a) using information from multiple online sources in their essay (averaging a range of 1.2-2.3 out of 3 points); b) representing two or more perspectives on the issue (averaging a range of 1.1-2.2); and c) providing relevant reasons for supporting or refuting claims (averaging a range of 1.7-2.0). Item scores also suggest that students, on average, scored fewer points for a) including a balance of arguments and counterarguments related to the main claim (averaging a range of 1.3-1.8 out of 3 points) and b) integrating ideas from multiple sources (averaging a range of 1.1-1.8 out of 3 points) rather than listing information from each source separately.

These trends suggest students may have found it slightly easier to include multiple and relevant ideas in their essays (resembling elements of persuasion) than to write in ways that consider and cohesively integrate these relevant ideas with appropriate counterarguments (resembling elements of argumentation). This would make sense, since most secondary school students in the United States are used to writing persuasive essays, but far fewer are accustomed to writing argumentation essays that include arguments and counterarguments. This finding also aligns with language arts teachers’ survey comments indicating that they teach students how to analyze literary texts and write essays but rarely ask students to consider counterarguments in their writing.

Especially interesting is that, on average, the students with the two highest essay scores were also the groups with many at-risk and struggling readers (see the Atomic Bomb and Video Games tasks in Table 3 below). Again, while there could be many explanations for the range of essay scores, there are some plausible reasons for the performance of these two groups. First, students who completed the atomic bomb task were quite familiar with writing argumentation essays. Before the study, they had received a year of focused instruction in both language arts and history and two rounds of school-wide practice in writing argumentation essays. The teacher of this class was also personally invested in reading about how to teach argumentation in high school and had participated in five professional development sessions about argumentation (compared to few or no PD sessions among the other participating teachers in this study). In contrast, students in the classes with lower total scores received little to no practice writing argumentation essays in history, and their language arts instruction focused on analyzing literary texts as opposed to reading and writing argumentative informational texts. Another important difference is that the atomic bomb task took place at the end of a unit about World War II, so the scenario and the essay were much more connected to students’ previous class work, whereas most of the other tasks were isolated assignments not connected to what students were studying in class. Finally, because students in this class had not been very successful in writing strong argumentation essays in the past, the teacher designed the Atomic Bomb task with a great deal of scaffolding that most likely supported students in their reading and writing of argumentation texts.

With respect to the relatively high essay scores among at-risk students who completed the Video Games task, these findings likely reflect a combination of the extended time given to write their essays and the expectation from their teacher that they needed to fill in both sides of the argument graph and write their essays to an acceptable level in order to attend the end-of-year picnic with the rest of their grade. Thus, it makes sense that students who are held accountable for quality work would likely try harder to complete the task as assigned.

Table 3. Mean scores on aspects of essay quality (max. 3 points per aspect)

Task | Breadth of perspectives | Balance of arguments | Relevance of reasons | Use of multiple sources | Integration | Coherence | Total score (max 18)
History: Atomic Bomb | 2.2 | 1.8 | 2.5 | 2.3 | 1.8 | 1.9 | 12.5
ELA: Video Gaming | 1.1 | 1.6 | 2.7 | 1.8 | 1.6 | 1.8 | 10.6
Lang Arts: Genetics | 1.6 | 1.8 | 1.7 | 1.6 | 1.1 | 1.6 | 9.4
History: Country Conflicts | 1.3 | 1.3 | 2.0 | 1.2 | 1.4 | 1.8 | 9.0
Lang Arts: Romeo & Juliet | 1.3 | 1.6 | 2.0 | 1.2 | 1.1 | 1.6 | 8.8

RQ3: How do students and teachers perceive the utility of the tool and its ability to support the processes required to read online and write argumentative texts?

Below are some of the emerging themes and illustrative comments representing typical responses to the three open-ended survey questions.

 


1. What did you like about the tool?

Helped organize information in one place
• “How easy I can sort what goes where and where I got my information.”
• “It’s clear and straight to the point - it helped me see what needed to be done and I got it done.”
• “I like how it helped me organize my thoughts all in one place.”

Helped focus on each perspective and both sides of the argument
• “It helps me focus on one perspective at a time and I don't have to click back to the websites and look for the information again.”
• “It made me have more than one perspective in mind.”
• “It helped me make sure I had a counter-argument for every point I have.”

2. What did you dislike about the tool?

Argumentation is unfamiliar, hard, and takes up time; I want to be engaged!
• “It was a little confusing at first, but I got the hang of it.”
• “The synthesis and perspective boxes were hard.”
• “I didn't like how it took up time from actually writing the essay.”
• “It needs more features to grab my attention.”

Critical evaluation takes up time, and what’s the point?
• “The signal lights. They were unneeded. If you were writing an essay, you would already know if your sources are credible.”
• “I didn’t get the reason why to have the red, yellow, and green lights on the side so I ignored that.”

3. What features would you like to see if you used the tool again?

• Functionality and flexibility
o A fun facts / side notes box for random ideas that you’re not sure where they fit
o A little more flexibility to structure the boxes and labels like I want; connection boxes for in between the pieces of evidence
o Bigger boxes and more space in the argument boxes for writing reasons why (maybe a separate box for warrants)

• More explicit support embedded into the tool (and into instruction)
o More helpful hints and explanations, especially for the big words like perspective, argument, and synthesis
o “Get rid of the traffic light feature”

Comments from Teachers

• I certainly feel it has potential to be a beneficial tool in the argumentative writing process. I appreciate the fact that it slows students down and makes them look closely at what evidence will support or refute their claims.
• The tool seemed a bit basic - like a paper graphic organizer on a computer screen. I think students/teachers will expect more of an electronic organizer moving forward. More interactive perhaps?
• The source evaluation tool seemed to pose a slight problem for some students. Likewise, the perspective option also proved to be confusing for the students. They appeared unsure of how to effectively use that box in processing their source.

Conclusion

The ability to comprehend and respond to information on the Internet will play a central role in our students’ success in an information age. This study builds on previous work (Author, 2012; 2013) examining how to support online inquiry processes while reading multiple sources and writing argumentative texts, two critical dimensions of the Common Core State Standards. Findings can guide the development of additional scaffolds within the Digital Online Inquiry Tool. They can also inform future research and instructional decisions about how to enhance deeper engagement with online texts while improving the quality of secondary students’ source-based argumentative writing in academic contexts.

References

Barzilai, S. & Zohar, A. (2012). Epistemic thinking in action: Evaluating and integrating online sources. Cognition and Instruction, 30(1), 39–85.

Beach, R., Thein, A.H., & Webb, A. (2012). Teaching to exceed the English language arts common core state standards. New York, NY: Routledge.

Bogdan, R. C., & Biklen, S. K. (2003). Qualitative research for education: An introduction to theories and methods (4th ed.). Boston, MA: Allyn & Bacon.

Brand-Gruwel, S., Wopereis, I., & Vermetten, Y. (2005). Information problem solving by experts and novices: Analysis of a complex cognitive skill. Computers in Human Behavior, 21(3), 487-508.

Britt, M. A., & Rouet, J. F. (2012). Learning with multiple documents: Component skills and their acquisition. In M. J. Lawson & J. R. Kirby (Eds.), The quality of learning: Dispositions, instruction, and mental structures. Cambridge, England: Cambridge University Press.

Chipperfield, B. (2006). Cognitive load theory and instructional design. Saskatoon, Saskatchewan, Canada: University of Saskatchewan. Retrieved from http://www.usask.ca/education/coursework/802papers/chipperfield/chipperfield.pdf

Gerjets, P., Kammerer, Y., & Werner, B. (2011). Measuring spontaneous and instructed evaluation processes during Web search: Integrating concurrent thinking-aloud protocols and eye-tracking data. Learning and Instruction, 21(2), 220-231.

Kiili, C., Laurinen, L., Marttunen, M., & Leu, D. J. (2012). Working on understanding during collaborative online reading. Journal of Literacy Research, 44(4), 1–36.

Krippendorff, K. (2004). Content analysis: An introduction to its methodology (2nd ed.). Thousand Oaks, CA: Sage.

Kuiper, E., Volman, M., & Terwel, J. (2005). The Web as an information resource in K–12 education: Strategies for supporting students in searching and processing information. Review of Educational Research, 75(3), 285–328.

Leu, D. J., Kinzer, C. K., Coiro, J., Castek, J., & Henry, L. A. (2013). New literacies: A dual-level theory of the changing nature of literacy, instruction, and assessment. In R. B. Ruddell & D. Alvermann (Eds.), Theoretical models and processes of reading (6th ed., pp. 1150-1181). Newark, DE: International Reading Association.

 


Morgan, D. L. (2007). Paradigms lost and pragmatism regained: Methodological implications of combining qualitative and quantitative methods. Journal of Mixed Methods Research, 1(1), 48-76.

Persky, H. R., Daane, M. C., & Jin, Y. (2003). The nation’s report card: Writing 2002. Washington, DC: US Department of Education, National Center for Education Statistics.

Stadtler, M., & Bromme, R. (2008). Effects of the metacognitive computer-tool met.a.ware on the web search of laypersons. Computers in Human Behavior, 24(3), 716-737.

Toulmin, S. (1958). The uses of argument. Cambridge, England: Cambridge University Press.

 


Appendix A

Rubric used to score the quality of student essays

Student Name:

Area of essay quality | Score: 1 | Score: 2 | Score: 3
Breadth of perspectives | The essay represents only one perspective on the issue. | The essay represents two perspectives on the issue. | The essay represents multiple perspectives on the issue.
Balance of argumentation within the perspectives | None of the presented perspectives is covered with both arguments for and counterarguments. | At least one of the presented perspectives is covered with both argument(s) for and counterargument(s). | More than one of the presented perspectives is covered with both argument(s) for and counterargument(s).
Relevancy of the reasons | The essay provides mainly irrelevant reasons for presented claims. | The essay provides both relevant and irrelevant reasons for presented claims. | The essay provides primarily relevant reasons for presented claims.
Use of source-based evidence | Evidence provided in the essay is informed mainly by personal knowledge. | Evidence provided in the essay is informed by both sources and personal knowledge. | Evidence provided in the essay is informed primarily by sources rather than personal knowledge.
Use of multiple online sources | The essay utilizes one or two sources. | The essay utilizes three sources. | The essay utilizes more than three sources.
Integration of multiple sources | The essay lists information from each source separately. | The essay partly integrates ideas from multiple sources. | The essay sufficiently integrates ideas from multiple sources.
Coherence in the essay | The essay is organized as a separate list of ideas. | The essay is clearly organized, but it lacks cohesive ties that link the paragraphs together, or the cohesive ties are used in a mechanical way. | The essay is clearly organized and forms a coherent whole; cohesive ties are used in versatile ways.

Comments: