Differentiated e-Learning: Five Approaches through Instructional Technology

Kathleen Scalise, Faculty of Educational Leadership, University of Oregon, 5267 University of Oregon, Eugene, OR 97403-5267, USA

E-mail: [email protected]

Revised April 2007

Abstract: Differentiated instruction is an approach to teaching that acknowledges people have multiple paths for learning and for making sense of ideas. In e-learning, differentiated instruction has the same meaning as in traditional instruction, but different tools are available to help students learn and to provide information in ways most appropriate to them, including types of new media inclusion, levels of interactivity, response actions, and enhanced ability to collect data on the fly and to deliver custom content. This paper discusses what the tools of e-learning contribute to differentiated instruction and shares a framework for five common approaches to adaptive courseware.

Keywords: differentiated instruction, e-learning, adaptivity, personalized learning, personalised learning, individualization, assessment, evaluation, rule-based methods, self-determined learning, diffuse differentiation, Gaussian, Bayesian networks, neural networks, item response models.

1 Introduction

Differentiated instruction is an approach to teaching that acknowledges people

have multiple paths for learning and for making sense of ideas (Tomlinson and McTighe,

2006, Willis and Mann, 2000, Tomlinson and Allan, 2000, Tomlinson, 2001, Sizer, 2001,

Reis et al., 1988, Hall, 2002). As instructors when we differentiate instruction in the

classroom, we are saying that we know students come to us with different backgrounds,

preferences and needs. We believe how we respond will make a difference. In the world

of e-learning and online instruction, differentiated instruction is playing out in new forms

(Scalise, 2005, Trivantis, 2005, Turker et al., 2006, Taylor, 2002). In e-learning,

differentiated instruction has the same meaning as in traditional instruction, but different

tools are available to help students learn and to provide information in ways most

appropriate to them, including types of new media inclusion, levels of interactivity,

response actions, and enhanced ability to collect data on the fly and to deliver custom

content (Parshall et al., 2002, Parshall, 1996, Bennett, 2000). This paper discusses what

the tools of e-learning contribute to differentiated instruction and shares a framework for

five common approaches to differentiated e-learning: diffuse, self-directed, naïve,

Boolean, and model-based. The framework introduced here considers one way in which

the various approaches can be categorized, based on what types of decision-making and

evidence are used to establish the differentiation choices.

One interesting aspect of differentiated e-learning — or e-diff — is how quietly

personalization or individualization, one form of differentiation, has slipped into online

learning products in recent years (Trivantis, 2005, Hopkins, 2004). In e-learning

products, a variety of assessment approaches are being used for such diverse purposes as

adaptive delivery of content, individualizing learning materials, dynamic feedback,

cognitive diagnosis, score reporting and course placement (Gifford, 2001).

The instructional decisions being made based on the differentiation approach

could have substantial consequences for the learner. If the assessments are being used

formatively, or in other words to guide instruction during the process of

learning, then differentiating the challenge level, types of formats, representations and

feedback (Black and Wiliam, 1998, Black et al., 2002) might make a difference in how or

how much the child or adult learns. Information can also be used summatively, or in

order to make a judgment about student learning, such as the appropriate course

placement or who gets access to what educational opportunities (Resnick and Resnick,

1992). Feed forward to teachers by systems that collect and report information can also

influence teacher expectations of students. Taken all together, the potential of

differentiation to affect student learning can be great (Tomlinson and McTighe, 2006). In

the e-learning context, some types of differentiation also become faster and easier to

implement, so it is important that differentiation is well done, just as is true in the

classroom-based context.

When instruction is differentiated in the classroom, it is often clear that multiple

approaches are spiraled into the curriculum. For instance, experiences repeat in different

forms or students are grouped and regrouped for course placement and learning activities.

Online, however, it can be much less apparent. If one learner is given something different

on the computer than some other learner, either locally or at a remote site, it can be hard

to tell, since the two learners aren’t looking at the same screens. Neither knows what to

expect in the first place and students in online settings often are used to an asynchronous

pace that would leave students in different places at different times anyway. So there

typically is no basis for comparison. The learner may not even realize that had he or she

interacted differently with the computer, it would have interacted differently with them.

Also, unless disclosed, we don’t necessarily know what e-interfaces are gleaning about a

learner or the purposes to which the inferences are being put (Nielsen, 1998).

2 What is meant by differentiation?

Here some common language is introduced as a foundation for what is meant in this

paper by differentiated instruction, whether it be in the classroom or in e-settings. Many

teaching approaches that focus on meeting the needs of the individual student tend to

involve one of five types of differentiation, in response to student needs of readiness,

interest and learning profile (Tomlinson and McTighe, 2006, Willis and Mann, 2000,

Tomlinson and Allan, 2000, Tomlinson, 2001, Sizer, 2001, Reis et al., 1988, Hall, 2002).

These general types of differentiation are:

1. Differentiation of content – when students start at different places in the

curriculum and may proceed at different paces.

2. Differentiation of process – emphasizing many modalities of learning profiles,

including individual learning skills profiles (Boyatzis and Kolb, 1991), learning

inventories (Dunn et al., 1984, Lovelace, 2005), cognitive dimensions (Sternberg,

1997), multiple intelligences (Gardner, 1999) or other types of learning theories

that may suggest how we prefer to learn. Learning profiles are controversial

(Curry, 1990, Stahl, 2002) and different approaches can be informed by diverse

elements. In addition to learning styles and intelligence preferences, these can include

demographic aspects of the learner, such as gender, or group factors, such as

possible cultural indications.

3. Differentiation of product – different students have different assignments and turn

in different products. This is a somewhat controversial type of differentiation –

what are the standards we judge students by and when can their work products be

different assignments and still be assessed and evaluated as the same “course” for

learning credit?

4. Differentiation of affect – affect, or the feelings and attitudes of the learner, may

be a differentiation premise in e-learning, sometimes for building the affective

characteristics of the learner in relation to the learning task and self. These include

confidence building exercises such as dynamic assessment, where hints are

provided to the learner until he or she grasps the learning objective.

5. Differentiation of learning environment – in the e-learning context this can

include individual, small or large group learning; learning with or without

technology; incorporating various forms of blended hybrid instruction that

combine elements of technology with offline or classroom-based instruction in

different ways; differences in learning location; and synchronous and

asynchronous learning.

This paper is not about the fifth kind of differentiation listed above, including

learning delivery location, as in traditional schools with some hybrid e-learning content

compared to fully online courses and/or cyberschools (National Leadership Institute,

2005). Location differentiation, and time differentiation as in synchronous and

asynchronous learning, are discussed frequently in the technical and popular literature. E-

diff courseware as a paradigm for the four other types of differentiated instruction can be

used in distributed, hybrid or fully cyber contexts, synchronously or asynchronously, and

in various blended, direct or child-centered instructional relationships with teachers and

instructors. So location and synchronous/asynchronous time are not the elements of

differentiation that we discuss here.

3 Some differences between classroom-based and e-learning differentiation

One significant way that the approach to differentiation in e-learning products often

differs from classroom-based approaches is in intent. While in classroom-based

approaches, the differentiation of content can refer to different knowledge, understanding

and skills, some researchers argue that it should likely refer not to different learning

outcomes but rather to different ways to access those learning outcomes (Tomlinson and

McTighe, 2006). The argument often is that the strongest classroom-based differentiation

approaches ensure that all students work with the essential understandings for a segment

of learning, thus ensuring stability of the most substantial learning goals. However, e-

learning products are often designed to stretch the individual student’s opportunity to

learn, to include going beyond learning objectives where interest, engagement, and

motivation are strong, or to give the learner choice among objectives. This is partly

because products may be used outside a formal learning setting such as standards-based

classroom instruction, and instead in home environments, extended study facilities such

as student learning enrichment centers, higher education, and in adult work environments.

But even for K-12 classroom-based products, the philosophy of individual opportunity to

learn may encourage mastery beyond or in addition to standards. So while e-learning

differentiation definitely can refer to different ways to access the same learning

objectives, it also often adjusts learning objectives, within a scope of what is deemed

desirable for the goals of the product. When, where and whether this is appropriate to do,

and what the learning gains might or might not be, remains a substantial discussion and

area of research for the e-learning community.

4 A Framework for Differentiated e-Learning Strategies

Computers aren’t instructors who can perceive and adjust on the fly and students

aren’t markets with known characteristics. So how does e-learning content know how to

work differently with different people? The technology for sending different content to

different people is easy to implement, requiring only basic HTML and back-end databases,

but the logic of knowing what to effectively offer different students is much more

challenging (Scalise and Wilson, 2006).

There are numerous strategies for establishing the logic for differentiation in e-

learning products. The framework introduced here considers one way in which the

various approaches can be categorized, based on what types of decision-making and

evidence are used to establish the differentiation choices. The framework stems from the

author’s research on differentiated e-learning products with the UC Berkeley/University

of Oregon Technology and Assessment Group (Scalise et al., 2006a, Scalise et al., 2006b,

Scalise et al., pending, Kennedy et al., 2007), the Berkeley Evaluation and Assessment

Research Center (Scalise and Wilson, 2006), and the Berkeley-based Distributed

Learning Workshop (Gifford, 2001, Scalise and Gifford, 2006).

The five categories of the framework are summarized below and some examples of

products are given in the next section:

“Diffuse” approaches to differentiation, in which students receive the same

content but have multiple opportunities for learning and are provided with

different approaches for making sense of ideas carefully planned to be

“diffused” throughout the content.

Self-directed approaches, in which students receive different content by a

mechanism of self-selection built into the content. This introduces

differentiation through student choice.

Naïve differentiation, in which the computer is determining the course of

differentiation, not the user, but no real plan or overall strategy is in place

in the e-learning content for why differentiation is happening, or what it is

intended to mean in the learning context.

Boolean differentiation, in which the computer uses types of Boolean logic,

such as various types of rule-based frameworks or decision trees, to determine

how to adjust content for different students.

Model-based differentiation, in which expert opinion is combined with a

variety of data mining techniques to generate ideas for how content might be

appropriately differentiated online.

Note that each of these categories is intended to represent an iconic theme or family

of strategies. But the approaches can also be combined and often are in e-learning

products. So in many e-learning products, examples of several of these different types of

e-diff can be seen.

5 How Each of the Five Strategies Works

In diffuse differentiation, e-learning content is intentionally designed to span a range

of goals such as learning modalities and performance abilities. There is no direct

intention to assess or match the needs of individual users, or to customize content or

feedback, as all students receive the same content. But enough variety and different

sources of stimulation are provided to interest and engage diverse audiences. Similar to

ideas in classroom differentiated instruction, careful attention can be paid to inclusion of

a range of media that may be appropriate for different learners, such as video, audio,

interaction and written expression, and preferred representations and interactions. The

hope here is that with enough variety provided, everyone’s needs can be addressed. This

is one of the most common approaches to classroom-based differentiated instruction.

With the capabilities offered by the technology, the e-learning platform can allow easy

integration of varied media for diffuse differentiation and may make distribution of

content and activities, and collection of assessment data, simpler than without use of

technology.

An example of diffuse differentiation in a commercial e-learning product can be seen

in CAHSEE Conductor (California High School Exit Exam) computer-mediated learning

materials. The materials generate topic and lesson specific advice for students on how

they can become more aware of their own learning strategies and practices. The goal is

meeting the learning proficiencies specified by the California High School Exit

Examination required for high school graduation in the state. A diffuse differentiation

strategy employed in the product relies on extensive use of interactive instructional

“objects,” such as dynamic graphs and charts which students change and manipulate,

accompanied by audio. This feature is intended to allow the student to visually attend to

representations, kinesthetically interact with learning concepts, and simultaneously listen

to explanatory information. In this way, students receive the same content but diffused

throughout are multiple opportunities for learning and different approaches for making

sense of ideas.

The second strategy, self differentiation, is a self-determination approach to

differentiating instruction. Numerous, or at least more than one, possible route or path of

learning is made available in the courseware, and students select their personal choices as

they go. This can consist of simply selecting the order of completion among a fixed menu

of learning activities or modules. More flexibility and choice comes about when students

get to select from among a range of different activities, leaving some out and doing

others. In the self-determination approach, as in all the other approaches, the target of

choice can be content, such as what you want to learn and how fast; learning modality,

such as a new media form in which you want the instruction provided; or product, the

activity or tasks in which you are going to engage. The instructional design of the

courseware determines where these choice points of differentiation are allowed.

This is a very common type of differentiation seen in e-learning content. E-learning

environments are often built on a “hyperlink” paradigm, such as seen on the Web, where

links in the content can be self-selected for additional information or learning

opportunities.

An example of self differentiation in an interesting professional context is found in the e-

learning products of the Collaborative IRB Training Initiative (CITI), housed at the

University of Miami. CITI offers adult-learning courses in the protection of human

research subjects. The Basic Course includes 17 modules for Biomedical investigators

and 11 modules for Investigators conducting Social/Behavioral research. Each module is

focused on a different aspect of bio-ethics and human subjects research. This is an

interesting context for e-learning differentiation as human subjects protocols for the

responsible conduct of research in the U.S. are quite specific and might not seem a

context for differentiated learning. Here the self-selection differentiation is by course

module. For CITI certification, completion of a uniform set of course modules is

required. But an additional set of self-selected course modules is also required. The

group of self-selected modules covers different topics, such as human subjects

protections for special needs populations or for research in schools. The learner is able to

select the topics they feel are most appropriate to their learning needs. Multiple Learner

Groups also can be established to customize the course to the learner’s role in human

subjects research.
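
As a rough illustration of how a self-directed design of this kind can be organized, the sketch below (in Python) models a course as a fixed required core plus a pool of learner-selected elective modules, with a simple completion check. The module names, the elective count, and the certification rule are hypothetical and are not taken from the CITI courseware.

# A minimal sketch of self-directed differentiation: a fixed required core plus
# learner-chosen electives. Module names and the elective rule are hypothetical.

REQUIRED = {"history and ethics", "informed consent", "privacy and confidentiality"}
ELECTIVES = {"research in schools", "special needs populations",
             "international research", "internet research"}
ELECTIVES_NEEDED = 2   # hypothetical rule: any two self-selected electives

def certified(completed: set) -> bool:
    """Completion = all required modules plus enough self-selected electives."""
    core_done = REQUIRED <= completed
    electives_done = len(completed & ELECTIVES) >= ELECTIVES_NEEDED
    return core_done and electives_done

# The learner, not the system, decides which electives to pursue.
my_choices = REQUIRED | {"research in schools", "special needs populations"}
print(certified(my_choices))   # True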

While self differentiation is common in courseware, examples of the next type of

approach, naïve differentiation, are also very prevalent. Naïve differentiation, in which

differentiation is happening but not based on planned decisions about learning or other

student outcomes, comes about almost inadvertently in many e-learning products. Since it

is quite easy to interact with students online, it often seems that once a student does

something, the computer should do something particular in response. After all, it might

seem quite dull if no matter what the student did, the computer just proceeded to dole out

the same thing. And since it is easy enough to change content online, why not throw in a

randomizing factor in math graphs or do a shuffle on the graphics that appear within a

reading text? The point here is not whether a random generator makes the change or the

designer hard codes it in for certain interactions, but that in naïve differentiation, by

definition, what the student did and what the computer does in response has no direct tie

from the perspective of learning theory or instructional design.

An example of this is use of the random image JavaScript in e-learning web

interfaces. The script is placed just below the body tag on web pages and gives a learner a

different random graphic each time they visit the page. Another example of

randomization of this type is in rollover effects. A rollover effect displays something on

the screen when the user mouses over a link or hot spot on the screen. Sometimes

rollover displays change for different users, even unintentionally, as in the example of

TortoiseSVN, a source-control client for Microsoft Windows. It assigned random

graphics to rollovers, depending on versions of the software in use.
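
To show how thin the logic can be in such cases, the snippet below sketches (in Python) the kind of unplanned shuffling described here: a graphic is chosen at random on each visit, with no connection to any learner data or learning model. The file names are invented for illustration; this is not the actual script or product mentioned above.

import random

# A sketch of naive differentiation: what each learner sees varies, but the
# variation carries no instructional meaning. File names are invented examples.

BANNER_IMAGES = ["astronomy.png", "volcano.png", "microscope.png", "robot.png"]

def banner_for_this_visit() -> str:
    """Pick a graphic at random; nothing about the learner informs the choice."""
    return random.choice(BANNER_IMAGES)

print(banner_for_this_visit())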

These types of randomization can be contrasted with the SAT Words College e-learning

software, which helps prepare students for the verbal SAT. It features a screensaver that

randomly pronounces and defines words. However, the random selection is within a

range of words aligned with the learning constructs, so in this way the randomization has

an embedded planned learning strategy, and is a type of diffuse strategy.

Though diffuse and self-directed strategies can be quite consistent with the improved-

learning objectives of differentiated instruction, it is harder to make the case for naïve

differentiation. It is possible to argue for gains in motivation and engagement as learning

displays change, based on reducing monotony factors. But the monotony is for the

instructional designer, who sees all the screens, or the instructor, who sees numerous

students. If the same student only sees one of the displays, it cannot be considered

monotonous for that student, unless the content was simply dull in the first place. So

simply shuffling content around with no intentional plan may not be effective

differentiation, in this single screen context.

It is also important to make the distinction between diffuse and naïve differentiation

strategies. In the diffuse strategy, different approaches for making sense of ideas are

intentionally “diffused” throughout the content, such that students may encounter a video

teaching segment, followed by an audio activity, and so forth. But in a shuffle version of

naïve differentiation, one student might receive just the video version and another just the

audio, with neither self-directed nor planned directed learning approaches (see below in

the Boolean and model-based strategies) underlying the choice for who gets what. It often

makes for intriguing technology effects in demos and marketing materials, but the

learning justification for naïve differentiation can be weak.

The fourth strategy for customization is Boolean differentiation. The word “Boolean”

used here describes the logic that computers can use to determine if a statement is true or

false. There are four so-called main Boolean operators: AND, NOT, OR and XOR. For

example, taking the case of the AND operator, if an assessment about a student is found

true — say they can add two numbers — AND (note the Boolean operator) another

aspect is also true — they can also subtract two numbers — then maybe the student is

ready for something else, say multiplication. The same is true for NOT and OR, and XOR

means that either x or y is true, but not both at the same time.
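
To make the rule logic concrete, the sketch below (in Python) expresses the addition-and-subtraction example as code. It is a minimal hypothetical illustration: the skill names, the mastery cutoff, and the routing messages are invented for the example and are not drawn from any particular product.

# A minimal sketch of Boolean, rule-based differentiation following the
# addition/subtraction example above. Skills, cutoff, and routing are hypothetical.

def mastered(scores: dict, skill: str, threshold: float = 0.8) -> bool:
    """Treat a skill as 'true' if the observed proportion correct meets a cutoff."""
    return scores.get(skill, 0.0) >= threshold

def next_activity(scores: dict) -> str:
    """Apply simple AND / NOT rules to decide what content to serve next."""
    can_add = mastered(scores, "addition")
    can_subtract = mastered(scores, "subtraction")
    if can_add and can_subtract:        # AND: both prerequisites are true
        return "introduce multiplication"
    if can_add and not can_subtract:    # AND NOT: send back for remediation
        return "review subtraction before moving on"
    return "continue addition practice"

# Example use with assessment data collected so far for one learner.
print(next_activity({"addition": 0.9, "subtraction": 0.55}))
# -> "review subtraction before moving on"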

Operators and sequences of operators can create quite complex Boolean logic with

different decision methods. Problem solving and planning agents are some examples of

software programs written to find sequences of actions that lead to desirable states for

differentiated learning (Russell and Norvig, 1995). Such agents contain ways to represent

learning goals, learner knowledge states and possible actions that can be taken by the

courseware. This information is then used to generate plans for roll-out of differentiated

content online. When making decisions, agents may draw on “bug bases,” or databases of

learning misconceptions in a particular context, or algorithms that consider uncertainty,

such as the probability of encountering a likely or unlikely event. But the idea across

logical rule-based systems that employ Boolean-type operators is that a set of rules has

been devised, often by very carefully studying the learning pathways and attributes of

many students. So depending how the student performs and what assessment data are

collected — they can do that and NOT this or that AND this — the computer takes some

particular action for each student.

These rule-based Boolean methods make up some of the oldest forms of e-diff. The

simplest forms look like a checklist of learning objectives. Students go down the list and

complete the objectives. If they successfully complete 1 AND 2, they go on to 3, for

instance. But with 1 and NOT 2, maybe the student is redirected to 2A, or given some

additional feedback or other learning intervention that passing students don’t get. Rule-

based methods can take much more elaborate forms, and have been used to describe in

very fine-grained ways the multitude of conceptions and misconceptions that students

hold in certain subject matter areas, and what to do about them. Intricate decision trees

can dispatch students along learning paths, and analyze when they are ready to hop to

another “limb” or the next “tree” altogether. But the challenges in this are apparent. Just

imagine an area of learning with which you are familiar and picture coming up with the

myriad of rules one would need to describe a multitude of pertinent learning factors. It is

a daunting task fraught with complications, especially when multiple ways of knowing or

non-codified outcomes are possible or encouraged in the learning space. So though in use

for more than 30 years and part of some intriguing and effective products, the elaborate

rule-based forms that go beyond simple checklists have gained limited market share in e-

learning products over the years.

An example of this type of complex Boolean differentiation based on careful

development of learning rules is Quantum Tutors, developed by Quantum Simulations,

Inc, and designed for students from middle school through college to improve their

knowledge and appreciation for the sciences. They are Internet-delivered with a text-

based, dialogue-driven interface. The system uses rule-based methods that model how an

expert would perform a task so that the students can observe and build a conceptual

model of the processes that are required. The approach also involves coaching that

consists of observing students while they carry out the task and offering hints,

scaffolding, and feedback.

The final form of e-diff to be mentioned here, model-based, is actually a large family

of approaches that will be grouped together here for the sake of discussion. Some of the

approaches are among the newer e-diff forms and some have been around for some time.

Most use some form of expert opinion, including teachers and other subject matter

experts, combined with types of data mining to generate ideas about how content might

be differentiated.

Note that in the world of statistics, the term “data mining” can be defined differently

than in computer sciences. The term data mining in statistics can mean that numerous

statistical models have been applied to analyze a single data set, trolling for some

significant result. The problem with this is that statistics itself predicts that if you run enough

different kinds of tests on a given set of data, something is bound to look significant just

by random chance. Statisticians handle this by discounting the significance, or making a

more conservative estimate of significance, based on the number of tests that were run.

The term data mining in computer sciences product development doesn’t mean this,

and isn’t intended to imply that numerous statistical tests have been used on a single data

set. In its most general form, data mining in the computer science context simply means

that the data has been examined for trends that may be of use in some kind of prediction,

or inference.

Common data mining techniques include a variety of linear regression and

Gaussian statistical models, Bayesian networks, artificial neural networks and item

response models. These models and their differences and similarities can get quite

technically complicated to consider. Below is a brief description of some of these

techniques (Scalise and Wilson, 2006, Scalise, 2004):

Linear regression and Gaussian methods are based on using statistical

analysis of student learning data in a particular area to show how to

"weight" evidence from observations of student work to make inferences

about what might help particular students. Linear models assume that

there is a linear relationship between learning variables in the model – as

one variable goes up, another responds up or down in a fairly predictable

linear fashion for a given student. Gaussian methods make assumptions

about normal distributions for the data.

Bayesian networks use some observed measures of student performance

combined with conditional probabilities on other related measures to infer

the probability of unobserved learning outcomes. Bayesian networks

represent beliefs about student proficiencies as a joint probability

distribution over proficiency variables identified by the courseware

developers. The Bayesian network diagram, which is constructed by the

developer of the assessment, embodies these beliefs, and Bayesian

statistical estimation is used to determine probabilities that drive

differentiation decisions.

Artificial neural networks are models that attempt to loosely mimic the

massive parallel processing that occurs in the brain (Harvey, 2003). Real

neurons can be thought of as collecting data from their environment and

passing information along, or transmitting it, to other neurons. Receiving

neurons accumulate the signals — add them up — over some period of time

until a threshold is met and a decision is triggered. In a similar way, e-

learning products can accumulate evidence on students until some

threshold is met specifying that a differentiation action should be taken.

Item response models are mathematically the same as one type of neural

network, in which the function that specifies the threshold activity is

sigmoidal, or s-shaped, so that the probability of an inference can be better

taken into consideration. However, item response models originate out of

a different body of research and are extensively used in psychology and

educational testing, including as a primary basis for computer-adaptive

testing, so are usually considered separately from artificial neural

networks.

These methods may be combined in different ways, and may include both

quantitative and qualitative data to make interpretive or generative predictions about

student learning, which can be compared to expert opinion.
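
As one concrete illustration from the model-based family, the sketch below (in Python) uses a simple item response (Rasch-type) model, with the sigmoidal success probability described above, to decide whether a learner should get a hint, the planned item, or a more challenging item. The ability estimate, item difficulty, and decision cutoffs are hypothetical values chosen only to show the logic; they are not drawn from any particular product.

import math

# A minimal sketch of model-based differentiation using a Rasch-type item
# response model. Ability, difficulty, and cutoffs are hypothetical values.

def p_success(theta: float, difficulty: float) -> float:
    """Rasch model: sigmoidal probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def choose_content(theta: float, next_item_difficulty: float) -> str:
    """Differentiate based on the predicted chance of success on the next item."""
    p = p_success(theta, next_item_difficulty)
    if p < 0.4:
        return "offer a scaffolded hint or an easier item"
    if p > 0.85:
        return "skip ahead to a more challenging item"
    return "present the item as planned"

# Example: ability estimate 0.2 logits, next item difficulty 1.0 logits.
print(round(p_success(0.2, 1.0), 2))   # about 0.31
print(choose_content(0.2, 1.0))        # -> hint or easier item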

We take up here a few examples of model-based differentiation in e-learning

products. Bayesian networks can be a popular approach because they seem to combine

the best of several worlds: the ability to capitalize on the expertise of content experts in

the form of the structure of the network, and the advantage of being able to rapidly

update differentiation decisions based on empirical data. Furthermore, network structures

can be highly complex and unconstrained, allowing e-learning content architectures that

seem to better fit real-world data (that is, to be more authentic). However, a major drawback is that

a strong theoretical basis usually is required to justify the strong assumptions in the

model, for specifying conditional independence and parent/child node relationships in

Bayes nets. Theoretically, many path diagrams could fit the data (Loehlin, 1998),

especially as the networks become complex and include more than a few nodes

(variables). Yet, differentiation results are entirely dependent on the specification and

credibility of these path assumptions.
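
To see the basic mechanism behind such networks, the sketch below (in Python) works through a single Bayes-rule update for the simplest possible case: one proficiency node and one observed response. The prior and the conditional probabilities are hypothetical illustration values; real networks chain many such nodes through the conditional dependencies specified by the developers.

# A minimal sketch of the belief update underlying a Bayesian-network approach,
# reduced to one proficiency node and one observed response. Values are hypothetical.

def update_mastery(prior: float, p_correct_if_mastered: float,
                   p_correct_if_not: float, observed_correct: bool) -> float:
    """Return P(mastered | observation) by Bayes' rule."""
    if observed_correct:
        like_m, like_n = p_correct_if_mastered, p_correct_if_not
    else:
        like_m, like_n = 1 - p_correct_if_mastered, 1 - p_correct_if_not
    numerator = like_m * prior
    return numerator / (numerator + like_n * (1 - prior))

# Start with a 50/50 prior, observe one correct response, then decide.
belief = update_mastery(prior=0.5, p_correct_if_mastered=0.9,
                        p_correct_if_not=0.3, observed_correct=True)
print(round(belief, 2))                                    # 0.75
print("advance" if belief > 0.7 else "give more practice")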

An example of Bayesian networks differentiation is the Networking Performance

Skill System (NetPASS), an assessment of computer networking skills from the Cisco

Learning Institute. NetPASS is a performance-based e-learning product for assessment in

which students encounter simulations and live interactions in computer network design,

implementation and troubleshooting. It uses Bayesian net probability estimations to

assign a probability measure of what a student knows and has yet to learn.

An example of item response modeling differentiation is the Full Option Science

System (FOSS) Self-assessment System. FOSS is a hands-on approach to teaching

science that uses kits and materials to bring inquiry-based science education into

classrooms. It delivers curricular materials to nearly 1 million K-8 students worldwide.

The FOSS Self-assessment System is an e-learning product that has been used to provide

supplementary assistance to the FOSS science curriculum on force and motion. The

system customizes hints to be delivered to students as they learn physical science. An

advantage of item response modeling over Bayesian networks is that the theoretical

model can be better checked against the empirical data, and revised if the model does not

fit well with what student learning patterns really look like in a particular area.

With model-based approaches like this, the question often is which model to use, and

why. Also crucial in the case of e-learning is whether the model really is doing an

appropriate job of saying something credible about students. Such data mining

approaches can be faster and easier than deriving complex rule-based forms, and they can

deliver effects just as cool as naïve differentiation, with better substantiation for potential

learning gains from the e-diff content. They also can offer more planned control in the

instructional design than self-direction, with less of a scatter-shot approach than diffuse

differentiation. And students can still shuffle their way through content in whatever ways

the instructional designer thinks might work best, using the evidence of the data to help

fine-tune learning theories. But one difference is that the models can allow the e-learning

designer to compare model predictions to actual student learning data. This can help

developers better understand which kinds of content might be a good bet, under particular

circumstances, given what the data are saying about how students are learning in a

particular area.

For model-based approaches, some principles to be considered include establishing a

developmental perspective of student learning for which the models can then be used,

clear alignment with the goals of instruction, valid and reliable evidence of what students

know and can do, and, ultimately, information and learning products that are useful to

teachers and students to improve learning outcomes (Scalise et al., 2006a, Scalise and

Wilson, 2006).

6 Effective Strategies: A Note on Strengths and Weaknesses in e-Diff

E-learning product developers and users or learners purchasing products often

want detailed information on which differentiation approaches are best to employ and

what will most impact learning outcomes. Some strengths and weaknesses of particular e-

diff approaches have been described in the sections above, as each technique was

discussed. Since this is a fairly new and rapidly emerging field, limited research is

available to answer comparative outcome questions more fully. Essentially no large-scale

randomized controlled trials are available. Although many small studies have been done

on particular products in specific situations, larger meta-analysis studies that could help

to combine these results also have not been completed. Since the research base for

differentiation in e-learning is still limited, employing the much better researched

principles of differentiated instruction generally is advisable (Tomlinson and McTighe,

2006, Willis and Mann, 2000, Tomlinson and Allan, 2000, Tomlinson, 2001, Sizer, 2001,

Reis et al., 1988, Hall, 2002). It can also be expected that more research will become

available specifically in e-learning, as the field grows. A summary of some differentiated

e-learning research is available at www.ekgarden.com, which profiles differentiated e-

learning products and approaches.

7 Conclusion: What’s the Take-Away on e-Diff?

Differentiated instruction is an approach to teaching that acknowledges people

have multiple paths for learning and for making sense of ideas. It is based on the premise

that students come to learning with different backgrounds, preferences and needs, and

that instructional approaches that take this into account may make a difference in

learning outcomes. In e-learning, differentiated instruction has the same meaning as in

traditional instruction, but different tools are available to help students learn and to

provide information in ways most appropriate to them, including types of new media

inclusion, levels of interactivity, response actions, and enhanced ability to collect data on

the fly and to deliver custom content. This paper discussed what the tools of e-learning

contribute to differentiated instruction and shared a framework for five common

approaches to differentiated e-learning: diffuse, self-directed, naïve, Boolean, and model-

based. The framework introduced here considers one way in which the various

approaches can be categorized, based on what types of decision-making and evidence are

used to establish the differentiation choices.

If instructional decisions are being made based on the differentiation approach,

this can have substantial consequences for the learner. This can be true whether the

information and decisions are used formatively to guide the instructional process, or

summatively to make a judgment about learning that results in a consequence for

students, such as course placement or feed forward reporting to teachers. In the e-learning

context, some types of differentiation are faster and easier to implement, so it is important

that differentiation is well done, just as is true in the classroom-based context.

From the point of view of technologists developing and using e-learning products,

one important factor is simply to explicitly consider when differentiation is being

introduced into products and to be knowledgeable about what some of the learning

considerations might be. Design of products can be informed by identifying desired

elements of differentiation in content, process, product, affect or learning environment

within a product’s goals. Whether the differentiation is in response to student needs of

readiness, interest and learning profile may affect some of the choices of what

differentiation logic is best in a given e-learning context or product. The five types of

diffuse, self, naïve, Boolean and model-based logic may be desirable in different

contexts, depending on what the learning goals are.

It is also important that product developers begin to release sufficient information

and evidence on the differentiation logic in their e-learning products. Teachers,

instructors, district officials, workplace professional development groups and others in

charge of adopting or instructing through e-learning resources should have enough

information available to be informed consumers in this area. Marketing brochures often

don’t provide enough information to evaluate some of the differentiation approaches in

these increasingly sophisticated products. Instructors and consumers need to have enough

information available to evaluate how well the products work, and what is actually being

done within the instructional design to differentiate for learners, when differentiation is

employed in e-learning. If such inferences about students are a black box to instructors

and those responsible for adopting curricular materials, this may undermine some of the

potential for differentiation to be useful in the instructional process and may constrain its

potential as a valuable addition to e-learning products.

References

BENNETT, R. E., MORLEY, M. & QUARDT, D. (2000) Three Response Types for Broadening the Conception of Mathematical Problem Solving in Computerized-Adaptive Tests. National Council on Measurement in Education. San Diego, CA.

BLACK, P., HARRISON, C., LEE, C., MARSHALL, B. & WILIAM, D. (2002) Working Inside the Black Box: Assessment for Learning in the Classroom, London, King's College.

BLACK, P. & WILIAM, D. (1998) Inside the Black Box: Raising Standards through Classroom Assessment. Phi Delta Kappan, 80, 139-148.

BOYATZIS, R. E. & KOLB, D. A. (1991) Assessing Individuality in learning: The learning skills profile. Educational Psychology, 11, 279-295.

CURRY, L. (1990) One critique of the research on learning styles. Educational Leadership, 48, 50-56.

DUNN, R., DUNN, K. & PRICE, G. E. (1984) Learning style inventory, Lawrence, KS, USA, Price Systems.

GARDNER, H. (1999) Intelligence Reframed: Multiple Intelligences for the 21st Century, New York, Basic Books.

GIFFORD, B. R. (2001) Transformational Instructional Materials, Settings and Economics. The Case for the Distributed Learning Workshop. Minneapolis, MN, The Distributed Learning Workshop.

HALL, T. (2002) Differentiated instruction. Wakefield, MA, National Center on Accessing the General Curriculum.

HARVEY, C. R. (2003) Campbell R. Harvey's Hypertextual Finance Glossary.

HOPKINS, D. (2004) Assessment for personalised learning: The quiet revolution. Perspectives on Pupil Assessment, New Relationships: Teaching, Learning and Accountability, General Teaching Council Conference. London, England.

KENNEDY, C. A., SCALISE, K., BERNBAUM, D. J., TIMMS, M. J., HARRELL, S. V. & BURMESTER, K. (2007) A Framework for Designing and Evaluating Interactive E-Learning Products. 2007 AERA Annual Meeting: The World of Educational Quality. Chicago.

LOEHLIN, J. C. (1998) Latent Variable Models, Mahwah, NJ, Erlbaum.

LOVELACE, M. K. (2005) Meta-Analysis of Experimental Research Based on the Dunn and Dunn Model. Journal of Educational Research, 98, 176-183.

NATIONAL LEADERSHIP INSTITUTE (2005) Toolkit 2005 @ Virtual Learning. State Educational Technology Directors Association (SETDA).

NIELSEN, J. (1998) Jakob Nielsen's Alertbox for October 4, 1998: Personalization is Over-Rated.

PARSHALL, C. G., SPRAY, J., KALOHN, J. & DAVEY, T. (2002) Issues in Innovative Item Types. Practical Considerations in Computer-Based Testing. New York, Springer.

PARSHALL, C. G., STEWART, R. & RITTER, J. (1996) Innovations: Sound, Graphics, and Alternative Response Modes. National Council on Measurement in Education. New York.

REIS, S. M., KAPLAN, S. N., TOMLINSON, C. A., WESTBERG, K. L., CALLAHAN, C. M. & COOPER, C. R. (1988) How the brain learns, A response: Equal does not mean identical. Educational Leadership, 56.

RESNICK, L. B. & RESNICK, D. P. (1992) Assessing the Thinking Curriculum: New Tools for Educational Reform. IN GIFFORD, B. R. & O'CONNOR, M. C. (Eds.) Changing Assessments: Alternative Views of Aptitude, Achievement and Instruction. Boston, MA, Kluwer Academic Publishers.

RUSSELL, S. & NORVIG, P. (1995) Artificial Intelligence: A Modern Approach, Upper Saddle River, NJ, Prentice Hall.

SCALISE, K. (2004) BEAR CAT: Toward a Theoretical Basis for Dynamically Driven Content in Computer-Mediated Environments. Graduate School of Education, Quantitative Measurement & Evaluation. Dissertation, University of California, Berkeley.

SCALISE, K. (2005) Data Driven Content in e-Learning: Integrating Instruction and Assessment. American Association for Higher Education (AAHE) Virtual Assessment Conference, Strand 1: Promoting an Institutional Culture of Integrated Assessment. http://www.aahe.org.

SCALISE, K., BERNBAUM, D. J., TIMMS, M. J., HARRELL, S. V., BURMESTER, K., KENNEDY, C. A. & WILSON, M. (2006a) Assessment for e-Learning: Case studies of an emerging field. 13th International Objective Measurement Workshop. Berkeley, CA.

SCALISE, K., BERNBAUM, D. J., TIMMS, M. J., HARRELL, S. V., BURMESTER, K., KENNEDY, C. A. & WILSON, M. (2006b) BEAR and CAESL Seminars — Assessment and e-Learning: Case studies of an emerging field.

SCALISE, K., BERNBAUM, D. J., TIMMS, M. J., HARRELL, S. V., BURMESTER, K., KENNEDY, C. A. & WILSON, M. (pending) Adaptive Technology for e-Learning: Principles and Case Studies of an Emerging Field. Journal of the American Society for Information Science and Technology.

SCALISE, K. & GIFFORD, B. R. (2006) Computer-Based Assessment in E-Learning: A Framework for Constructing "Intermediate Constraint" Questions and Tasks for Technology Platforms. Journal of Teaching, Learning and Assessment, 4.

SCALISE, K. & WILSON, M. (2006) Analysis and Comparison of Automated Scoring Approaches: Addressing Evidence-Based Assessment Principles. IN WILLIAMSON, D. M., BEJAR, I. J. & MISLEVY, R. J. (Eds.) Automated Scoring of Complex Tasks in Computer Based Testing. Mahwah, NJ, Lawrence Erlbaum Associates, Inc.

SIZER, T. R. (2001) No two are quite alike: Personalized learning. Educational Leadership, 57.

STAHL, S. A. (2002) Different strokes for different folks? IN ABBEDUTO, L. (Ed.) Taking sides: Clashing on controversial issues in educational psychology. Guilford, CT, USA, McGraw-Hill.

STERNBERG, R. J. (1997) Thinking Styles, New York, Cambridge University Press.

TAYLOR, C. R. (2002) E-Learning: The Second Wave. Learning Circuits.

TOMLINSON, C. A. (2001) How to differentiate instruction in mixed-ability classrooms, Alexandria, VA, ASCD.

TOMLINSON, C. A. & ALLAN, S. D. (2000) Leadership for differentiating schools and classrooms, Alexandria, VA, ASCD.

TOMLINSON, C. A. & MCTIGHE, J. (2006) Integrating Differentiated Instruction + Understanding by Design: Connecting Content and Kids, Alexandria, VA, Association for Supervision and Curriculum Development.

TRIVANTIS (2005) Present Day Custom eLearning.

TURKER, A., GÖRGÜN, I. & CONLAN, O. (2006) The Challenge of Content Creation to Facilitate Personalized E-Learning Experiences. International Journal on E-Learning, 5, 11-17.

WILLIS, S. & MANN, L. (2000) Differentiating Instruction. Curriculum Update, Education Topics, ASCD.