We!Design: A student-centred participatory methodology for the design of educational applications

George N. Triantafyllakos, George E. Palaigeorgiou and Ioannis A. Tsoukalas

George N. Triantafyllakos is a doctoral student in the Informatics Department of Aristotle University in Thessaloniki. George E. Palaigeorgiou is a researcher in the Informatics Department of Aristotle University in Thessaloniki. Ioannis A. Tsoukalas is a professor in the Informatics Department of Aristotle University in Thessaloniki. Address for correspondence: George N. Triantafyllakos, Manousogiannaki 20, 54621, Thessaloniki, Greece. Email: [email protected]. Dr George E. Palaigeorgiou, Email: [email protected]. Ioannis A. Tsoukalas, Email: [email protected]

Abstract
The development of educational applications has always been a challenging and complex issue, mainly because of the complications imposed by the cognitive and psychological aspects of student–computer interactions. This article presents a methodology, named We!Design, that confronts the complexity of educational applications development from within the participatory design framework. The methodology enables computer literate students and designers to cooperate in the design of applications that (1) enhance typical educational processes with which students have extensive experience, such as note taking or assessment, and (2) are well suited to the technological, social and cultural particularities of each educational environment. The methodology can be easily applied in real educational contexts and consists of two phases. During the first phase, students participate in short-duration design sessions where they formulate needs, tasks and interface prototypes for the educational application under examination. In the second phase, the designers systematically analyse and then integrate student suggestions. In order to evaluate the methodology, it was applied in the design of two educational applications: an electronic assessment environment and a course website. A total of 86 undergraduate informatics students participated in 22 four-hour design sessions. The methodology was evaluated by collecting students’ responses through questionnaires and by introspection on the video recordings of the design sessions. The design sessions proved to be a very intriguing experience for the students, while the methodology’s products managed to respond to their personal needs and expectations in an efficient and effective way.

British Journal of Educational Technology Vol 39 No 1 2008 125–139
doi:10.1111/j.1467-8535.2007.00740.x

© 2007 The Authors. Journal compilation © 2007 British Educational Communications and Technology Agency. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA.


Introduction
Traditionally, software design is regarded as a twofold process of identifying the users’ needs and finding an optimal way of fulfilling them. Customary user-centred software design approaches are quite explicit about assigning responsibilities: users should supply the needs and designers the solutions. Alternatively, researchers and software designers that endorse participatory design (PD) approaches suggest that users are capable of playing a more active role in the design process. They claim that users have the necessary knowledge and skills to participate in decision-making processes regarding technology products that concern them, and that they can facilitate the creation of more usable and satisfying software products.

Several researchers and practitioners have used PD methodologies in the design of educational software, with students as participants, as a means of confronting the challenging issues that arise when designing such applications. For instance, Druin et al (1997) reported on the success of children’s participation in the design of KidPad, a learning environment for children, and underscored their ability to contribute to the design of new learning technologies as equal design partners. In addition, Roda (2004) described the way in which a multidisciplinary team of students managed to create a digital gallery of artwork. Students’ participation seems to have had encouraging side effects as well. A number of case studies show that students gain valuable technological and domain-specific knowledge from their participation (Facer & Williamson, 2004; Greenbaum, 1993; Kafai, Carter-Ching & Marshall, 1997) and, at the same time, they develop basic social skills, such as the ability to cooperate with an interdisciplinary team (Roda, 2004) and the ability to respect their fellow students’ judgments and beliefs (Druin, 1999).

Thus, there are strong indications that students can and should play a prominent role in the design process. However, most of the existing participatory design methodologies have exacting requirements. For instance, several of them demand long-term cooperation between students and designers, making the process of involving and motivating students difficult. Moreover, most methodologies are oriented towards the design of the learning content of the educational applications. They are less geared towards the design of applications that support content-independent educational processes, such as note taking or various forms of assessment. Finally, they are aimed at the participation of students with average technological knowledge, and do not exploit the design competencies of more highly computer-literate students who share the same learning needs as the rest.

With these facts in mind, we have selected and compiled a set of techniques aimed at the development of a novel student-centred participatory design methodology that can be easily applied in real educational environments. The methodology, named We!Design, enables the design of applications that enhance common educational tasks with which students have extensive experience and that are adapted to the various sociotechnical settings of each educational environment. In the following sections, we describe the We!Design methodology and present its evaluation after being applied in the development of two applications, a course website and an electronic assessment system.

The We!Design methodology
The We!Design methodology’s main hypothesis is that students, as a result of their extensive experience with common educational tasks, such as information gathering, (1) are able to easily recall, state and elaborate on their prior problems and needs, (2) have unconsciously or deliberately thought of and formed solutions and proposals concerning those educational processes, (3) are willing to collaborate with their colleagues on engineering joint solutions to their problems and, consequently, (4) may produce numerous diverse ideas for the construction of prototypes in a short amount of time. The methodology necessitates that participating students have hands-on experience with interactive interface objects (Ehn & Kyng, 1991), so that they can efficiently transform their ideas into prototype interfaces. Although many commercial and research applications strive to support and enhance similar educational processes, they are incapable of taking into account the technological, social and cultural particularities of each educational context, such as its social organization (eg, collaboration or schooling practices), the students’ computer knowledge or the technical infrastructure of the learning environment (eg, computer availability or network infrastructure). The We!Design methodology’s objective is to enable the design of applications that respond accurately to those distinctive conditions of diverse educational environments.

The We!Design methodology consists of two phases (see Figure 1). Initially, a number of design sessions take place with the participation of small groups of students. In each session, the students work cooperatively in order to: (1) form, externalise and discuss their needs and problems concerning the educational process in hand, (2) suggest and formulate ways of satisfying their needs and overcoming their problems, and (3) design a low-tech prototype which addresses their requirements. During the next phase, the designers systematically analyse each session’s products in order to synthesise a single application.

Figure 1: We!Design methodology

Following the paradigm of a communicative approach to design (Visscher-Voerman & Gustafson, 2004), the methodology assigns students a primary role in the design process, confronting design dilemmas through discussion and democratic decision making. The methodology differs from other participatory design approaches in its demand for several iterations of the same brief and structured process while employing different students as participants in each repetition. This makes the methodology fit for design circumstances where wide-ranging perspectives on the design problem are essential, time barriers are restricting and participants’ long-term involvement is not feasible. Thus, it can become an integrated part of the everyday reality of an educational institution, in a way that does not disrupt students’ primary academic activities. In the following section we describe in more detail the methodology’s different phases.

The methodology’s two phases
Phase 1: design sessions with students
Each design session requires the participation of three or four students and two coordinators. The small number of students minimises the number of possible conflicts between them and reduces the time required for establishing a friendly and informal atmosphere. The first coordinator must have an intimate understanding of the educational process under examination, and his responsibility is to ensure that the students’ design decisions do not run contrary to commonly accepted pedagogical norms and instructors’ interests. The second coordinator is a human–computer interaction (HCI) expert, whose role is to preserve the usability of the prototype designed (Bødker, Greenbaum & Kyng, 1991; Muller, 1991; Naslund, 1997). Both coordinators have to ensure that each student has the same chances as everyone else to initiate a design activity (Guha et al, 2005; Muller, 1991), and that all the suggestions are carefully and equally considered (Guha et al, 2004; Muller, 1991). Additionally, they should not interfere with students’ decision-making processes.

All participants sit around a table. A whiteboard is used for the design of a low-tech prototype of the application during the last stage of the methodology. A video recorder creates a setting-oriented record of the design surface (Suchman & Trigg, 1991) and provides a detailed documentation of the design process.

At the beginning of each session, the coordinators give a concise description of the methodology and try to ensure that students appreciate the significance of their participation (Hanna, Risden & Alexander, 1997). The domain expert introduces the educational procedure under consideration and describes circumstances and examples that can stimulate students to recall similar experiences (Nesset & Large, 2004; Svanæs & Seland, 2004). Then, he asks the students to build a set of needs, either based on their prior experiences, strategies, problems and goals or by envisioning relevant facilities that could be provided by using electronic means. Students present those to their fellow students in terms of scenarios (see examples in Table 1). The domain expert may ask for clarifications of the suggested needs, so as to ensure that they are unambiguously described, while the rest of the students can elaborate on them, ask questions or suggest relevant needs. The description of the needs should not refer to the way in which the needs will be satisfied (Go & Carroll, 2004), because that would shift the focus of the discussion to technical issues. The commonly accepted needs are recorded and, at the end of the first stage, the students evaluate the importance of each one on a 5-point Likert scale. In that way, students become acquainted with each other’s needs (Muller, 1992) and agree upon the design priorities to be followed in the rest of the design session. A short break follows, in which the suggested needs are sorted based on student rankings.
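The end-of-stage prioritisation described above amounts to sorting needs by mean rating. A minimal sketch in Python, with hypothetical need descriptions and ratings that are not taken from the study’s sessions:

```python
# Sketch of the first-stage prioritisation: every commonly accepted need
# is rated by each participant on a 5-point Likert scale, and during the
# break the needs are sorted by mean rating. Data are illustrative.

def sort_needs_by_importance(ratings):
    """ratings maps a need description to the list of 1-5 student ratings."""
    def mean(scores):
        return sum(scores) / len(scores)
    return sorted(ratings, key=lambda need: mean(ratings[need]), reverse=True)

session_ratings = {
    "mark questions I am uncertain about": [5, 4, 5, 4],
    "change the font size of the questions": [2, 3, 2, 3],
    "see how much time is left": [4, 4, 5, 4],
}
priorities = sort_needs_by_importance(session_ratings)  # highest-rated first
```

With these illustrative ratings, the need to mark uncertain questions (mean 4.5) would head the list and set the design priorities for the rest of the session.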

At the second stage, students are asked to describe task sequences: exact sequences of user actions and system responses that can satisfy each previously stated need. The task sequences should contain the analysis of all physical and computational objects involved in those interactions (see examples in Table 1). Afterwards, the students are asked to agree upon one commonly accepted task sequence for each need through a process of mutual agreement and compromise. The second stage ends when all the needs suggested have been examined or when the stage exceeds its prearranged duration (approximately 1 hour and 20 minutes). However, since the needs are ordered according to their importance, the most significant ones are always considered.

Table 1: Students’ needs and task sequences examples

Project 1
Example of a student’s need: During the exams, I reach a question whose answer I am not sure of. I want to be able to mark this question (whether I answer it or not), so that it is easy for me to return and answer it later.
Corresponding task sequence: Next to each question, there is a small check box that I can check when I want to mark a question:
1. If I want, I can give a temporary answer and then click on the check box.
2. The colour of the question changes.
3. The question is copied in a list of ‘uncertain’ questions.
4. If I change my mind, I uncheck the box and the question is removed from the list of ‘uncertain’ questions.

Project 2
Example of a student’s need: For several reasons, I can’t visit the course websites as often as I should. I would like the system to inform me dynamically whenever needed (eg, about forthcoming deadlines, announcements).
Corresponding task sequence: In the set-up page of the course website:
1. I select the way in which I want to be informed (via email or via SMS).
2. I choose when I want to be informed (of new announcements, of learning content update, of deadlines, etc).
3. I submit my email or mobile number.
4. If I change my mind, I can unsubscribe from the notification service.

In the third and final stage of the design sessions, students design a low-tech prototype application using a process similar to PICTIVE (Muller, 1991). Students are asked to portray the interactive objects of each final task sequence upon the whiteboard using coloured markers and Post-it notes. The design begins with the task sequence of the most important need and goes on from there. Each task sequence is re-evaluated against the interim state of the prototype interface, and either new interaction solutions are initiated or previously designed objects are transformed in order for the analysed tasks to be embedded in the interface prototype. Eventually, a walkthrough test is conducted; some team members role-play stereotypical students using the educational application while others record usability issues and potential revisions.

All three stages of the first phase of the methodology are closely connected, since each one uses the products of the preceding stage as its input. That helps the coordinators ensure the controlled flow of the sessions, and increases students’ reliance on the methodology’s structure.

Phase 2: synthesis of the final application
During the second phase, the designers gather and analyse the products of the design sessions in order to synthesise a single application. Initially, the suggested needs are organised based on their content, and similar needs are grouped and rewritten so that overlapping is avoided. Each need is re-evaluated in accordance with the number of sessions in which it appeared, and students’ average assessment of its importance. The needs are sorted based on their new rankings, and the most important needs are selected in order to constrain the design space. The designers compile the diverse task sequences of each final need into one task sequence, analyse the students’ prototypes in order to detect common interactive objects and workspaces, and progressively build the final application.
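The cross-session re-evaluation described above can be sketched as follows. The paper does not specify an exact formula, so ordering needs first by the number of sessions in which they appeared and then by the students’ mean importance rating is an assumption, and all names are illustrative:

```python
# Hypothetical sketch of the Phase 2 need re-ranking: grouped needs are
# scored by how many sessions they appeared in and by the students' mean
# importance rating pooled across those sessions. The exact weighting is
# not specified in the methodology; this ordering is an assumption.

def rerank_needs(sessions):
    """sessions: one dict per design session, mapping need -> ratings."""
    appearances = {}  # need -> number of sessions it appeared in
    ratings = {}      # need -> ratings pooled across sessions
    for session in sessions:
        for need, scores in session.items():
            appearances[need] = appearances.get(need, 0) + 1
            ratings.setdefault(need, []).extend(scores)

    def rank_key(need):
        mean_rating = sum(ratings[need]) / len(ratings[need])
        return (appearances[need], mean_rating)

    # Needs raised in more sessions come first; mean rating breaks ties.
    return sorted(appearances, key=rank_key, reverse=True)
```

Under this assumption, a need voiced in two sessions outranks an equally rated need voiced in only one, which matches the intuition that recurring needs should constrain the design space first.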

However, the synthesis of the various students’ task sequences and prototypes stems from the HCI expert’s subjective interpretation of the students’ proposals. Although the methodology ensures that different HCI experts would suggest similar features for the final application, it is also probable that they would propose slightly different interfaces for the application. Finally, the application produced is once again presented to the students in order to pinpoint weaknesses or flaws.

Research methodology
In order to assess the We!Design methodology, we applied it in the design of two applications: an electronic assessment system and a course website. For the first application, we conducted 10 design sessions with the participation of 40 undergraduate students of an informatics department. Eighteen students were female (45%) and 22 were male (55%). For the design of the course website, we conducted 12 design sessions with another 46 undergraduate students of the same department. Twenty-one students were female (46%) and 25 were male (54%). All students were in the 3rd or 4th year of their studies and had limited experience in software engineering. Thus, they could be considered as merely experienced computer users. Students’ registration in the design sessions was voluntary and was conducted through the use of a web-based registration system; hence, group synthesis was not controlled. Twenty sessions were conducted with the participation of four students and two sessions with three students.

Students were asked to complete two questionnaires at the end of each session and another one after the design of the final application of each project. The first two questionnaires aimed to examine students’ beliefs about the methodology (satisfaction with their participation, satisfaction with the methodology outcomes, the coordinators’ influence on the final products, freedom of expression during the design process, discomfort with the video camera) and the perceived usefulness and ease of use of each session’s interface prototype. The final questionnaire, which was concerned with ease of use and perceived usefulness, was used for the evaluation of the final application’s interface prototype. All questionnaires contained responses on 5-point Likert scales (see Appendix). Besides the questionnaires, the researchers examined the video recordings thoroughly in order to identify issues concerning students’ problems and inefficiencies in the methodology.

Implementing the methodology
Each project lasted for approximately 8–9 weeks; the design sessions were conducted in the first 3 weeks, while in the remaining weeks the designers analysed the sessions’ products, synthesised the requirements and built the mock-up application. The sessions’ products and the resulting applications from both projects can be found at the URL address: http://ierg.csd.auth.gr/We!Design.

Descriptive statistics from the realisation of each design session for both applications are shown in Table 2. The mean duration of each design session for both projects was approximately 3.30 hours, while the duration of each stage of the design sessions varied considerably. The students’ personal preferences, their creativity, and their ability to synthesise different views and resolve conflicts affected the duration of the first two stages. The low-tech prototyping stage was the shortest one on all occasions because it merely represented design decisions already made concerning the application requirements; it lacked any significant exploration of the design space.

Students’ attitude
In order to evaluate student responses in the questionnaires, the scores of the negatively formulated questions were reversed so as to correlate higher values with a more positive attitude. Students evaluated the methodology as a very satisfying experience, and their unofficial comments at the end of the design sessions reconfirmed that claim: eg, ‘I wish we had more opportunities to participate in such activities’, ‘It was a stimulating experience’, ‘It was my best university experience’, etc. Seventy-three students (84%) claimed that they would rely more on educational software designed with the We!Design methodology, thereby indirectly expressing their confidence in its efficiency.
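The reversal of negatively formulated items mentioned above follows the standard rule for a 5-point scale: a response r becomes 6 − r. A minimal sketch:

```python
# Reverse-scoring a negatively formulated Likert item so that higher
# values always indicate a more positive attitude. On a 5-point scale,
# a response r maps to (5 + 1) - r = 6 - r.

def reverse_likert(score, points=5):
    if not 1 <= score <= points:
        raise ValueError("score outside the Likert scale")
    return (points + 1) - score
```

For example, a ‘strongly agree’ (5) on a negatively worded item such as discomfort with the video camera becomes 1 after reversal, so high means consistently indicate positive attitudes across all items.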


Table 2: Needs, duration and students’ attitude in each design session

Project 1 (electronic assessment system)
Session | Time (hours) | Satisfaction | Usefulness | Ease of use | Proposed needs | Selected needs | New needs
S1      | 3.43 | 4.50 | 4.44 | 4.70 | 24   | 10   | 10
S2      | 4.05 | 5    | 4.69 | 4.25 | 22   | 9    | 7
S3      | 3.41 | 4.88 | 4.00 | 4.30 | 28   | 17   | 5
S4      | 3.25 | 5    | 4.25 | 4.50 | 24   | 12   | 2
S5      | 3.49 | 5    | 4.31 | 3.85 | 25   | 13   | 3
S6      | 3.35 | 4.88 | 4.69 | 4.60 | 17   | 10   | 2
S7      | 3.07 | 4.38 | 4.25 | 3.90 | 23   | 12   | —
S8      | 3.21 | 4.88 | 5    | 5    | 20   | 11   | —
S9      | 3.34 | 4.88 | 5    | 4.80 | 23   | 10   | —
S10     | 3.47 | 4.38 | 3.33 | 3.86 | 16   | 7    | —
SD      | 0.25 | 0.25 | 0.50 | 0.41 | —    | —    | —
Average | 3.36 | 4.78 | 4.40 | 4.37 | 22.2 | 11.1 | —
Final   | —    | —    | 4.27 | 4.52 | 59   | 29   | 29

Project 2 (course website)
Session | Time (hours) | Satisfaction | Usefulness | Ease of use | Proposed needs | Examined needs | New needs
S1      | 3.35 | 4.75 | 4.56 | 4.35 | 23   | 15   | 15
S2      | 3.30 | 4.67 | 3.67 | 4.00 | 24   | 17   | 7
S3      | 3.05 | 4.00 | 4.25 | 4.00 | 21   | 14   | 3
S4      | 3.35 | 3.88 | 3.90 | 3.85 | 28   | 17   | 5
S5      | 3.00 | 4.00 | 3.81 | 3.90 | 20   | 14   | —
S6      | 3.25 | 4.88 | 4.81 | 4.50 | 22   | 14   | —
S7      | 3.35 | 4.88 | 4.75 | 4.55 | 17   | 10   | —
S8      | 3.15 | 4.33 | 4.17 | 4.33 | 20   | 13   | 1
S9      | 3.25 | 4.50 | 4.67 | 4.67 | 19   | 15   | —
S10     | 3.05 | 4.67 | 5    | 4.87 | 17   | 14   | 1
S11     | 3.05 | 4.50 | 4.50 | 4.55 | 17   | 13   | 1
S12     | 3.25 | 4.92 | 4.58 | 4.40 | 20   | 12   | —
SD      |      | 0.37 | 0.42 | 0.32 | —    | —    | —
Average | 3.20 | 4.50 | 4.42 | 4.33 | 20.1 | 18.8 | —
Final   | —    | —    | 4.42 | 3.97 | 53   | 33   | 33

S, session; SD, standard deviation; Time, total session time; Satisfaction, satisfaction with the methodology; New needs, contribution of needs that did not appear in prior sessions.


Students indicated that the coordinators followed the methodology guidelines. The majority of them claimed that they felt totally free when they expressed their thoughts during the sessions (M = 4.54, SD = 0.47), while most of them argued that the final product of each session was the outcome of their cooperative work (M = 4.20, SD = 0.42, reversed). A few students claimed to be discontented with the intransigence of some of their fellow students (M = 3.98, SD = 0.59, reversed), while the majority of the students assessed the coordinators’ role as noninfluential (M = 4.12, SD = 0.73) and helpful in establishing a friendly atmosphere throughout the session (M = 4.70, SD = 0.40). Only six students stated that the use of the video recorder made them anxious, and strongly objected to its use during the design sessions.

Having ensured the compliance of the design sessions with the methodology guidelines, we could then safely evaluate its products. The variables ‘usefulness’ and ‘ease of use’ in the two questionnaires gave a relatively high Cronbach’s alpha (over 0.75) in both projects. The mean values of the variables are shown in Table 2. In general, students assessed the usefulness and ease of use of both their session’s interface prototype and the corresponding final application positively. However, we distinguished three categories of cases in which the methodology or the prototype interfaces were assessed as less satisfying:

• Divergent team members (all variables relatively low): in two sessions (Project 1 > Session 10; Project 2 > Session 5), team members had contradictory views about the purpose of their participation and continuously challenged the essence of the methodology’s steps. Despite the coordinators’ efforts, those sessions were not conducted in a creative atmosphere and were not focused on the products; hence, satisfaction with both the methodology and its products was relatively low. This can also be confirmed by the small number of needs that were produced in those sessions. Consequently, there is a possibility that some sessions will not be efficient when the interests and preferences of their members do not match the methodology’s philosophy.

• Type of educational application under examination (relatively low ease of use and usefulness): the design of the course website seemed to be less intriguing, since all students had used quite a lot of similar websites in the past. This made them question their ability to produce something original or more sophisticated. In one session (Project 2 > Session 2), that perception prevailed and influenced the students’ disposition towards the products of the procedure, but not the procedure itself. In the remaining sessions, that perception was soon reversed after the students started to generate and discuss their ideas.

• Early commitment to design decisions (relatively low ease of use): because the low-tech prototyping stage was a sequential process of integrating the task flows into an interface, students had to continuously re-adapt their prior design decisions. However, they sometimes seemed reluctant to do so because of the potential burden of restructuring the whole interface (Bowers & Pycock, 1994), and thus the redesign of the interface was neglected (Project 1 > Session 5 and Session 7).
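For reference, the internal-consistency statistic reported above (Cronbach’s alpha) can be computed as follows; the response data in the sketch are illustrative, not the study’s actual questionnaire answers:

```python
# Cronbach's alpha for a multi-item questionnaire variable:
#   alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
# where k is the number of items. Population variances are used.

def cronbach_alpha(items):
    """items: one list of responses per questionnaire item, equal lengths."""
    k = len(items)
    n = len(items[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Each respondent's total score across all items.
    total_scores = [sum(item[i] for item in items) for i in range(n)]
    item_variance_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_variance_sum / variance(total_scores))
```

Two items that always agree yield alpha = 1.0, while weakly related items pull the value down; a threshold of about 0.7 is a common rule of thumb for treating a set of Likert items as one variable, which is why the values over 0.75 reported above license pooling the ‘usefulness’ and ‘ease of use’ items.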


The students were highly satisfied with the final applications as well. The evaluations of both the initial prototypes and the final application were positive. Students’ reactions towards the final application must be given added value, bearing in mind that the final application was not completely ‘their’ product.

Qualitative results
Needs analysis and assessment
Three repetitive patterns of behaviour were detected during the progress of the first stage of the design sessions. First, most of the students tended to start by proposing technical solutions and characteristics for the educational application, and did not recall needs and problems that they had dealt with in the past. Only once prompted to adopt the role of a student being assessed, or that of a student exploring a course website, did they start to comprehend the way in which their needs should have been described. Second, when students started to state their needs, some general contexts of needs emerged. Those contexts were definitive in the formation of new needs which more or less elaborated upon them. As expected, each session dealt with a subset of the issues that the final application would cope with; thus, the need for the realisation of several different sessions was evident. Third, some students stated needs that ran against commonly accepted pedagogical norms. They tried to take advantage of the design process in order to satisfy their opportunistic preferences. For example, in the assessment project, many students asked for a service which would help them ‘negotiate’ their scores by replacing difficult questions with easier ones. The domain expert’s intervention was crucial in such circumstances in order to reject those needs and to promote the principal values of the procedure.

Generally, students started to state their needs rather quickly. Unsurprisingly, many expressed the same needs: while each student wrote down about 10 needs, the average number of final needs per session was only 22.2 in the first project and 20.1 in the second. As shown in the column ‘Contribution of needs that did not appear in prior sessions’ in Table 2, no more than six design sessions for the assessment system, and seven for the course website, were sufficient for the gathering of the needs that were addressed by the final applications. Even so, all design sessions played an important role in the evaluation of the suggested needs, in the production of alternative task sequences and in the proposal of multiple and diverse interactive objects.

Task sequences generation
At this stage, students had to work cooperatively for the first time in order to synthesise their suggestions. Inevitably, some conflicts occurred during the synthesis of the task sequences. Because the task descriptions made references to interaction objects, students were compelled to negotiate their personal interaction preferences. It was rather difficult to resolve those conflicts, because students’ arguments were based on intuitive rationales, subjective experiences and empirical evaluations rather than formal knowledge of the HCI domain. At that point, it was essential for the coordinators to frame the solution as a choice among two or more alternatives and to establish a democratic approach for resolving further disagreements. Despite such conflicts, student cooperation evolved smoothly. Initially, it was necessary for the domain and HCI experts to coordinate students’ cooperation, but eventually the students took over the process. In the majority of the design sessions the coordinators’ interference was minimal.

Low-tech prototyping
The low-tech prototyping was without a doubt the most enjoyable part of the design sessions, as students left their seats to design the interface of the application. Most of the time, they worked in parallel and felt comfortable using the coloured markers to draw on the whiteboard or on A4 paper. There were times when students tried to reproduce familiar interactive objects and workspaces from well-known applications. In 16 sessions there were direct references to features of software applications such as Microsoft Office Word or Adobe Reader for the first project, or to characteristics of well-known websites for the second project. At other times they did not hesitate to introduce novel interactive objects, such as an enhanced scrollbar for navigating among multiple-choice questions in the first project.

The interfaces were not finalised and suffered from a few usability and consistency problems. This is not unusual when working with low-fidelity prototypes (Rudd, Stern & Isensee, 1996). Generally, we can claim that students’ competences in constructing interaction schemes were limited.

Synthesis of the final application
Both coordinators undertook a central role in the second phase of the methodology. The task of synthesising the overlapping needs and organising them thematically proved to be time consuming and demanding because of their sheer number (222 needs in Project 1 and 248 needs in Project 2). However, both projects highlighted the fact that students share a common ground of unrealised needs as regards established educational processes. The reordering of needs, based on their occurrences in the design sessions and the students’ rankings, was of special importance, because any effort to satisfy all needs in the application would have required the creation of a very complicated system. The HCI expert’s knowledge was invaluable during the phase of studying the task sequences that corresponded to the same needs. The HCI expert had to understand and combine the different and generally incomplete students’ suggestions into satisfying and more concrete ones. Thus, from this point forward, the final application expressed a combination of students’ suggestions together with the HCI expert’s perspective on those suggestions.
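The reordering step described above amounts to a simple prioritisation: sort needs by how many sessions raised them, breaking ties with the students’ rankings. A hypothetical sketch, assuming a session count and a mean student rank per need (the function name, need labels and numbers are ours, not the paper’s):

```python
def prioritise(occurrences, avg_rank):
    """Order needs by descending session count; ties are broken by the
    students' average ranking (lower rank = higher priority).

    occurrences: need -> number of design sessions that raised it
    avg_rank:    need -> mean position students assigned it (1 = top)
    """
    return sorted(occurrences,
                  key=lambda need: (-occurrences[need], avg_rank[need]))

# Invented example data for three needs.
occurrences = {"print exam": 9, "timer": 14, "review answers": 14}
avg_rank = {"print exam": 2.0, "timer": 1.5, "review answers": 3.1}
print(prioritise(occurrences, avg_rank))
# ['timer', 'review answers', 'print exam']
```

Cutting such an ordered list at a threshold is one plausible way to avoid the “very complicated system” that satisfying every need would produce.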

Discussion
In retrospect, the initial hypotheses that led us to the creation of the We!Design methodology were confirmed. The methodology enabled computer-literate students to collaboratively design prototypes for typical educational processes in short-duration design sessions. The cohesion of the methodology stages, the progressive exploration of the design problem space and the managed demand for student cooperation ensured the success of the overwhelming majority of the sessions. Students were committed to the methodology and characterised it as enjoyable and interesting. The sessions were fruitful, providing multiple needs and task sequences, while the students’ computer experience proved essential in the formation of interface design suggestions. The various artefacts produced, even though they were incomplete, represented a tangible interpretation of the students’ expectations. The designers systematically analysed the design sessions’ products and developed applications that were specifically tailored to the students’ learning needs, interaction preferences and prospects. Student responses indicated their overall satisfaction with the methodology and its products.

Nevertheless, design is an exploratory, rhetorical, emergent, opportunistic and reflective human activity (Cross, 1999) and, consequently, any attempt at framing it is indirectly influenced by a number of unexamined assumptions. For example, as Kroes (2002) argued, ‘the design process and the design products are so intimately related to each other that an understanding of the nature of the design process requires a more thorough insight into the nature of the product designed and vice versa’. Although some of the characteristics of the typical learning processes drove the development of our methodology, those processes differ significantly from one another. Hence, the methodology’s evaluation should involve a more detailed analysis for each of the products designed. Additionally, no information about the students’ design rationale and experience was taken into account in our analysis. Even though one could claim that students of an informatics department belong to the same community of practice as professional software designers (Bødker & Iversen, 2002) and can thus participate in their advanced language games, it was apparent that the students’ contribution was based more on their intuitive experiences than on their knowledge of end-user application design. Their profile closely resembled that of experienced users, but whether the prospective participants should be novice software designers-developers or just experienced computer users remains unanswered. Finally, another hidden drawback of our methodology was that the success of the final products was not confirmed by novice users. Experienced users externalised their needs and interaction preferences without explicitly adapting them to computer-illiterate users. Although the prototypes were minimal and the final applications were elaborated by HCI experts, those facts do not ensure their acceptance by less experienced users.

We aim to further develop and test the We!Design methodology and to collect more evidence supporting two hypotheses: that it leads to essentially better results compared with existing design methodologies, and that it is very likely to produce a successful outcome. The prerequisite is the detailed examination of the communication modus operandi that applies to the students’ interactions throughout the various stages of the methodology, and the identification of the problems that affect those interactions in conjunction with the students’ prior design competencies. This analysis will hopefully help us to further understand and improve the methodology.

The We!Design methodology has already been used in an informatics department for the design of four educational applications (an electronic assessment system, a course website, a web-based portfolio of students’ projects and an electronic note-taking tool). We envision an environment where different learning organisations will utilise participatory methodologies and techniques in order to develop or re-engineer educational applications specifically tailored to their needs. Already, many universities, schools and other educational business units develop their own software to address their learning needs, following less systematic design processes. The methodology seems adequate for such circumstances and could help them to advance the efficiency and effectiveness of their efforts.

References
Bødker, S. & Iversen, O. S. (2002). Staging a professional participatory design practice. Proceedings of the second Nordic Conference on Human–Computer Interaction (pp. 11–18). New York, NY: ACM Press.

Bødker, S., Greenbaum, J. & Kyng, M. (1991). Setting the stage for design as action. In J. Greenbaum & M. Kyng (Eds), Design at work: cooperative design of computer systems (pp. 139–154). Hillsdale, NJ: Lawrence Erlbaum Associates.

Bowers, J. & Pycock, J. (1994). Talking through design: requirements and resistance in cooperative prototyping. In B. Adelson, S. Dumais & J. S. Olson (Eds), Proceedings of CHI’94 (pp. 299–305). Reading: Addison-Wesley.

Cross, N. (1999). Natural intelligence in design. Design Studies, 20, 1, 25–39.

Druin, A. (1999). Cooperative inquiry: developing new technologies for children with children. Proceedings of Human Factors in Computing Systems (CHI’99) (pp. 592–599). Pittsburgh, PA: ACM Press.

Druin, A., Stewart, J., Proft, D., Bederson, B. B. & Hollan, J. D. (1997). KidPad: a design collaboration between children, technologists and educators. Proceedings of Human Factors in Computing Systems (CHI’97) (pp. 463–470). Atlanta, GA: ACM Press.

Ehn, P. & Kyng, M. (1991). Cardboard computers: mocking-it-up or hands-on the future. In J. Greenbaum & M. Kyng (Eds), Design at work: cooperative design of computer systems (pp. 169–195). Hillsdale, NJ: Lawrence Erlbaum Associates.

Facer, K. & Williamson, B. (2004). Designing educational technologies with users. A handbook from NESTA Futurelab. Retrieved 25 October, 2006, from http://www.futurelab.org.uk/

Go, K. & Carroll, J. M. (2004). Scenario-based task analysis. In D. Diaper & N. Stanton (Eds), The handbook of task analysis for human–computer interaction (pp. 117–134). Hillsdale, NJ: Lawrence Erlbaum Associates.

Greenbaum, J. (1993). A design of one’s own: towards participatory design in the United States. In D. Schuler & A. Namioka (Eds), Participatory design: principles and practices (pp. 27–37). Hillsdale, NJ: Lawrence Erlbaum Associates.

Guha, M. L., Druin, A., Chipman, G., Fails, J. A., Simms, S. & Farber, A. (2004). Mixing ideas: a new technique for working with young children as design partners. Proceedings of Interaction Design and Children (IDC04) (pp. 35–42). MA: ACM Press.

Guha, M. L., Druin, A., Chipman, G., Fails, J. A., Simms, S. & Farber, A. (2005). Working with young children as technology design partners. Communications of the ACM, 48, 1, 39–42.

Hanna, L., Risden, K. & Alexander, K. J. (1997). Guidelines for usability testing with children. Interactions, 4, 5, 9–14.

Kafai, Y. B., Carter-Ching, C. & Marshall, S. (1997). Children as designers of educational multimedia software. Computers & Education, 29, 2/3, 117–126.

Kroes, P. (2002). Design methodology and the nature of technical artefacts. Design Studies, 23, 3, 287–302.

Muller, M. J. (1991). PICTIVE: democratizing the dynamics of the design session. In D. Schuler & A. Namioka (Eds), Participatory design: principles and practices (pp. 211–237). Hillsdale, NJ: Lawrence Erlbaum Associates.

Muller, M. J. (1992). Retrospective on a year of participatory design using the PICTIVE technique. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 455–462). Monterey, CA.

Naslund, T. (1997). Computers in context—but in which context? In M. Kyng & L. Mathiassen (Eds), Computers and design in context (pp. 171–200). Cambridge, MA: The MIT Press.

Nesset, V. & Large, A. (2004). Children in the information technology design process. Library & Information Science Research, 26, 140–161.

Roda, C. (2004). Participatory system design as a tool for learning. Proceedings of the International Conference on Cognition and Exploratory Learning in the Digital Age (CELDA 2004) (pp. 366–372). IADIS.

Rudd, J., Stern, K. & Isensee, S. (1996). Low vs. high-fidelity prototyping debate. Interactions, 3, 1, 76–85.

Suchman, L. A. & Trigg, R. H. (1991). Understanding practice: video as a medium of reflection and design. In J. Greenbaum & M. Kyng (Eds), Design at work: cooperative design of computer systems (pp. 65–89). Hillsdale, NJ: Lawrence Erlbaum Associates.

Svanæs, D. & Seland, G. (2004). Putting the users center stage: role playing and low-fi prototyping enable end users to design mobile systems. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’04) (pp. 479–486). New York, NY: ACM Press.

Visscher-Voerman, I. & Gustafson, K. L. (2004). Paradigms in the theory and practice of education and training design. Educational Technology Research & Development, 52, 2, 69–89.

Appendix
A) Usability Evaluation Questionnaire

Usefulness

• I consider the application I designed cooperatively with my fellow students to be especially useful.
• Using the assessment tool, I could perform better during my exams. / Using the course website, I can support my learning better.
• The assessment tool doesn’t have anything more to offer than a traditional multiple-choice based examination does. / The course website doesn’t have anything more to offer than other well-known LMSs I have used.
• I would like to use the assessment tool in the final assessment examinations of more courses. / I would like to use a similar course website for all of my courses.

Ease of use

• The application is not easy to use.
• It is easy to learn how to use the application and its features.
• The interactions with the application are vague and obscure.
• A novice computer user will face difficulties using the application.
• I can easily find all the features I want in the application interface.

B) Methodology Evaluation Questionnaire

Satisfaction with the methodology

• The methodology was enjoyable.
• The methodology was interesting.


Satisfaction with participation

• I would rely more on educational software designed in this manner.
• I am satisfied with the application I co-designed with my fellow students. I feel that it responds effectively to my needs.

Coordinators’ role

• The coordinators made me anxious by asking me to speak even though I did not want to.
• The coordinators helped to establish a friendly atmosphere throughout the session.
• The coordinators could be better prepared and contribute more efficiently to the methodology.
• The coordinators interfered and directed the design process. They did not allow me to think freely.

Freedom of expression

• During cooperation with my fellow students, I feel that I held back many times. I would like them to be less intransigent towards my ideas.
• I think that some of my fellow students directed the design process and did not allow a democratic synthesis of ideas and suggestions.
• I was totally free to express my thoughts without any kind of restriction.

Video recording

• The fact that a video camera was recording the procedure made me anxious. Whenever I talked I was conscious of being recorded.
• I didn’t like the fact that a video camera was recording the procedure.
