
Educational Studies, Vol. 37, No. 5, December 2011, 503–512

ISSN 0305-5698 print/ISSN 1465-3400 online. © 2011 Taylor & Francis. DOI: 10.1080/03055698.2010.539733. http://www.informaworld.com

Electronic assessment in higher education

Roelien Brink(a)* and Geoffrey Lautenbach(b)

(a) Department of Business Information Technology, University of Johannesburg, South Africa; (b) Department of Mathematics, Science, Technology and Computer Education, University of Johannesburg, South Africa

Assessment is an important cornerstone of education. A world trend in staying abreast of the latest developments in the field of information and communication technology (ICT) has led to an increased demand for electronic assessment in education circles. The critical need and responsibility for higher education to stay on par with the latest techniques regarding assessment subsequently led the University of Johannesburg (UJ) to implement electronic assessment in some departments in 2004. Several challenges led to this exploration into the use of one e-assessment tool within the University.

Keywords: assessment; electronic assessment; e-assessment; higher education

Introduction and background to the inquiry

With technological advances and increased expectations on individuals, the use of technology in the teaching and learning process has become essential. Computer technology is continuously advancing and has impacted on teaching and learning (Shelly, Cashman, Waggoner, and Waggoner 2000). Tertiary institutions worldwide have accordingly realised the value of electronic assessment (e-assessment). Electronic assessment can be defined as a method of using information technology for any assessment-related activities (Graff 2004). Personnel shortages and the need for enhanced throughput both add to the rationale for moving towards this new method of assessment. Traditional paper-and-pencil testing is now seen as outdated compared with the latest techniques in teaching, learning and assessment (Parshall, Spray, Kalohn, and Davey 2002). Computer literacy has become a necessity, not by choice but because of the requirements and standards set by industry, where there is a need for trained and digitally literate personnel. This has evolved into an increased demand for electronic assessment for training and education (Pangali 2003).

According to Benson (2003), the principles of assessment do not change in an electronic environment. E-assessment is underpinned by the same principles of validity, flexibility and fairness, and also uses the same strategies as traditional assessment methods (Booth et al. 2002). According to Hricko and Howell (2006), one of the important considerations for effective e-assessment is to ensure that the tool incorporates these elements, fits the mode of delivery and legitimately measures the desired outcome. Both the lecturer and the student can therefore obtain maximum benefit from the electronic assessment process if there is a good relationship between learning targets, instruction and assessment.

*Corresponding author. Email: [email protected]



Electronic assessment is utilised at many universities in South Africa. The demand has also been fuelled by a growing number of students and a decrease in allotted class times. Universities using e-assessment products such as Skills Assessment Manager (SAM, a Web-based application measuring proficiency in MSOffice applications, including MSWord, MSExcel, MSPowerPoint and MSAccess; it can also measure user skills in Windows 2000, Windows XP and Internet usage) and Electric Paper (an automated testing system for the International Computer Driving Licence, with immediate and accurate evaluation, in a self-contained system consisting of software simulations, requiring no other software applications to run it) include Wits University, Nelson Mandela Metropolitan University and UNISA.

The sub-department End User Computing (EUC) at the University of Johannesburg (UJ) implemented CompAssess as an e-assessment tool in 2004. CompAssess is a tool that allows the student to work in a simulated environment for MSWord, MSExcel, MSPowerPoint and MSAccess. CompAssess facilitates the creation of customised assessments for any of the software applications mentioned above. It allows for the selection and customisation of generic built-in tests and the specification of assessment parameters such as time, question weighting and passing grades. Input of student details and the exporting and printing of reports are included (Masterskill 2006). The software logs and records the last choice made as the final answer. When submitted, the assessment is marked and the system informs the lecturer of the submission.
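To make the assessment parameters concrete, the sketch below models the kind of settings described above (time limit, question weighting, pass mark) as a small Python data structure. The class and field names are hypothetical illustrations only and do not represent CompAssess's actual interface or file format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Question:
    prompt: str          # e.g. "Insert a page break after the first paragraph"
    application: str     # "MSWord", "MSExcel", "MSPowerPoint" or "MSAccess"
    weight: float = 1.0  # relative contribution to the final mark

@dataclass
class AssessmentConfig:
    title: str
    time_limit_minutes: int   # maximum time allowed for the session
    pass_mark_percent: float  # passing grade threshold
    questions: List[Question] = field(default_factory=list)

    def total_weight(self) -> float:
        return sum(q.weight for q in self.questions)

# Example: a customised MSWord assessment with two weighted questions.
config = AssessmentConfig(
    title="EUC MSWord Test 1",
    time_limit_minutes=45,
    pass_mark_percent=50.0,
    questions=[
        Question("Insert a page break after the first paragraph", "MSWord"),
        Question("Create a table of contents from the headings", "MSWord", weight=2.0),
    ],
)
print(config.total_weight())  # 3.0
```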

The module EUC is a practical computer module which assesses the competency of students in a software application environment. Students are required to display their ability in using a computer and to demonstrate their capacity in the execution of specific tasks. Instructions are executed in a software-driven environment which guides and tests students in a Microsoft Office environment. According to Heywood (2000), the concept of authentic assessment relates to the idea of competency and performance-based learning. Stiggins (1987) states that performance assessments call upon the examinee to demonstrate specific skills and competencies, that is, to apply the skills and knowledge they have mastered. The module EUC gives students the opportunity to apply skills in a realistic simulated electronic assessment environment.

Methods

Both qualitative and quantitative methods were used in this inquiry, with the qualitative data collected first. According to Creswell (2003), a sequential exploratory design is conducted in two phases: the first phase involves qualitative methods, after which a quantitative phase is developed to test or generalise the initial results. According to Creswell (2003), if qualitative data are collected first, the intent is to explore the topic with participants at sites. Creswell also states that the second phase explores understanding of the data collected from a large number of people. In this inquiry, the qualitative data were collected and then analysed to identify variables to be explored in the second, quantitative phase of the inquiry. According to Bergman (2008), in an exploratory design the participants in the first phase of data collection are typically not the same as those in the second phase. The purpose of the second phase, in this inquiry the quantitative phase, is to generalise the results to the population. Both qualitative and quantitative phases carried the same weight and emphasis.


Qualitative research aims to understand phenomena in context-specific settings, such as a “real world setting where the researcher does not attempt to manipulate the phenomenon of interest” (Patton 2001, 39). Both permanent and part-time EUC lecturers from the sub-department EUC and one technical support staff member from the University were included in two focus group interviews. This phase was conducted with the researcher aiming to explore the engagement of lecturers using an e-assessment tool. This took place in the lecturers’ natural setting, as qualitative researchers study phenomena in natural settings to make sense of them (Creswell 1998; Henning, Van Rensburg, and Smit 2004). Two focus group interviews allowed participants to share their feelings, opinions, experiences, behaviours and meanings (Denzin 2001; Rubin and Rubin 1995). This was also a useful way of gathering large volumes of data quickly (Marshall and Rossman 1999). The first focus group consisted of three lecturers and one technical support specialist who was directly involved with the administration of the e-assessment tool. The second focus group consisted of lecturers involved with the actual assessment of learners in the classroom environment. From this qualitative phase, variables were identified for further quantitative analysis.

Quantitative methods are based on numerical measurements of specific aspects of phenomena (Thomas 2003). A quantitative phase was used in this inquiry, utilising a structured questionnaire compiled from variables that were identified during the qualitative phase of the inquiry. In this phase, data were collected sequentially within an exploratory design from 330 EUC students to explore the correlation between variables identified in the first, qualitative phase of data collection. The questionnaire used for this purpose was compiled in collaboration with the statistical department at the University. The intent was to pose questions that could secure responses from students which could corroborate or negate lecturers’ perceptions of e-assessment. Students completed a questionnaire structured in three categories: background information, previous assessment experience, and e-assessment experience during EUC studies at the University.

The first section of the questionnaire was the demographic category to secure background information from the students. This was to check the representativeness of the sample and to enable the researcher to make statistical comparisons (Gray et al. 2007). An example of the type of background information in this section was the “home language of students”. Through the analysis of this background information the researcher can, for example, determine whether the students are comfortable being assessed in English. The second part of the questionnaire included contingency questions with which the researcher determined students’ previous experiences with an e-assessment tool. Contingency questions are used to identify a subgroup for further questioning (Gray et al. 2007). An example from this section is where students need to answer “yes” or “no” to a question: if they answer in the affirmative, they complete a particular subset of questions and ignore others, as the sketch below illustrates.
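The routing behind a contingency question can be pictured with a short, purely hypothetical Python sketch; the question texts and field names below are assumptions for illustration and are not taken from the actual questionnaire.

```python
# A "yes" on the contingency (screening) question routes the respondent
# to the follow-up subset; other respondents skip it entirely.
respondent = {
    "used_e_assessment_before": "yes",  # contingency question
}

followup_questions = [
    "Which e-assessment tool did you use previously?",
    "How often were you assessed electronically?",
]

if respondent["used_e_assessment_before"] == "yes":
    for question in followup_questions:
        print(question)  # only this subgroup answers these items
```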

The last section of the questionnaire consisted of filtered and Likert scale-type questions. A filtered question is posed to determine which respondents have sufficient information on an issue to provide meaningful opinions (Gray et al. 2007). An example of this type of question is when the researcher asks, for example, “were students introduced by way of a tutorial before doing the electronic assessment?”, to which students only need to reply “yes” or “no”. The final questions were based on a Likert scale format, where students select the category which they believe best reflects their perspective on the statement.


Likert scales present respondents with a series of statements on which they indicate their levels of agreement or disagreement (Bryman and Cramer 2003). One question, for example, was posed to determine whether the terminology used in the electronic assessment environment was similar to the terminology used in class.

Data analysis

Henning, Van Rensburg, and Smit (2004) state that the true test of a competent qualitative researcher comes in the analysis of the data. This process requires analytical craftsmanship and the ability to capture an understanding of the data in writing. Merriam (1998) states that data analysis is the process of making sense of the data by evaluating, consolidating, reducing and interpreting what the participants said. It relates to the researcher securing information from diverse sources as identified by way of accepted research principles. Interviews were recorded and copies stored as audio files on CD-ROM for archiving purposes. Transcriptions were perused several times before beginning the process of analysis (Creswell 2002). According to Henning, Van Rensburg, and Smit (2004), one needs first to read through the entire text in order to get a global impression of the content. After this, data were categorised into themes or categories, as shown in Figure 1. Quantitative analysis of the survey data was done by the statistical services department, Statcon, which used the Statistical Package for the Social Sciences (SPSS) programme to process the questionnaire.

Figure 1. Coding from texts (adapted from Henning, Van Rensburg, and Smit 2004).


We initially requested a factor analysis in order to group related variables together as one combined variable or factor. Variables that related to a specific group were then compiled in such a way as to form a combined variable. Bryman and Cramer (2003) state that a “factor analysis is a set of techniques for determining the extent to which variables that are related can be grouped together so that they can be treated as one combined variable or factor, rather than as a series of separate variables” (121). Sapsford and Jupp (2006) support this by stating that a factor analysis is a technique based on assessing the correlation between variables. Simplification of the data is achieved by identifying groups of variables which are connected and may then be regarded as part of a single factor.
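As an illustration of this grouping step, the sketch below runs an exploratory factor analysis on simulated Likert-item responses using scikit-learn. The simulated data, the choice of three factors and the varimax rotation are assumptions made for the example; they are not the output of the authors' Statcon/SPSS analysis.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Simulated responses: 330 students x 10 Likert items
# (1 = Strongly Disagree ... 4 = Strongly Agree).
responses = rng.integers(1, 5, size=(330, 10)).astype(float)

# Extract three factors with a varimax rotation, mirroring the three dimensions
# reported in this inquiry (language/usability, fairness, preparation).
# The rotation argument requires scikit-learn 0.24 or newer.
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(responses)

# Loadings: one row per factor, one column per questionnaire item.
# Items that load strongly on the same factor can be treated as one combined variable.
loadings = fa.components_
print(np.round(loadings, 2))
```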

In this inquiry, an exploratory factor analysis was used to indicate how many groups or dimensions were found within the questionnaire. According to Bryman and Cramer (2003), exploratory factor analysis is a more formal method of determining how many groups there are in quantitative data analysis. In order to determine the extent to which the variables were correlated with each other we used Cronbach’s alpha (α). According to Sapsford and Jupp (2006), Cronbach’s alpha is a statistic calculated to assess the extent to which items in a scale are correlated with each other; these items should be highly correlated if they all measure the same thing. Cronbach’s alpha will generally increase as the correlation between the items increases, which is why the coefficient is also called the internal consistency reliability of the test (Schmitt 1996). Cronbach’s alpha assessed the extent to which the items on a subscale were measuring the same thing, and allowed us to discard some of the items to produce a more unidimensional subscale with less internal error variance (Sapsford and Jupp 2006). Cronbach’s alpha was, therefore, used to test the consistency and reliability of the unidimensional subscales of the dimensions found in the questionnaires provided to students as part of the quantitative analysis.
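Cronbach's alpha can be computed directly from the item scores using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale). The minimal sketch below applies it to simulated data for one subscale; the data are assumptions for illustration only.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, one row per respondent, one column per scale item."""
    k = items.shape[1]                              # number of items in the subscale
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Example: 330 respondents answering the five "language and usability" items
# on the four-point scale (simulated data, for illustration only).
rng = np.random.default_rng(1)
subscale = rng.integers(1, 5, size=(330, 5)).astype(float)
print(round(cronbach_alpha(subscale), 3))
```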

All quantitative data were analysed with the SPSS software, displayed as tables, charts and numerical statistical measurements which were then interpreted further.

Findings

The qualitative data referred to in this inquiry have been thoroughly interpreted and categorised based on their relevance (Sapsford and Jupp 2006). The following three categories related to lecturers’ perceptions of student engagement with the electronic assessment tool emerged from the qualitative analysis of the two focus group interviews: issues related to language, issues related to the electronic assessment system and issues related to teaching and learning with the electronic assessment tool.

In issues related to language, lecturers were concerned with the language and terminology used in questions and in the phrasing of questions. Lecturers felt that the language used in the electronic assessment tool was too difficult for the students to understand and that students experienced difficulty understanding the scope and context of the questions. The students’ responses, however, contrast with these concerns: 79.40% of students were of the opinion that the language used in electronic assessments was understandable, and 56.90% agreed that the terminology used in class and in the electronic assessment environment was the same.
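Percentages such as the 79.40% and 56.90% reported here are simply the share of respondents choosing Agree or Strongly Agree on the relevant item. A minimal pandas sketch, using assumed response data and the four-point coding described later in the paper:

```python
import pandas as pd

# Assumed coding: 1 = Strongly Disagree, 2 = Disagree, 3 = Agree, 4 = Strongly Agree.
responses = pd.DataFrame({
    "language_understandable": [4, 3, 3, 2, 4, 3, 1, 3, 4, 3],
    "same_terminology_as_class": [3, 2, 4, 2, 3, 1, 3, 2, 4, 3],
})

# Percentage of students who agreed or strongly agreed with each statement.
percent_agree = (responses >= 3).mean() * 100
print(percent_agree.round(2))
```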

The fact that students came from countries all over the African continent also had an impact on their engagement with the tool. These students attended school in their home tongue and at times did not receive training in English. This was corroborated when lecturers mentioned that they had to explain certain concepts to students but found that, after several attempts, they failed to convey the information in such a way that students understood.

In issues related to the e-assessment system, both lecturers and students experienced major challenges in their engagement with the electronic assessment system. The system challenged lecturers by not allowing them to import assessments from multiple campuses. As the EUC offices are located on different campuses of the University, it is at times necessary to import assessments from one campus to another.

Students experienced the time allowed for assessments as insufficient. The slow pace of the system negatively affected all persons utilising it: the poor performance of the system (including delayed responses and “hang time”) impacted negatively on students in that they had difficulty completing exams in the allotted time period.

Moreover, lecturers had difficulty in viewing results. The system also displayed the results achieved by students immediately upon submission of the tests, and this functionality resulted in instances where students could benefit unfairly. It was found that the functionality to view results could affect fellow students’ performance. To address this possible unfair practice, certain administrative instructions were activated on the system to deactivate the viewing functionality. Both lecturers and students were concerned with the security of the system in terms of access to results during assessment sessions. From the quantitative data analysis, 68.1% of the students were found to be concerned with privacy issues.

Issues related to teaching and learning with the e-assessment tool also emerged from the qualitative data. Electronic assessment is a new concept at the University, and lecturers are of the opinion that it is important for students to be exposed to a standard way of assessment. Lecturers have specific teaching styles, and the questioning used in the electronic assessment may not be aligned with the way they teach. Lecturers need training on the use of the electronic assessment tool, and it is also important for them to address challenges during assessment in class.

Lecturers believe that knowledge of the assessment system will put them in full control of their teaching and assessment. This capacity will also allow lecturers to solve their own challenges on the system and to appear more professional from a student’s perspective. Some lecturers even requested accredited training on the electronic assessment system.

On the other hand, the following three dimensions related to students’ engagement with an electronic assessment tool emerged from the quantitative analysis of the questionnaire: language and usability of electronic assessment, fairness of electronic assessment, and preparation for electronic assessment. The dimension relating to preparation for electronic assessment incorporates three variables which were grouped together as one combined variable: the tutorial assisted students to understand the electronic assessment environment; the tutorial prepared students adequately for the electronic assessment session; and the assessment provided a true reflection of ability at the time. Students were of the opinion that the preparation for the electronic assessment was sufficient. This is in direct contrast to the lecturers’ comments in the qualitative interviews, where lecturers stated that, if students were allowed access to the system beforehand, they would be more successful with assessment results.

During the quantitative analysis, a four-point Likert scale was used to determine the students’ level of agreement with the variables identified during the qualitative phase. The following four-point Likert scale was used in this inquiry: 1: Strongly Disagree (SD); 2: Disagree (D); 3: Agree (A); and 4: Strongly Agree (SA).


The average value for each dimension within the quantitative component was calculated, and these values, on the scale of 1 to 4, were used to interpret each group of variables. The average of each dimension from the quantitative analysis was calculated with the following formula:

Average per dimension = (sum of the item means) / (number of items)

Figure 2 shows the first dimension of the quantitative analysis, language and usability of electronic assessment. Five items were grouped together as one combined variable, the sum of the means being 14.20, so the average for this dimension was 2.84. This is closest to 3 (Agree) on the Likert scale. It can therefore be said that the majority of the students agreed that the language and usability of the electronic assessment tool are understandable.

Figure 2. Items and scale statistics of language and usability of electronic assessment.

In Figure 3, the second dimension of the quantitative analysis, fairness of the electronic assessment tool, two items were grouped together as one combined variable. The sum of the means was 4.72 and the number of items was two, so the average for this dimension is 2.36. This is closest to 2 (Disagree) on the Likert scale. It can therefore be said that the majority of the students disagreed that the electronic assessment tool is fair.

Figure 3. Fairness of electronic assessment.

In Figure 4, the last dimension of the quantitative analysis, preparation for electronic assessment, three items were grouped together as one combined variable. The sum of the means was 8.09 and the number of items was three, so the average for this dimension is 2.69. This is closest to 3 (Agree) on the Likert scale. It can therefore be said that the majority of students also felt that the electronic assessment results were a true reflection of their ability at the time.
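Applying the formula above to the sums of item means reported for the three dimensions reproduces the averages quoted in the text; a minimal sketch of that arithmetic:

```python
# Sum of the item means and number of items for each dimension, as reported above.
dimensions = {
    "language and usability": (14.20, 5),
    "fairness": (4.72, 2),
    "preparation": (8.09, 3),
}

for name, (sum_of_means, n_items) in dimensions.items():
    average = sum_of_means / n_items
    print(f"{name}: {average:.2f}")

# Output:
# language and usability: 2.84  (closest to 3, Agree)
# fairness: 2.36                (closest to 2, Disagree)
# preparation: 2.70             (closest to 3, Agree; reported as 2.69 in the text)
```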



Figure 4. Preparation for electronic assessment.

The lecturers’ concerns from the qualitative analysis were, therefore, largely confirmed by the quantitative analysis. These concerns are dealt with in more detail below.

Conclusion

With the implementation of the CompAssess system at the University, it became apparent that several “gaps” exist within the system. It had become critical to identify and implement a system that would allow more time to be spent with an increasing number of students at the University. Students expect more time to be allocated to them for class presentations, facilitation and assessment. The use of the electronic assessment tool CompAssess has significantly contributed to alleviating some of this pressure. As with all systems, CompAssess needs to be continuously evaluated and corrective steps taken with the system developers. Lecturers should receive comprehensive training on the system. These trained, accredited staff with specific administrative rights should be allowed to access all administrative functions on the system and be able to address and correct challenges as they arise or are reported by students undertaking assessments. Lecturers also need access to the system in order to prepare and run initial assessment sessions to test the system, and to undertake an actual assessment exam to verify system credibility and performance.

Another requirement of lecturers is the necessity to secure authorised access to the system in order to take control of setup and preparation settings. This access is also required to resolve system failures or crashes whenever they occur. It is important that lecturers have greater knowledge of the system so that they can address challenges and explain them to students who require explanations of system issues.

Students should be allowed access to the system before assessments take place. There should be practical sessions where students are exposed to the system, and students should be able to take a practice assessment session to prepare for the assessment to be completed for marks.

The security of an electronic assessment system is a priority. Unauthorised access to an electronic assessment system, or observation of the electronic assessment system content, should not be allowed.

This inquiry is relevant to developments in technology and processes related to the information technology (IT) landscape. Educational technologies have developed at such a fast pace that it has become critical to keep up to date in order to benefit optimally from these advances. This paper provides a contextualised overview of challenges facing both lecturers and students at a university engaging with a single electronic assessment tool.

Notes on contributors

Roelien Brink is currently a deputy head of department and lecturer in the Department of Business Information Technology at the University of Johannesburg in South Africa. She received her MEd (computer-based education) in 2009 and is currently busy with her PhD in information and knowledge management. Her current research involves the development of a framework for information management for the work-integrated learning process at a higher education institution.

Dr Geoffrey Lautenbach is currently a senior lecturer in educational computing (undergraduate and postgraduate programmes). He also supervises a number of postgraduate students at both doctoral and masters’ level in the field of educational technologies in education. One of his recent publications is “Stories of engagement with e-learning: Revisiting the taxonomy of learning”, International Journal of Information and Communication Technology Education 4, no. 3.

References

Benson, A.D. 2003. Assessing participant learning in online environment. New Directions for Adult and Continuing Education, no. 100: 69–78.

Bergman, M.M. 2008. Advances in mixed methods research. London: Sage.

Booth, R., B. Clayton, P. Hyde, R. Hartcher, and S. Hungar. 2002. Creative, quality online assessment. Progress report to FLAG, July 2002. In Australian Flexible Learning Framework, Assessment and Online Teaching. http://www.flexiblelearning.net.au/guides/assessment.pdf (accessed January 29, 2008).

Bryman, A., and D. Cramer. 2003. Advanced quantitative data analysis. Philadelphia, PA: McGraw Hill Open University Press.

Creswell, J.W. 1998. Qualitative inquiry and research design: Choosing among five traditions. Thousand Oaks, CA: Sage.

Creswell, J.W. 2002. Educational research: Planning, conducting and evaluating quantitative and qualitative research. Upper Saddle River, NJ: Merrill Prentice Hall.

Creswell, J.W. 2003. Research design: Qualitative, quantitative and mixed methods approaches. 2nd ed. Thousand Oaks, CA: Sage.

Denzin, N.K. 2001. The reflexive interview and performative social life. Qualitative Research 1: 23–46.

Graff, M. 2004. Cognitive style and attitudes towards using online learning and assessment methods. Electronic Journal of e-Learning 2, no. 2.

Gray, P.S., J.B. Williamson, D.A. Karp, and J.R. Dalphin. 2007. The research imagination: An introduction to qualitative and quantitative methods. New York: Cambridge University Press.

Henning, E., W. Van Rensburg, and B. Smit. 2004. Finding your way in qualitative research. Pretoria: Van Schaik.

Heywood, J. 2000. Assessment in higher education. London: Jessica Kingsley.

Hricko, M., and S.L. Howell. 2006. Online assessment and measurement: Foundations and challenges. Hershey, PA: Information Science Publishing.

Marshall, C., and G.B. Rossman. 1999. Designing qualitative research. 3rd ed. London: Sage.

Masterskill. 2006. CompAssess. http://onsitetraining.compassessonline.co.za/ (accessed April 20, 2007).

Merriam, S.B. 1998. Qualitative research and case study applications in education. San Francisco, CA: Jossey-Bass.

Pangali, C. 2003. Using technology to measure learning. Chief Learning Officer. http://clomedia.com/articles/view/using_technology_to_measure_learning

Parshall, C.G., J.A. Spray, J.C. Kalohn, and T. Davey. 2002. Practical considerations in computer-based testing. New York: Springer.

Patton, M.Q. 2001. Qualitative evaluation and research methods. 3rd ed. Thousand Oaks, CA: Sage.

Rubin, J., and S. Rubin. 1995. Qualitative interviewing: The art of hearing data. London/Thousand Oaks, CA: Sage.

Sapsford, R., and V. Jupp. 2006. Data collection and analysis. 2nd ed. London: Sage.

Schmitt, N. 1996. Uses and abuses of coefficient alpha. Psychological Assessment 8, no. 4: 350–3.

Shelly, G.B., T.J. Cashman, G.A. Waggoner, and W.C. Waggoner. 2000. Discovering computers. Brief ed. San Francisco, CA: Boyd & Fraser.

Stiggins, R.J. 1987. The design and development of performance assessment. Educational Measurement: Issues and Practice 6: 33–42.

Thomas, R.M. 2003. Blending qualitative and quantitative research methods in theses and dissertations. Thousand Oaks, CA: Corwin Press.
