
CONTINUOUS ASSESSMENT IN COMMUNICATION SKILLS AT UNIVERSITY OF JOHANNESBURG

by

EMMANUEL MOGOBOYA MATSEBATLELA

Submitted in partial fulfilment of the requirements for the

MAGISTER TECHNOLOGIAE: EDUCATION

in the

Department of Postgraduate Studies

in Education

FACULTY OF EDUCATION TSHWANE UNIVERSITY OF TECHNOLOGY

Supervisor: Prof CJ White

May 2005

DECLARATION

I hereby declare that the dissertation submitted for the degree Magister Technologiae: Education at Tshwane University of Technology is my own original work and has not been submitted to any other institution of higher education. I further declare that all sources cited or quoted are indicated and acknowledged by means of a comprehensive list of references.

EMMANUEL MATSEBATLELA

© Tshwane University of Technology 2005


DEDICATION

I would like to dedicate this study to the following people:

My parents, John and Paulina Matsebatlela, for their unwavering support and constant encouragement.

My brothers, Thabe, Ntsakane and Mogotlo, and my sisters, Mosike and Elizabeth, for their assistance and moral support.

My fiancée, Molebogeng, for her patience and understanding.


ABSTRACT

This study investigates educators’ and learners’ experiences of continuous assessment (CASS) in the subject Communication Skills. Although continuous assessment is implemented in schools and tertiary institutions throughout South Africa, this study confines itself to the teaching and learning of the subject Communication Skills at the Doornfontein campus of the University of Johannesburg (UJ).

The study specifically focuses on three departments, namely Civil Engineering, Radiography and Somatology; these were the only departments in which learners were doing Communication Skills as a continuous assessment subject. The researcher used both qualitative and quantitative approaches in this study. Questionnaires were distributed to learners, and interviews were conducted with educators.

The results from the data collected revealed that the university did not provide the educators with sufficient training in CASS. As a result, the educators were not adequately prepared to implement CASS properly. It is, therefore, crucial that the university provide such training to ensure the proper implementation of CASS.

Since this study focused only on the subject Communication Skills at one campus (Doornfontein) of UJ, further research is invited on CASS practices in other subjects and at other institutions of higher learning.


ACKNOWLEDGEMENTS

I would like to acknowledge:

My supervisor, Prof CJ White, for his guidance, positive attitude, expert advice and encouragement.

STATKON at the University of Johannesburg for the statistical analysis of the questionnaires and expert advice.


TABLE OF CONTENTS

DECLARATION
DEDICATION
ABSTRACT
ACKNOWLEDGEMENTS
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
CHAPTER 1 – INTRODUCTION AND BACKGROUND
1.1 INTRODUCTION
1.2 BACKGROUND
1.3 STATEMENT OF THE RESEARCH PROBLEM
1.4 RESEARCH QUESTIONS
1.4.1 Grand tour question
1.4.2 Sub-questions
1.5 OBJECTIVES OF THE STUDY
1.6 RESEARCH METHODOLOGY
1.6.1 Research design
1.6.2 Population and Sampling
1.6.2.1 Population
1.6.2.2 Sampling
1.6.3 Data Collection
1.6.3.1 Questionnaires
1.6.3.2 Interviews
1.6.3.3 Literature Study
1.6.4 Data Analysis
1.7 DELIMITATIONS OF THE STUDY
1.8 CLARIFICATION OF CONCEPTS
1.9 SIGNIFICANCE OF THE STUDY
1.10 EXPOSITION OF THE STUDY
1.11 CONCLUSION
CHAPTER 2 – LITERATURE REVIEW
2.1 INTRODUCTION
2.2 ASSESSMENT
2.2.1 Summative assessment
2.2.2 Formative assessment
2.2.3 Norm referenced assessment
2.2.4 Criterion referenced assessment
2.3 CONTINUOUS ASSESSMENT
2.4 TRADITIONAL ASSESSMENT VS CONTINUOUS ASSESSMENT
2.5 OUTCOMES-BASED EDUCATION AND CONTINUOUS ASSESSMENT
2.6 STRATEGIES FOR CONTINUOUS ASSESSMENT
2.6.1 Self assessment
2.6.2 Peer assessment
2.6.3 Projects
2.6.4 Portfolios
2.7 CONCLUSION
CHAPTER 3 – RESEARCH METHODOLOGY
3.1 INTRODUCTION
3.2 QUANTITATIVE AND QUALITATIVE RESEARCH APPROACHES
3.3 RESEARCH DESIGN
3.3.1 Population and Sampling
3.3.1.1 Population
3.3.1.2 Sampling

3.3.2 Data collection
3.3.2.1 Quantitative data collection
3.3.2.2 Qualitative data collection
3.3.2.3 Literature study
3.3.3 Data analysis
3.3.3.1 Qualitative data analysis
3.3.3.2 Quantitative data analysis
3.4 ESTABLISHING TRUSTWORTHINESS
3.4.1 Qualitative data
3.4.1.1 Credibility
3.4.1.2 Transferability
3.4.1.3 Dependability
3.4.2 Quantitative data
3.4.2.1 Validity
3.4.2.2 Reliability
3.5 PILOT STUDY
3.5.1 Quantitative pilot study
3.5.2 Qualitative pilot study
3.6 CONCLUSION
CHAPTER 4 – RESULTS AND FINDINGS
4.1 INTRODUCTION
4.2 QUALITATIVE DATA
4.2.1 CASS training for Communication Skills educators
4.2.2 Implementation of CASS
4.2.3 Problems encountered in the implementation of CASS
4.2.4 Learning through CASS
4.2.5 The main difference between learning through CASS and writing final exams
4.2.6 Ensuring that learners understand exactly what educators expect of them in CASS
4.2.7 Learners’ understanding of their roles in CASS
4.2.8 Provision of timeous feedback on assessments
4.2.9 Learners’ attendance and motivation
4.2.10 How learners coped with CASS
4.2.11 The move from final exams to CASS
4.2.12 Learners’ attitudes towards group work activities
4.3 QUANTITATIVE DATA
4.3.1 Responses from learners
4.3.1.1 Definition of outcomes
4.3.1.2 Different assessment methods
4.3.1.3 Encouraging active participation from learners
4.3.1.4 Involving learners in assessment
4.3.1.5 Provision of timeous feedback
4.3.1.6 Learning from lectures
4.3.1.7 Learning from feedback
4.3.1.8 Understanding the importance of assessment
4.3.1.9 Learning through CASS
4.3.1.10 Repetition of difficult aspects of lectures by educators
4.3.1.11 Assisting learners who experience difficulty
4.3.1.12 The pace of lectures
4.3.1.13 Provision of outcomes in learner guides
4.3.1.14 Provision of assessment methods in learner guides
4.3.1.15 Availability of educators during consultation times
4.3.1.16 Assessment given by educators
4.3.1.17 Learning through CASS
4.3.1.18 Balance between teaching and assessment
4.3.1.19 Visiting educators during consultation times
4.3.1.20 Visiting the library
4.3.1.21 Number of group discussions held during the year
4.4 CONCLUSION
CHAPTER 5 – CONCLUSIONS AND RECOMMENDATIONS
5.1 INTRODUCTION
5.2 RESEARCH QUESTIONS
5.2.1 CASS training for educators
5.2.2 Problems encountered in the implementation of CASS
5.2.3 What the educators expected of learners in CASS
5.2.4 Provision of timeous feedback
5.2.5 Group work activities
5.2.6 Clear definition of outcomes
5.2.7 Different assessment methods
5.2.8 Active participation from learners
5.2.9 Involvement of learners in assessment
5.2.10 Provision of feedback
5.2.11 Learning from feedback provided by educators
5.2.12 Assisting learners who experienced difficulty in the subject
5.2.13 Clear outcomes provided in learner guides
5.2.14 Availability of educators during consultation times
5.2.15 Balance between teaching and assessment
5.2.16 Learners’ enthusiasm and motivation
5.3 SUGGESTIONS FOR FURTHER RESEARCH
5.4 FINAL CONCLUSION
BIBLIOGRAPHY
ANNEXURE A
ANNEXURE B
ANNEXURE C

LIST OF TABLES

Table 2.1 Similarities and differences between formative and summative assessment
Table 2.2 Summary of differences between criterion and norm referenced assessment
Table 2.3 Differences between traditional examinations and continuous assessment
Table 3.1 Differences between quantitative and qualitative research
Table 3.2 Summary of advantages and disadvantages of interviews and questionnaires
Table 4.1 Frequency distribution for learners who completed the questionnaires
Table 4.2 Definition of outcomes
Table 4.3 Different assessment methods
Table 4.4 Encouraging active participation from learners
Table 4.5 Involving learners in assessment
Table 4.6 Provision of timeous feedback
Table 4.7 Learning from lectures
Table 4.8 Learning from feedback
Table 4.9 Understanding the importance of assessment
Table 4.10 Learning through CASS
Table 4.11 Repetition of difficult aspects of lectures
Table 4.12 Assisting learners who experience difficulty
Table 4.13 The pace of lectures
Table 4.14 Outcomes provided in learner guides
Table 4.15 Assessment methods provided in learner guides
Table 4.16 Availability of educators during consultation times
Table 4.17 Assessment given by educators
Table 4.18 Learning through CASS
Table 4.19 Balance between teaching and assessment
Table 4.20 Visiting educators during consultation times
Table 4.21 Visiting the library
Table 4.22 Number of group discussions held during the year

LIST OF FIGURES

Figure 1.1 Exposition of study
Figure 3.1 Steps in analysing data

CHAPTER 1 – INTRODUCTION AND BACKGROUND

1.1 INTRODUCTION

The University of Johannesburg (UJ) is a career-oriented institution of higher learning which offers a wide range of qualifications. The university is located in Johannesburg and consists of five branches: the Doornfontein campus, the Auckland Park campuses (Kingsway and Bunting Road), the Eloff Street campus, the Soweto campus, and the East Rand campus. The focus of this study was the Doornfontein campus of the University of Johannesburg, which was previously known as Technikon Witwatersrand (TWR).

On 1 January 2005, Technikon Witwatersrand merged with Rand Afrikaans University (RAU) and two Vista University campuses (Soweto and East Rand) to form the University of Johannesburg.

Communication Skills is offered as a service subject to learners at the university. A service subject is one that does not provide learners with career-specific technical skills, but gives them a range of skills that they need in addition to their career-specific skills, in order to help them become better professionals in their chosen careers. The subject combines language, interpersonal and organisational communication skills. The main aim of the subject is to improve learners’ communication and language skills, thereby moulding them into well-rounded professionals who will contribute meaningfully to both the South African and global economies.

In South Africa, political, economic and social transformation has created new challenges to the way in which organisations communicate. The impact of global competition, in conjunction with South Africa’s internal challenges of democratisation, job creation, training and skills development, a powerful trade union movement and the African Renaissance, calls for new approaches and a revision of old principles. In these changed circumstances, a new breed of organisational communicators is required to take their rightful place at the highest level of South African business (Mersham & Skinner, 2001:1).

1.2 BACKGROUND

In many countries with exit-level examinations, educators are expressing an increased interest in continuous assessment. This interest appears to arise from two related but different educational concerns. First, educators recognise that good instruction requires a constant stream of information about learner progress, or about possible reasons for a lack of progress. Both learners and educators benefit from systematic and focused feedback during the teaching-learning process. The second reason for educators’ increased interest in continuous assessment is a concern about fairness to learners. It appears unfair to learners to place the weight of evaluating their worth on one examination which comes at the end of the year or the end of the term (Nitko, 1995:1).

South African society constantly undergoes change, and the education and training arenas are no exception. There has been a radical transformation of education and training. One of the most challenging aspects of this transformation is the adoption of an Outcomes-Based Education (OBE) approach, which underpins the introduction of Curriculum 2005 (Le Grange & Reddy, 1998:1). OBE is mainly based on the attainment of learning outcomes. Assessment, which is central to the achievement of outcomes, is the way information is gathered to decide whether the learning outcomes have been properly attained. Continuous Assessment (CASS) is an assessment approach central to OBE, which integrates teaching, learning and assessment.

1.3 STATEMENT OF THE RESEARCH PROBLEM

At the University of Johannesburg, there is a gradual shift towards continuous assessment. Some of the subjects offered are already assessed by means of CASS, and these include Communication Skills. The subject is only assessed by means of CASS in some departments, while other departments still employ the traditional exam method.


Sound CASS is based on three premises:

The purpose of CASS is to inform teaching and to improve learning while there is still time to do so.

CASS calls for graded assessments that are based on several methods of assessment.

CASS must be valid, reliable and fair.

It was on these premises that CASS was introduced at the University of Johannesburg, but a problem arose because the structures to enable the effective implementation of CASS were not well thought through. As a result, the practice of CASS was divergent and inconsistent across the various departments.

1.4 RESEARCH QUESTIONS

The questions which gave rise to this research project are as follows:

1.4.1 Grand tour question

How successful is continuous assessment in the subject Communication Skills?

1.4.2 Sub-questions

How do learners experience CASS in the subject Communication Skills?

How do educators experience CASS in the subject Communication Skills?


1.5 OBJECTIVES OF THE STUDY

The aims of this research were:

To explore the experiences of learners and educators regarding CASS in the subject Communication Skills.

To examine how educators at the University of Johannesburg were implementing continuous assessment in their teaching and assessment of the subject Communication Skills.

1.6 RESEARCH METHODOLOGY

1.6.1 Research Design

The researcher used both qualitative and quantitative approaches in this study. Questionnaires were distributed and interviews were also conducted. Descriptive research was used to describe the experiences of respondents.

1.6.2 Population and Sampling

1.6.2.1 Population

The population consisted of learners registered at the Doornfontein campus of the University of Johannesburg. These learners were registered for Communication Skills as a continuous assessment subject. The necessary arrangements for the distribution of questionnaires to learners were made with the educators of the selected classes. Educators who taught Communication Skills as a CASS subject at the University of Johannesburg also formed part of the population. The necessary arrangements were made with the educators to be interviewed.


1.6.2.2 Sampling

The researcher used simple random sampling as a probability sampling technique. Three classes in which learners were doing Communication Skills as a continuous assessment subject during the 2004 academic year were selected. All the learners in this study were first-year learners. The researcher selected 156 learners spread over three classes. These classes consisted of learners from the following departments: Civil Engineering, Radiography, and Somatology. Questionnaires were then distributed to learners in the three classes. Purposive sampling was used to select four educators who were teaching Communication Skills as a CASS subject.

1.6.3 Data Collection

1.6.3.1 Questionnaires

A questionnaire was used to gather information from the learners. The questionnaire mainly investigated the learners’ experiences of CASS in the subject Communication Skills.

The questionnaire included:

Statements to which respondents had to respond by selecting from the following options: 1-4, and 5 or more.

Statements to which respondents had to respond by choosing from the following options: always, sometimes and never.

Some of the questions were open questions, which gave respondents the opportunity to air their views.


1.6.3.2 Interviews

Interviews were conducted with four Communication Skills educators in order to find out about their views on, and experiences of, teaching Communication Skills as a CASS subject.

1.6.3.3 Literature Study

A literature study was done to determine the views of different authors on continuous assessment.

1.6.4 Data Analysis

Interviews conducted with educators were transcribed before being analysed. Cross-tabulations were the main statistical method used to analyse the data. Frequency tables were also used to illustrate the overall response of learners.

1.7 DELIMITATIONS OF THE STUDY

In spite of the fact that continuous assessment is implemented in schools and tertiary institutions throughout South Africa, this study confines itself to the teaching of Communication Skills at the Doornfontein campus of the University of Johannesburg. The Doornfontein campus consists of three faculties: Engineering and the Built Environment; Health Sciences; and Art, Design and Architecture. The study specifically focuses on three departments, namely, Civil Engineering, Radiography and Somatology. The study of learners and educators at the University of Johannesburg will ultimately cast more light on the challenges and issues facing learners and educators who are currently implementing CASS in South Africa.


1.8 CLARIFICATION OF CONCEPTS

Continuous assessment (CASS): This refers to the ongoing assessment of learners’ knowledge, skills and attitudes in terms of the learning outcomes that they are required to achieve over a period of time (Le Grange & Reddy, 1998:37).

Outcomes: These are the end products of the learning process. They state clearly what skills, knowledge and values a learner should be able to demonstrate and apply appropriately (Le Grange & Reddy, 1998:38).

Formative assessment: This form of assessment involves the educator’s intervention during the learning process to gather feedback, which is then used to guide subsequent teaching and learning (Brooks, 2002:15).

Summative assessment: This refers to assessment that takes place at the end of the learning process. It is the final summing up of achievements at the end of a learning process. It usually comprises one main test or examination that is written at the end of the school year (Kramer, 1999:43).

Norm referencing: This refers to the assessment of a learner’s performance by comparing it with that of other learners in a group (Le Grange & Reddy, 1998:38).

Criterion referencing: This is the assessment of a learner’s performance by measuring it against a set of predetermined criteria (Le Grange & Reddy, 1998:37).

1.9 SIGNIFICANCE OF THE STUDY

In view of the fact that many educators in schools and tertiary institutions in South Africa implement continuous assessment in their teaching practices, it is essential for them to be exposed to the experiences that learners and educators have of CASS, so that they gain more insight into this approach. It is, therefore, imperative to conduct this study.


Against this background, this study aimed to investigate the challenges that are faced by educators and learners in the implementation of CASS. As part of this study, the researcher provided, inter alia, various ways of implementing CASS. The recommendations in this study will provide educators with the knowledge to understand the fundamental reasons for implementing CASS in their teaching-learning environment. They will also enable educators to be prepared to cope with the challenges emanating from the implementation of this approach, in order to ensure effective learning among learners.

1.10 EXPOSITION OF THE STUDY

For a graphical illustration of the exposition of this study, see Figure 1.1 below.


Figure 1.1 Exposition of study: a summary of Chapters 1-5. Chapter 1 – Introduction and background; Chapter 2 – Literature study; Chapter 3 – Research methodology; Chapter 4 – Research results and findings; Chapter 5 – Conclusions and recommendations.

1.11 CONCLUSION

In the past, teaching and assessment were seen as separate processes. Teaching practices focused mainly on the development of a learner’s memory capacity, while assessment practices were mostly summative and norm referenced, rather than formative and criterion referenced. However, current assessment practices combine teaching, learning and assessment. Continuous assessment is of particular interest to the researcher because it has been adopted by the education authorities as being more relevant to outcomes-based education in South Africa, and it makes teaching, learning and assessment part of the same process. Since CASS integrates teaching, learning and assessment, the researcher is of the opinion that this approach should be a central part of the planning and design stages of a teaching and learning programme.


CHAPTER 2 – LITERATURE REVIEW

2.1 INTRODUCTION

In Chapter 1, the focus was on the formulation of the background to this research project. Important aspects of this study, such as the research questions, the objectives of the study, a short layout of the research methodology, the significance of the study and the exposition of the whole research project, were provided and discussed. Chapter 2 is a focused literature study on assessment and continuous assessment, a comparison between traditional assessment methods and continuous assessment, and different strategies for implementing continuous assessment, as well as a brief discussion of the relationship between outcomes-based education and continuous assessment.

2.2 ASSESSMENT

Assessment occurs when judgements are made about a learner’s performance, and entails gathering and organising information about learners in order to make decisions and judgements about their learning (Le Grange & Reddy, 1998:3). Educators need to become competent in selecting and using assessments, because they should focus their assessment activities on the information they need to make certain educational decisions (Nitko, 2001:5).

Nitko (2001:5) provides the following guiding principles that need to be followed in order to select and use educational assessments meaningfully:

Educators should be clearly aware of the learning outcomes they want to assess.

Educators need to be sure that the assessment techniques they select really match the learning outcomes.

Educators have to select the assessment techniques that serve the needs of learners.


Educators must try to use various performance indicators for each learning outcome.

When interpreting the results of assessment, educators need to take the learners’ limitations into account.

Le Grange and Reddy (1998:4-5) identify four main terms used to describe assessment, namely summative, formative, norm referenced and criterion referenced. It is essential that we familiarise ourselves with these terms, as they are central to educational assessment.

2.2.1 Summative assessment

Summative assessment refers to assessment that takes place at the end of the learning process. It usually comprises one main test or examination that is written at the end of the school year. This assessment is given at the conclusion of a learning unit and aims to determine whether learning has occurred and whether the predetermined outcomes have been achieved (McDonald, 2002:12). Summative assessment mainly determines how much of the subject’s content the learners know. According to McDonald (2002:12), the purpose of summative assessment is to describe the quality of a learner’s achievement after a teaching-learning process has been completed. Learners who pass the examination will be promoted to the next grade, whereas those who fail will have to remain in the same grade. This form of assessment is almost always norm referenced (Le Grange & Reddy, 1998:4). Norm referenced assessment is discussed later in this chapter.

2.2.2 Formative assessment

Formative assessment is conducted as the learning process takes place and is used to guide or inform the learning process (Le Grange & Reddy, 1998:4). According to Brooks (2002:15), formative assessment involves an educator’s intervention during the learning process. This intervention is aimed at collecting feedback, which will then be used to guide future teaching and learning. According to Brooks (2002:15), for the activity to be regarded as formative, educators need to do more than just assess learners regularly. Brooks (2002:15) advises that, if educators and learners fail to act upon feedback, then the activity is not genuinely formative. Formative assessment helps to make decisions about how to proceed with the learning process. It allows us to make adjustments and to take account of new issues, learning problems, changes or other factors that influence learning (Kramer, 1999:43). Slavin (in McDonald, 2002:12) states that while formative assessment asks, “How are you doing?”, summative assessment asks, “How did you do?”.

We need both formative and summative assessment, and we should have assessment strategies that include both components (Kramer, 1999:43). McDonald (2002:12) states that it is essential for both formative and summative assessment to be consistent, as this consistency is attained when both are based on the learning outcomes established at the beginning of a learning programme. Continuous assessment relies on formative assessment to help learners and educators to check on how learning progresses.

Table 2.1 is a comparison of formative and summative assessment, as adapted from McDonald (2002:13).

Table 2.1 Similarities and differences between formative and summative assessment

Formative assessment | Summative assessment
Occurs during the process of learning. | Occurs at the completion of the learning process.
Assesses progress in a course. | Summarises achievement in a course.
Directs learning to achieve outcomes. | Assesses the achievement of outcomes.
Provides feedback. | Provides feedback.

(McDonald, 2002:13)

2.2.3 Norm referenced assessment

Norm referenced assessment describes a learner’s progress by comparing it to the standards set for a group. This form of assessment also foregrounds ranking positions in class to get a sense of where individual learners fit in the class (Kramer, 1999:44). Brooks (2002:46) states that the cardinal principle of norm referencing is that learners’ results are not only based on their own performance, but are also dependent on how well other learners in a class perform. Nitko (2001:13) observes that norm referenced assessment shows a learner’s position in a reference group, but does not explain what the learner knows or the tasks he or she is able to perform. Gipps (in Brooks, 2002:47) observes that competition, which is a central feature of norm referencing, can severely discourage learners who have few academic successes in competition with their contemporaries. Competition also discourages learners from assisting each other with their academic work, encourages learners to cover up misunderstandings, threatens peer relationships, tends to divide groups into higher and lower achieving learners, and discourages intrinsic motivation (Brooks, 2002:47).

2.2.4 Criterion referenced assessment

Criterion referenced assessment consists of certain criteria that learners are expected to achieve in a particular grade. This kind of assessment seems to provide more information about a learner’s competence in a particular area, compared with norm referenced assessment. According to Brooks (2002:45), all learners who satisfy the pre-set requirements are considered competent. Black (in Brooks, 2002:45) argues that formative assessment relies on criterion referenced assessment. Brooks (2002:46) regards criterion referencing as superior to norm referencing because it involves, inter alia, specifying achievement standards, which is a central principle of formative assessment.

Swezey (in Gultig, Lubisi, Parker & Wedekind, 1998:50) believes that it is essential for an educator to do a thorough task analysis in order to develop a valid criterion referenced test which is based upon adequate performance objectives. According to Gultig et al. (1998:50), such an analysis will enable educators to identify the critical elements needed to perform tasks successfully. In order for educators to develop an adequate criterion referenced test, they need to have access to task information on required skills and knowledge, necessary performances that must be accomplished, criteria associated with each performance that is identified, and conditions under which each performance must be accomplished (Gultig et al., 1998:50). According to Kramer (1999:44), criterion referencing gives all learners an equal and fair opportunity to achieve outcomes in various ways. Criterion referencing, as Brooks (2002:46) observes, is more democratic, as it gives every learner the potential to succeed if they satisfy the pre-determined achievement standards. It does not involve the ranking of performances and does not compare learners’ performances with one another.

Table 2.2 is a comparison of criterion and norm referenced assessment, as adapted from McDonald (2002:17).

Table 2.2 Summary of differences between criterion and norm referenced assessment

Norm referenced assessment | Criterion referenced assessment
Compares a learner’s performance to a reference group. | Compares a learner’s performance to pre-established criteria.
Discriminates the performance. | Describes the performance.
Relative performance reference. | Mastery reference.
Has a more diverse content domain. | Has a narrowly defined content domain.
Covers a smaller number of items for each objective. | Covers a larger number of items for each objective.
Eliminates easy items. | Includes easy items.
Focuses on ranking of learners. | Focuses on learners’ competency.
Provides percentile rank. | Provides percent-correct score.

(McDonald, 2002:17)


2.3 CONTINUOUS ASSESSMENT

Continuous assessment (CASS) is an approach which integrates teaching, learning and assessment into a uniform process. According to Kramer (1999:39), CASS aims to achieve the following three main results:

To help gather a wider range of evidence of learning that can be used for assessment.

To provide different and varied opportunities to gather evidence.

To spread and interweave assessment activities throughout the learning process, rather than to leave all assessment to the end of the process.

Current thinking is that we need to find ways of assessing in different ways and at different times. Higher education usually combines course work, project work, class marks, test results and other elements into a final assessment. Continuous assessment helps educators to plan a more effective, useful and fair assessment strategy, so that they can collect various kinds of evidence, at different times, to demonstrate that learning has occurred (Kramer, 1999:40).

According to Kramer (1999:39), one of the main goals of CASS is to spread

and interconnect assessment activities throughout the learning process, instead of doing

all assessment activities at the end of the learning process. Assessment, as Kramer

(1999:37) observes, should be ongoing and aim to provide learners with valuable

information about their learning progress. Richards and Lockhart (1994:188) state that

one of the essential parts of language teaching is providing feedback to learners. An

educator may give either positive or negative feedback which helps learners know how

well they have performed, motivate them and also build a supportive classroom climate.

It is, therefore, important for language educators to spread and interweave assessment

throughout the learning process so that learners can receive ongoing feedback on their

learning progress and learn from it. Kramer (1999:43) states that formative assessment,

which takes place during the learning process, is essential for making decisions on how to

proceed with the learning process. It also allows educators and learners to identify


learning problems and other factors that influence learning and then make the necessary

adjustments.

2.4 TRADITIONAL ASSESSMENT VS CONTINUOUS ASSESSMENT

According to Le Grange and Reddy (1998:5), traditional learning practices mainly focus

on developing a learner’s memory capacity and are, for the most part, summative and

norm referenced, rather than formative and criterion referenced. Educators make

judgements about what the learners know at the end of the school year in order to decide

whether they can be promoted to the next grade. In this system, the end product is

assessed. This end product mainly consists of the recall of information. This assessment

practice does not pay attention to the learners’ other skills, attitudes and levels of

competence that they can develop during their learning processes (Le Grange & Reddy,

1998:5).

Continuous assessment, as Le Grange and Reddy (1998:10) observe, does not just

involve teaching and testing for the purpose of grading; it provides feedback on the

learning process and the learner’s development and also focuses on the assessment of a

wider range of educational outcomes. Continuous assessment is mainly concerned with

the assessment of the whole learner throughout the learning process, and provides

feedback that facilitates further positive learning.

According to Le Grange and Reddy (1998:10), continuous assessment also does the

following:

It helps to identify the strengths and weaknesses of learners.

It encourages educators and learners to communicate with each other.

It works closely with evaluation and, therefore, provides essential information on

curriculum issues like teaching methods and the relevance of learning outcomes

and resources.


Table 2.3 illustrates the major differences between traditional examinations and

continuous assessment as adapted from Le Grange and Reddy (1998:11).

Table 2.3 Differences between traditional examinations and continuous assessment

Traditional assessment | Continuous assessment
Mainly consists of written examinations that take place in formal settings. | Consists of a variety of assessment methods that can be formal and informal.
Helps educators to make decisions on whether or not the learner is promoted to the next grade. | Informs the learning process through which learning outcomes are acquired.
Occurs at the end of the learning process at predetermined dates and times. | Occurs during the learning process when it is considered necessary.
Is mainly norm referenced. | Is mainly criterion referenced.
Provides isolated marks or percentages to show learners’ development. | Provides information in context as feedback on learners’ development.

(Le Grange & Reddy, 1998:11)

2.5 OUTCOMES-BASED EDUCATION AND CONTINUOUS ASSESSMENT

An outcomes-based curriculum views the learning and teaching process differently from

the traditional curriculum. Knowledge is not seen as being transferred intact from the

educator to the learner. Instead, knowledge is seen as being constructed in the mind of the

learner. Each learner brings his or her own prior knowledge and experiences to any

learning situation (Le Grange & Reddy, 1998:6). It is, thus, essential for assessment to be

learner-centered.


According to Lubisi, Wedekind, Parker and Gultig (1997:14), a central feature in

outcomes-based learning is the achievement of certain predetermined outcomes. The

main role of assessment, therefore, is to determine whether or not learners have achieved

these outcomes. Lubisi et al., (1997:14) maintain that in outcomes-based assessment, a

learner’s progress is measured against predetermined criteria for achieving learning

outcomes, rather than against other learners’ performances. Every learner who meets the

predetermined criteria for achieving learning outcomes receives credits, regardless of

how well other learners have performed. Thus, the emphasis in outcomes-based

assessment is criterion referenced rather than norm referenced.

Lubisi et al., (1997:15) further state that assessment plays a vital role in the continuous

monitoring of a learner’s progress towards the achievement of predetermined outcomes,

and also provides educators with information about problems experienced by learners at

given moments in the learning process. In this sense, outcomes-based assessment is

continuous and formative, and could be educator, peer or self-driven (Lubisi et al.,

1997:15).

2.6 STRATEGIES FOR CONTINUOUS ASSESSMENT

Unlike traditional assessment, which only involves an educator in a learner’s assessment,

assessment in outcomes-based education involves more than one assessor (Le Grange &

Reddy, 1998:19). Continuous assessment, according to Le Grange and Reddy (1998:19),

includes judgements made about a learner’s performance by an educator (educator

assessment), the learner him/herself (self assessment) and other learners (peer

assessment). According to Kramer (1999: 42), a learner needs to be actively involved in

the assessment process, instead of being a passive passenger in a process driven by the

educator. Kramer (1999:42) also argues that, if learners are to be involved in the learning

process, they must be actively involved in assessment.


2.6.1 Self assessment

According to Kramer (1999:42), self assessment occurs when a learner is involved in

assessing his or her own progress. An example of self assessment would be when an

educator asks a learner to select his or her best English letter and state the reasons for the

selection (Le Grange & Reddy, 1998:19). This, according to Le Grange and Reddy

(1998:19), allows the learner to be engaged in self-reflection, encourages the learner to

take responsibility for his or her own learning and also enables the educator to be aware of

what the learner values as important, so that the educator can supply the learner with

more meaningful feedback. Self assessment needs to be used during, rather than at the

end of the teaching-learning process of a particular learning unit, so that learners can

have an opportunity to reflect on the work while it is in progress and apply what they

learn in practice while it is still relevant (Brooks, 2002:72). Brooks (2002:68) asserts that

effective assessment plays an important role in ensuring that learners become better

learners, have more self-awareness and deepen their insight into the assessment process.

2.6.2 Peer assessment

Peer assessment involves learners in assessing each other’s progress (Kramer, 1999:42).

According to Brooks (2002:73), peer assessment involves learners assessing the work of

their contemporaries rather than their own, and learners often do it in pairs or group

activities so that they can benefit from sharing ideas and insights. Le Grange and Reddy

(1998:19) state that peer assessment can either be formal or informal. Formal peer

assessment may occur when a group of learners work together on a class activity and

learners are asked to assess one another, and the criteria used to assess are discussed

before the formal assessment (Le Grange & Reddy, 1998:19). Informal peer assessment

may be in the form of informal verbal comments from other learners in a group (Le

Grange & Reddy, 1998:19). According to Le Grange and Reddy (1998:19), these verbal

remarks from other group members may prompt the learner to rethink and reassess his or

her original idea, and in this way peer assessment can contribute meaningfully to the

learner’s learning process.


Though it is essential for learners to be actively involved in the assessment process,

Kramer (1999:43) argues that it is equally important for educators to clearly state what

the standards of achievement are and what learners need to do to satisfy the conditions

for achievement. This enables learners, educators, peers and others to be in full

agreement about the outcomes and know the evidence that learners need to present to

show the achievement of outcomes (Kramer, 1999:43).

2.6.3 Projects

According to Le Grange and Reddy (1998:21), a project is a long-term task which may

either be done by learners individually or in groups. They further add that a project

comprises several activities which depend on the targeted outcomes of the project. These

activities may include the making of objects or models, identifying and attempting to

solve problems, experimenting or presenting experiments and presenting information

collected from other sources.

Le Grange and Reddy (1998:21) maintain that it is vital for a project to have clear

intended outcomes, and it should also be structured to ensure that each of the outcomes

can be assessed. When assessing a project, it is essential not to concentrate on the

outcome only, but to also focus on the process and circumstances (Le Grange & Reddy,

1998:21).

According to Le Grange and Reddy (1998:21), the following guidelines should be borne

in mind when asking learners to carry out a project:

The educator should give clear and detailed instructions.

Both process and content should be assessed.

Both content and end product should be assessed.

Educators should make learners aware of what will be assessed and how that will

be assessed.


It is important for educators to remember that learners often have unequal access

to resource materials.

Educators should emphasise due dates and constantly remind learners of these

dates.

If educators make assessment criteria clear at the beginning of a project, self assessment

and peer assessment (if it is a group project) occur automatically, since learners will

constantly be gauging their progress against the criteria to check whether they are

meeting the requirements and, if not, to see how they can amend their work in order to

meet the criteria (Le Grange & Reddy, 1998:22).

2.6.4 Portfolios

According to Martin-Kniep (in Maree & Fraser, 2004:121), a portfolio is a purposeful

collection of learners’ work that demonstrates the learners’ efforts, progress and

achievements in one or more areas. The collection must include participation of learners

in the selection of content, the selection criteria, the criteria for assessing merit and

evidence of learners’ self reflection.

A portfolio is generally considered to be a file or folder comprising different samples of a

learner’s work (Le Grange & Reddy, 1998:23). The work may include written

assignments, corrections to assignments, sketches, photographs, graphs, charts, models

and art work.

Du Toit and Vandeyar (in Maree & Fraser, 2004:123) identify the following reasons for

implementing continuous assessment:

To tap learners’ knowledge and capabilities to a greater degree.

To investigate learners’ learning and production processes.

To align teaching and assessment emphases.

To examine learners’ functioning in real-life situations.


To give continuous developmental feedback.

To encourage learners to be actively involved in and be responsible for learning.

To track the progress of learners in a multidimensional way.

To provide an opportunity for educators, learners and parents to communicate

about the learning that is taking place.

A portfolio, according to Le Grange and Reddy (1998:23), is a valuable strategy for

assessing the development of a learner’s progress in certain outcomes. An educator

selects a sample of work that reflects a learner’s competence in a particular skill or

outcome and places it in a portfolio at the beginning of the school year (Le Grange &

Reddy, 1998:23). The educator’s assessment of this sample should also be included in the

portfolio. At a later stage, the educator selects and assesses samples that reflect the

learner’s growing competence in the same outcome during the course of learning to also

form part of the portfolio. The selected samples of work will then be compared so that the

educator and the learner are able to assess the learner’s development towards achieving

the desired learning outcomes. According to Le Grange and Reddy (1998:23), it is

essential for assessment of samples of work to also include both educator assessment and

self assessment.

The most important trend in assessment, according to Kramer (1999:44), is to realise that

assessment is not a single action, but rather a planned series of different events and actions

throughout the learning process that aims to help an educator make decisions about the

learner’s progress. It is important for educators to use various assessment techniques that

assist learning and give information. According to Kramer (1999:44), it is also essential

for educators to be conscious of the need to seek improvement and avoid both

complacency and habit as the major influences on assessment.


2.7 CONCLUSION

This chapter covered the literature study on different views of assessment and provided

various types of assessment methods. The chapter also included a discussion on

continuous assessment, various strategies for implementing continuous assessment, as

well as a comparison between traditional assessment methods and continuous assessment.

Continuous assessment appears to have more advantages than disadvantages. This could

be due to the fact that CASS informs and guides the teaching-learning process, thereby

paving the way for effective teaching and learning. It is, therefore, imperative for

educators to be well informed about the correct implementation of CASS, in order to

ensure the attainment of the desired results. In chapter 3, the description of various

research methods and techniques used in this study will be presented.


CHAPTER 3 - RESEARCH METHODOLOGY

3.1 INTRODUCTION

Chapter 2 focused on various views that different authors have of assessment and,

especially, continuous assessment. This chapter clarifies the methodology used to

conduct this research.

The main aim of this chapter is to outline the research methods and provide the rationale

behind the methods used. Chapter 3 outlines how this study was conducted and what

strategies were utilised to ensure credibility of the data gathered.

The researcher used both qualitative and quantitative approaches in this study. What

follows is a discussion of both approaches and how the researcher used them in this

study.

3.2 QUANTITATIVE AND QUALITATIVE RESEARCH APPROACHES

Charles and Mertler (2002:30) assert that research that depends on narrative information

is referred to as qualitative research, whereas research that relies on numerical information

is called quantitative research.

According to Neuman (2000:122), quantitative researchers focus on issues of design,

measurement, and sampling, because their deductive approach stresses detailed planning

before data collection and analysis. Qualitative researchers, on the other hand, pay more

attention to issues of the richness, texture, and feeling of raw data since their inductive

approach stresses developing insights and generalisations from the data collected

(Neuman, 2000:122).


Cohen, Manion and Morrison (2000:95) state that, for quantitative data, a precise

sample size can be calculated according to the level of accuracy and the level

of probability that the researcher requires in his or her work.
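Cohen et al. do not reproduce the calculation here, but the standard formula for estimating a population proportion, n = z²p(1−p)/e², illustrates how the required accuracy and probability levels determine sample size. This is a minimal sketch; the 95% confidence level and 5% margin of error are illustrative assumptions, not figures from this study.

```python
import math

def sample_size(confidence_z: float, margin_of_error: float, p: float = 0.5) -> int:
    """Minimum sample size for estimating a population proportion.

    confidence_z: z-score for the desired confidence level (1.96 for 95%).
    margin_of_error: acceptable deviation from the true proportion (e.g. 0.05).
    p: anticipated proportion; 0.5 gives the most conservative (largest) size.
    """
    n = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n)

# Illustrative values: 95% confidence with a 5% margin of error.
print(sample_size(1.96, 0.05))  # 385
```

Tightening the margin of error to 3% roughly triples the required sample, which is why the accuracy level the researcher chooses drives the sampling plan.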

Neuman (2000:122) also points out that quantitative researchers stress precise measuring

of variables and testing of hypotheses that are linked to general causal explanations.

Conversely, qualitative researchers rely, to a larger extent, on interpretive or critical

social science.

Table 3.1 illustrates the differences between quantitative and qualitative research as

adapted from Neuman (2000:123).

Table 3.1 Differences between Quantitative and Qualitative Research

Quantitative | Qualitative
Tests hypotheses that the researcher begins with. | Captures and discovers meaning once the researcher becomes immersed in the data.
Concepts are in the form of distinct variables. | Concepts are in the form of themes, motifs, generalisations, and taxonomies.
Measures are systematically created before data collection and are standardised. | Measures are created in an ad hoc manner and are often specific to the individual setting or researcher.
Data are in the form of numbers from precise measurements. | Data are in the form of words and images from documents, observations, and transcripts.
Theory is, to a larger extent, causal and deductive. | Theory can be causal or non-causal and is largely inductive.
Research procedures are standard, and replication is assumed. | Research procedures are particular, and there is seldom replication.
Analysis takes place by using statistics, tables, or charts and discussing how what they show is related to hypotheses. | Analysis occurs by extracting themes or generalisations from evidence and organising data to present a coherent, consistent picture.

3.3 RESEARCH DESIGN

According to Charles and Mertler (2002:384), research design refers to the overall,

detailed plan that shows how a researcher intends to obtain, analyse, and interpret data.

Research design, according to Cohen et al., (2000:73), is governed by the notion of

fitness of purpose, and the purpose of the research determines the methodology and

research design.

3.3.1 Population and Sampling

3.3.1.1 Population

Anderson and Arsenault (1998:254) define population as the whole group of people or set

of objects, including those not in the research study. A population, according to Charles

and Mertler (2002:45), includes all the individuals within certain descriptive parameters,

such as those of location, age or sex.

When defining the population, a researcher specifies the section being sampled, the

geographical location, and the temporal boundaries of the population (Neuman,

2003:216). Neuman (2003:216) also adds that a researcher’s target population is a

particular pool of individuals or cases that he or she wants to study.

In this study, the population consisted of first year learners registered for the 2004

academic year at the Doornfontein campus of the University of Johannesburg. The


learners were doing Communication Skills as a continuous assessment subject, and were

from the following departments: Civil Engineering, Radiography and Somatology. Four

educators who taught Communication Skills as a CASS subject at the University of

Johannesburg also formed part of the population.

3.3.1.2 Sampling

According to Cohen et al., (2000:92), the quality of a research study relies not only

on the appropriateness of methodology and instrumentation, but also on the suitability of

the sampling strategy that the researcher has adopted. Bailey (1994:83) defines a sample

as a segment of the entire population. It is essential for the sample to always be viewed as

an approximation of the whole and not as a whole in itself. Sampling, according to

Fraenkel and Wallen (1996:111), refers to a process in which a researcher selects

individuals to participate in a research study.

Quantitative sampling

Neuman (2000:195) states that quantitative researchers often use probability sampling.

Bailey (1994:89) points out that in probability sampling, the likelihood of selecting each

respondent is known. The main advantage of probability sampling is that it enables

researchers to indicate the probability with which sample results deviate in differing

degrees from the corresponding population values (Welman & Kruger, 2001:47).

In this study, the researcher used simple random sampling as a probability sampling

technique. In simple random sampling, all members of the research population have an

equal chance of being selected, and the likelihood of a member of the population being

selected is not at all affected by the selection of other members of the population (Cohen

et al., 2000:92). McBurney (2001:249) advises that simple random sampling should be

used when a researcher believes that the population is relatively homogeneous with

regard to the questions of interest. In this study, the researcher used simple random


sampling to select 156 learners spread over three classes (Civil Engineering, Radiography

and Somatology).
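The selection technique described above can be sketched as follows. The class rosters and per-class draw sizes are hypothetical, since the actual class lists of the study are not reproduced here; only the technique (every member having an equal chance of selection) is illustrated.

```python
import random

# Hypothetical class rosters; the real study drew 156 learners across
# Civil Engineering, Radiography and Somatology.
rosters = {
    "Civil Engineering": [f"CE-{i:03d}" for i in range(1, 81)],
    "Radiography": [f"RA-{i:03d}" for i in range(1, 71)],
    "Somatology": [f"SO-{i:03d}" for i in range(1, 61)],
}

def draw_simple_random_sample(population, n, seed=None):
    """Select n members without replacement, each with an equal chance."""
    rng = random.Random(seed)
    return rng.sample(population, n)

# Draw half of each class; these proportions are illustrative only.
sample = []
for class_name, roster in rosters.items():
    sample.extend(draw_simple_random_sample(roster, len(roster) // 2, seed=42))

print(len(sample))  # 105
```

Because `random.sample` draws without replacement, no learner can be selected twice, and one member's selection does not change another's chance within the draw, which is the defining property Cohen et al. attribute to simple random sampling.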

Qualitative sampling

According to Neuman (2000:196), unlike quantitative researchers who tend to use

probability sampling, qualitative researchers mainly use non-probability sampling. Cohen

et al., (2000:102) state that, in non-probability sampling, the researcher targets a

particular group, knowing very well that it does not represent the wider population; it

simply represents itself. Bailey (1994:94) points out that the major drawback of non-probability sampling is that, since the chances that a member of the population will be

selected are not known, the researcher generally cannot claim that his or her sample is

representative of the wider population. Non-probability sampling, as Welman and Kruger

(2001:47) observe, is often used for reasons of convenience and economy.

In this study, the researcher used purposive sampling as a non-probability sampling

technique. According to Bailey (1994:96), in purposive sampling, the researcher uses his

or her judgement to select those respondents who best meet the purposes of the study.

Cohen et al., (2000:103) state that the researcher who uses purposive sampling selects the

sample for a specific purpose.

The researcher used purposive sampling in this study, since all the four educators

interviewed met the purposes of the study; they were all involved in the teaching of

Communication Skills as a CASS subject at the Doornfontein campus of the University

of Johannesburg.

3.3.2 Data collection

In this study, the researcher used triangulation of method by utilising both quantitative

and qualitative data collection methods. Triangulation, according to Cohen et al.,

(2000:112), refers to the use of more than one data collection method in the study of


some aspect of human behaviour. According to Neuman (2000:125), triangulation of

method is the type of triangulation that entails mixing both qualitative and quantitative

data collection methods.

3.3.2.1 Quantitative data collection

Quantitative data collection involves gathering information in the form of numbers

(Neuman, 2003:542). In this study, questionnaires were used to collect data from

learners.

Questionnaires

A questionnaire is a written document in survey research comprising a set of questions

handed out to respondents or used by a researcher to ask questions and record the

answers (Neuman, 2000:542). According to Cohen et al., (2000:245), a questionnaire is a

widely utilised and useful instrument for data collection, providing structured and mostly

numerical information, being able to be administered even without the presence of the

researcher, and often being relatively straightforward to analyse.

In this study, a questionnaire was used to collect data from the learners. It mainly

aimed to investigate the learners’ experiences of CASS in the subject Communication

Skills at the University of Johannesburg.

The questionnaire included:

Statements to which respondents had to respond by selecting from the following

options: 1-4 and 5 or more.

Statements to which respondents had to respond by choosing from the following

options: always, sometimes and never.


Some of the questions were open questions, which gave respondents the

opportunity to air their views.

See annexure B for a questionnaire distributed to learners.
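Responses to closed items of this kind are typically tallied before analysis. This is a minimal sketch with invented responses, not data from this study:

```python
from collections import Counter

# Invented responses to one closed item ("always / sometimes / never").
responses = ["always", "sometimes", "never", "sometimes", "always", "sometimes"]

counts = Counter(responses)
total = len(responses)
# Percentage of respondents choosing each option, to one decimal place.
percentages = {option: round(100 * n / total, 1) for option, n in counts.items()}

print(counts["sometimes"])    # 3
print(percentages["always"])  # 33.3
```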

Cohen et al., (2000:245) advise that, since a questionnaire will always be an intrusion

into the respondent’s life, it is unethical to force respondents into completing it. The

researcher might strongly encourage them, but it is entirely up to the respondents to

decide whether to become involved and when to withdraw from the study (Cohen et al.,

2000:245).

3.3.2.2 Qualitative data collection

Qualitative data collection involves gathering information in the form of words, pictures,

sounds, visual images, or objects (Neuman, 2003:542). In this study interviews were

conducted.

Interviews

Interviews are one of the most commonly known forms of qualitative research (Mason,

2002:63). Huysamen (2001:144) points out that, when information is to be gathered by

means of personal interviews, interviewers visit the respondents at home or at their workplace.

Cohen et al., (2000:268) identify the following three purposes of a research interview:

It may be used as the main method of collecting data having a direct bearing on

the research objectives.

It may be used to test hypotheses or to suggest new ones; or as an explanatory tool

to assist in identifying variables and relationships.

It may be used in combination with other methods in a research study.


In this study, interviews were conducted with four educators who were teaching

Communication Skills as a CASS subject at the University of Johannesburg, in order to

find out about their views on and experiences of teaching Communication Skills as a

CASS subject. See annexure C for questions posed to the educators.

Huysamen (2001:145) adds that it is essential for interviewers to dress in more or less the

same way as the interviewees. Obviously, it would not be acceptable if an interviewer

arrived at the State President’s office wearing shorts and a torn shirt.

Table 3.2 summarises the advantages and disadvantages of interviews and questionnaires

as adapted from Cohen et al., (2000:269).

Table 3.2 Summary of advantages and disadvantages of interviews and questionnaires

Consideration | Interview | Questionnaire
Personal need to gather information. | Requires interviewers. | Requires a secretary.
Major expense. | Payment to interviewers. | Postage and printing.
Opportunities for response keying. | Extensive. | Limited.
Opportunities for asking. | Extensive. | Limited.
Opportunities for probing. | Possible. | Difficult.
Relative magnitude of data reduction. | Great (because of coding). | Mainly limited to rostering.
Typically, the number of respondents who can be reached. | Limited. | Extensive.
Rate of return. | Good. | Poor.
Sources of error. | Interviewer, instrument, coding and sample. | Limited to instrument and sample.
Overall reliability. | Quite limited. | Fair.
Emphasis on writing skill. | Limited. | Extensive.

In this research, structured interviews were conducted with four educators who were

teaching Communication Skills as a CASS subject at UJ, in order to find out about their

views on and experiences of teaching the subject through CASS.

• Unstructured interviews

When conducting an unstructured interview, the interviewer simply suggests the general

theme of discussion and poses further questions as dictated by the spontaneous nature of

the interaction between interviewer and interviewee (Huysamen, 2001:174). Huysamen

(2001:174) adds that the difference between structured and unstructured interviews is that

the latter are not restricted to a previously compiled set list of questions. In unstructured

interviews, an interviewer attempts to understand how participants experience their life-worlds and how they make sense of things that happen around them, rather than relying on the interviewer's own interpretations and speculative explanations of those life-worlds.

According to Mason (2002:62), unstructured interviews have their own character, and

even though there may be large differences in style and tradition, they all share the

following central features:

The dialogue is characterised by an interactional exchange. These interviews

include one-to-one interactions, larger group interviews or focus groups, and may

happen face to face, over the telephone or the internet.


The style is relatively informal.

The interviews have a flexible structure and allow both interviewer and

participant to develop unexpected themes.

The interview ensures that the relevant contexts are brought into focus in order for

situated knowledge to be produced.

Unstructured interviews require a great deal of intellectual preparation, and the researcher

also needs to plan for and deal with the social dynamics (Mason, 2002:68).

The following steps give an overview of a procedure which might be used to plan and

prepare intellectually for unstructured interviews, as adapted from Mason (2002:68).

Step 1-Big research questions

Here a researcher lists or assembles the main research questions which the study is

designed to explore.

Step 2- Mini research questions

Here the researcher breaks down or subdivides main research questions into smaller or

‘mini’ research questions.

Step 3- Possible interview topics and questions

The researchers convert their big and mini questions into possible interview topics. Then,

from these topics, they develop questions that might be used during interviews.

Step 4- Cross-reference

The researcher needs to cross-reference all the levels so that he or she knows that each

big research question has a set of corresponding mini-research questions, and each of

these questions has a set of ideas about interview topics and questions. It is essential for

researchers to ensure that cross-referencing works in reverse in order that their interview

topics and questions help them to answer their big research questions.

Step 5- Loose interview structure or format

Here the researcher starts developing some ideas about a loose interview structure or format.

Step 6- Standardised questions or sections


Researchers need to work out whether they want to include any standardised questions or

sections in their interviews.

Step 7- Cross-reference

Researchers need to cross-check that their format and any standardised questions or

sections adequately and appropriately cover their possible topics and questions.
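Steps 1 to 4 of this procedure can be sketched as a simple cross-referenced structure. The research questions and interview topics below are invented for illustration; only the cross-referencing check itself reflects Mason's procedure.

```python
# Hypothetical question hierarchy (Steps 1-3): big research questions map to
# mini research questions, and each mini question maps to interview topics.
plan = {
    "How do educators experience CASS?": {
        "What assessment strategies do they use?": ["marking load", "feedback methods"],
        "What problems do they encounter?": ["class size", "time constraints"],
    },
    "How do learners experience CASS?": {
        "Do learners find the feedback useful?": ["frequency of feedback"],
    },
}

def cross_reference(plan):
    """Step 4: verify every big question has mini questions, and every
    mini question has at least one interview topic."""
    for big, minis in plan.items():
        assert minis, f"no mini questions for: {big}"
        for mini, topics in minis.items():
            assert topics, f"no interview topics for: {mini}"
    return True

print(cross_reference(plan))  # True
```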

3.3.2.3 Literature study

Neuman (2003:96) asserts that a literature review of the research question is an important

early step in the research process, regardless of the approach a researcher adopts.

According to Huysamen (2001:191), it is essential for researchers to be up to speed with

research reported on their specific topics until at least the time that the research report is

concluded. A literature review is based on the assumption that knowledge accumulates

and that people learn from and build on the existing knowledge generated by others

(Neuman, 2003:96).

Neuman (2003:96) identifies the following four goals of a literature review:

To learn from others and stimulate new ideas.

To integrate and summarise what is known in an area.

To show familiarity with a body of knowledge and establish credibility.

To show the path of earlier research and how the current project is linked to it.

In this study, the literature was reviewed to determine the views of different

authors on continuous assessment.

3.3.3 Data analysis

Data analysis, according to Creswell (1994:153), requires the researcher to be

comfortable with developing categories and making comparisons and contrasts. In this


study, the researcher used both qualitative and quantitative data analysis methods, since

both methods were used to collect data.

3.3.3.1 Qualitative data analysis

Qualitative data analysis is a process of understanding and interpreting the contents of

qualitative data and finding commonalities in it. In order for researchers to make the

kinds of links they need to analyse and interpret data, they need to repeatedly read the

data until they really know and live their data. The process of analysing and interpreting

data can be tedious, time-consuming and necessarily iterative (Gay & Airasian,

2000:244).

Since analysis of qualitative data occurs simultaneously with data collection, the first step

in data analysis is to manage the data so that they can be studied. Managing data means

organising the collected data. The researcher has to ensure that he or she has dated,

organised and sequenced all field notes, recorded tapes, computer files, observer’s

comments, transcripts, memos and reflections.

According to Gay and Airasian (2000:240), the cyclical process of data analysis (Figure

3.1) focuses on:

Reading- becoming familiar with the data and identifying main themes in them.

Describing- examining the data in-depth to provide detailed descriptions of the

setting, participants and activities.

Classifying- categorising and coding pieces of data and physically grouping them

into themes.

Interpreting- interpreting and synthesising the organised data into general

conclusions or understandings.


[Figure 3.1 depicts these four steps as a cycle: Reading → Describing → Classifying → Interpreting.]

Figure 3.1 Steps in analysing data (Gay & Airasian, 2000:240)

3.3.3.2 Quantitative data analysis

Neuman (2003:331) states that, in quantitative data analysis, a researcher provides the

charts, graphs, and tables to give readers a condensed picture of the data. The charts and

tables allow the readers to see the evidence gathered by the researcher and learn for

themselves what is in it.

In this study, the main statistical method which was used to analyse data was cross-

tabulation. Frequency tables were also used to illustrate the overall response of learners.
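As an illustration of these two techniques, the following sketch shows how a frequency table and a cross-tabulation with row percentages can be produced from questionnaire responses using the pandas library. The variable names and sample data here are hypothetical, not the study's actual dataset.

```python
import pandas as pd

# Hypothetical questionnaire responses (not the study's actual data):
# each row records a learner's course and one Likert-style answer.
responses = pd.DataFrame({
    "course": ["Civil Engineering", "Somatology", "Radiography",
               "Civil Engineering", "Radiography", "Somatology"],
    "answer": ["Always", "Sometimes", "Always",
               "Always", "Never", "Sometimes"],
})

# Frequency table: overall response counts and percentages.
freq = responses["answer"].value_counts()
percent = responses["answer"].value_counts(normalize=True) * 100

# Cross-tabulation: answers broken down by course,
# with row percentages ("% within course").
crosstab = pd.crosstab(responses["course"], responses["answer"],
                       normalize="index") * 100

print(freq)
print(crosstab.round(1))
```

The frequency table summarises the sample as a whole, while the cross-tabulation makes it possible to compare the response patterns of the different course groups.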


3.4 ESTABLISHING TRUSTWORTHINESS

3.4.1 Qualitative data

In order to establish trustworthiness in qualitative research, Guba and Lincoln (in Koch,

1994:976) recommend the criteria of credibility, transferability and dependability.

3.4.1.1 Credibility

Researchers enhance credibility when they describe and interpret their experiences. In

order to enhance the credibility of this study, the researcher has stored all the

documentation, i.e. notes and tape recordings of qualitative data collection, and will

make them available to anyone who requires them to verify the credibility of the study. Relevant

educators have also been approached to read the research report and discuss the

construction derived from the analysis.

3.4.1.2 Transferability

Guba and Lincoln (in Koch, 1994:977) point out that transferability depends upon the

degree of similarity between the contexts. Many authors prefer to use the term

“fittingness”. Koch (1994:977) argues that a study meets the criterion of fittingness when

its findings can “fit” into a context outside the study situation and when its audience

views its findings as meaningful and applicable in terms of their own experiences. The

aim of this study was to explore how learners and educators experienced Communication

Skills as a CASS subject. However, as the study was only conducted with educators and

learners involved in the subject Communication Skills at the University of Johannesburg,

to enhance transferability, the researcher intends to make this research report available to

all educators engaged in the implementation of CASS.


The researcher has ensured that this study is applicable to different subjects offered at various

educational institutions by interpreting the collected data as accurately as possible, and by

using language that is understood by all educators.

3.4.1.3 Dependability

Dependability as an aspect of trustworthiness means that the process of the study is

consistent and reasonable over time and across researchers and methods (Miles &

Huberman, 1994:3). In this study, interviews were conducted.

After the interviews, the audiotapes were transcribed verbatim. Among other things, this

provides an unedited copy of the interviews for the purpose of analysis. This

documentation leaves evidence for others who can “reconstruct” the process to reach a

conclusion.

3.4.2 Quantitative data

Trustworthiness in quantitative inquiry can be established by ensuring validity and

reliability.

3.4.2.1 Validity

According to Cohen et al. (2000:105), validity in quantitative data might be improved

through careful sampling, appropriate instrumentation and appropriate statistical

treatments of the data. In this study, the researcher ensured validity through the

following:

Ensuring that there were adequate resources for the required research to be

conducted.

Selecting appropriate methodology for answering the research questions.

Selecting appropriate instrumentation for collecting the type of data required.


Using an appropriate sample.

3.4.2.2 Reliability

According to Cohen et al. (2000:118), in order for quantitative data to be reliable,

instrumentation, data and findings should be controllable, predictable, consistent and

replicable. One of the most important issues in considering the reliability of questionnaire

surveys is that of sampling (Cohen et al., 2000:118). Cohen et al. (2000:129) also add

that an unrepresentative, skewed sample can easily distort the data, and that very small

samples prohibit statistical analysis.

The researcher ensured reliability through:

Ensuring that instructions were clear and unambiguous.

Ensuring that the language used in questionnaires was easily understandable.

Ensuring that the questionnaire was readable.

Motivating the learners to be honest by stressing the importance and benefits of

completing the questionnaire.

Ensuring that the questionnaire was not too long or difficult to complete.

The researcher also conducted a pilot study in order to identify and rectify any

shortcomings.

3.5 PILOT STUDY

Huysamen (2001:97) asserts that it is important for a researcher to conduct a pilot study

on a limited number of participants from the research population. The pilot study,

according to Huysamen (2001:97), helps the researcher to investigate the feasibility of

the proposed research study and to detect possible flaws in the measurement procedures,

such as ambiguous instructions and inadequate time limits.


In this study, the researcher conducted both qualitative and quantitative pilot studies.

3.5.1 Quantitative pilot study

Before distributing questionnaires to the whole sample, the researcher thought it best to

conduct a pilot study. Questionnaires were distributed to selected members of the sample.

The researcher used simple random sampling to select participants for the pilot study.

The pilot study afforded the researcher the opportunity to rephrase instructions that were

unclear and also clarify questions that were ambiguous and not specific enough.
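Simple random sampling of this kind can be sketched as follows, using Python's standard library. This is a minimal illustration: the pool of learner identifiers and the pilot-group size are hypothetical assumptions, not figures from the study.

```python
import random

# Hypothetical pool of learner identifiers making up the sampling frame.
sample_frame = [f"learner_{i:03d}" for i in range(1, 157)]  # 156 learners

# Draw a simple random sample for the pilot: every learner has an
# equal chance of selection, and selection is without replacement.
random.seed(42)  # fixed seed so the draw is reproducible
pilot_group = random.sample(sample_frame, k=10)

print(pilot_group)
```

Because `random.sample` draws without replacement, no learner can appear in the pilot group twice, and each member of the frame is equally likely to be chosen.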

3.5.2 Qualitative pilot study

The researcher thought it best to conduct a pre-interview with one of the four educators in

the sample. The interview enabled the researcher to test the interview questions and

rephrase questions that were unclear. The pre-interview further afforded the researcher

the opportunity to test the audio tape recorder that was used and then make the necessary

adjustments.

The time spent on the qualitative pilot study enabled the researcher to estimate the

duration of each interview, and this information was crucial for scheduling

interviews with participants.

3.6 CONCLUSION

Chapter 3 outlined the methodology used in this study, as well as the strategies utilised to

ensure credibility of the data gathered. Both quantitative and qualitative approaches play

an important role in educational research. The most obvious distinction between the two

approaches is the form of data presentation. The approach that a researcher chooses

depends on the topic he or she selects, the purpose of the research and intended use of

study results, as well as the researcher's own assumptions and beliefs (Neuman,


2003:164). Both qualitative and quantitative researchers need to establish trustworthiness

in order to validate their research reports.


CHAPTER 4 - RESULTS AND FINDINGS

4.1 INTRODUCTION

Chapter 3 outlined the methodology used in this research. As has already been

mentioned, both qualitative and quantitative research methods were utilised. This chapter

focuses on the results of both the qualitative and quantitative data.

Qualitative data was obtained from unstructured interviews that were conducted with

educators on a one-to-one basis. The data was processed by transcribing all the interviews

and analysing the responses.

During quantitative data collection, questionnaires were distributed to learners. The

questionnaires were analysed statistically, using frequency tables and cross-tabulations.

The following is a discussion of results from both the qualitative and quantitative data.

4.2 QUALITATIVE DATA

Interviews were conducted with four educators who taught Communication Skills as a

CASS subject to three diploma groups - Civil Engineering, Somatology and

Radiography. The interviews were recorded using a voice recorder, after which they were

transcribed (see annexure A). The questions posed to the educators were then organised

into themes so that their responses could be properly analysed. The following is a full

analysis of the interviews.


4.2.1 CASS training for Communication Skills educators

All the educators interviewed indicated that they had not received adequate training in

CASS. Two of the four educators said that they had not received any training in CASS.

One of the other two had only attended one introductory workshop on CASS and the

other educator had attended a number of workshops on CASS before working at the

University of Johannesburg.

4.2.2 Implementation of CASS

Although CASS had to be implemented, all four educators felt that they were not

adequately prepared to implement it properly. They said that their limited knowledge of

implementing CASS was mainly based on their teaching experience and that it (CASS

knowledge) was inadequate.

4.2.3 Problems encountered in the implementation of CASS

Each of the educators interviewed identified the number of learners per class as the main

problem in the implementation of CASS. They felt that the classes were too big and,

therefore, not conducive to CASS. The class sizes mentioned ranged from 40 to over 60

learners.

The educators indicated that the large numbers of learners made it difficult for them to

give individual attention to learners. One educator also mentioned that it took a long time

for her to identify learners who had had difficulty understanding a particular section of

work.

It was also evident from the educators’ responses that group work activities in the

classroom were fraught with problems. They said that the large numbers of learners

resulted in either too many groups or too many learners in a group. They felt that these

factors made it difficult to monitor and control group work activities.


4.2.4 Learning through CASS

Although the Communication Skills educators were not at ease with the implementation

of CASS, they identified the following advantages of CASS:

Continuous assessment encourages learners to work steadily throughout the year.

CASS gives learners the opportunity to master smaller pieces of work at a time.

As a result of CASS, educators find it easier to rethink and rework sections that

have not been properly understood or taught.

Continuous assessment discourages rote learning.

CASS instils a consistent work ethic.

On the whole, the educators felt that learners benefit significantly from the CASS system

because they need to achieve the outcomes of a particular section before moving on to

other sections.

4.2.5 The main difference between learning through CASS and writing final exams

The main difference between the two methods, according to the educators interviewed, is

that continuous assessment encourages learners to work consistently throughout the term,

whereas the exam system encourages rote learning since learners study very hard only

towards the final exams.

Continuous assessment, according to the educators, allows learners to receive continuous

feedback.

Further, the educators also said that CASS allows them to teach and assess each section

of the syllabus properly, whereas the exam system assesses large sections of work at

once.


4.2.6 Ensuring that learners understand exactly what educators expect of them in

CASS

In order to ensure that learners understood exactly what was expected of them in CASS,

educators said that they:

Talked to learners and discussed the expectations with them.

Simply explained how CASS works and what is involved.

Laid out the structure and overview of the course.

Indicated the kinds of assessments, as well as how often these would take place.

Explained the dangers of not attending and working inconsistently throughout the

year.

4.2.7 Learners’ understanding of their roles in CASS

On the whole, the educators felt that learners did not properly understand what was

expected of them in continuous assessment.

The educators’ responses could be summarised as follows:

Some of the learners did not quite understand what was expected of them in

CASS, because not all departments at the university implemented CASS.

Some learners still thought along final exam lines.

Some learners still did not understand their roles even after these had been

explained by the educators.

Some learners thought it was just an easy way to pass.

Learners just went along with the flow.

Educators who taught Civil Engineering learners felt that the learners had a better

understanding of CASS than learners in other departments. They also said that the

learners seemed to receive the necessary support from the Civil Engineering Department.


4.2.8 Provision of timeous feedback on assessments

All the educators said that it was very difficult to provide timeous feedback. They cited

the following reasons:

Their teaching load was too heavy.

The class sizes were too big.

Educators also felt that, due to the large numbers, feedback was more general in nature

instead of being targeted specifically to individual learners.

4.2.9 Learners’ attendance and motivation

It was evident from the educators’ responses that learners’ attendance and motivation

were generally satisfactory, although some of the learners tended to drag their feet at the

beginning of the semester and only worked hard towards the end when they realised the

seriousness of the situation.

One educator indicated that she had built class attendance into her continuous assessment

calculations in order to ensure that learners attended regularly. Another educator

indicated that learners learning through CASS were not as motivated as those writing

final exams. In contrast, the other two educators who were teaching Communication

Skills to Civil Engineering learners felt that these particular learners were more motivated

and enthusiastic than their other classes.

4.2.10 How learners coped with CASS

Educators mentioned that learners generally coped satisfactorily with continuous

assessment.


It was also evident from the responses that Civil Engineering learners coped better and

were more enthusiastic than others. Some educators believed that this could have been

due to the fact that educators in the Civil Engineering Department supported their

learners throughout the programme by explaining right from the beginning how

continuous assessment worked and exactly what they expected from learners.

4.2.11 The move from final exams to CASS

The educators’ feelings regarding the move from final exams to continuous assessment

were generally positive, although different educators articulated the following mixed

feelings about the move to CASS:

Unlike CASS, the exam system could be artificial.

CASS was a softer option for learners because they were trying to get away with

doing as little as possible while still passing the subject.

Continuous assessment provided learners with constant feedback which, in turn,

guided teaching activities.

In CASS, learners work hard throughout the year instead of only during exam

time.

4.2.12 Learners’ attitude towards group work activities

The general feeling was that most learners were positive towards group work activities. It

was also evident from the responses that academically stronger learners were not that

keen on group work activities because they felt that they would be held back by

academically weaker learners. Educators felt that most of these stronger learners

perceived group work negatively.

As was mentioned earlier in this chapter, educators said that groups were difficult to

control due to large numbers of learners in a class. They felt that smaller numbers would

make it easier for them to control and monitor group work activities.


4.3 QUANTITATIVE DATA

It is important to remember that all the respondents were involved in Communication

Skills as a CASS subject at the University of Johannesburg. The main statistical method

which was used to analyse data was cross-tabulation. Frequency tables were also used to

illustrate the overall response of learners. The researcher used cross-tabulations to

compare the responses of learners from different courses/diploma groups.

4.3.1 Responses from learners

Table 4.1 shows the numbers and percentages of completed questionnaires that were

returned by learners doing Civil Engineering, Radiography and Somatology.

Table 4.1 Frequency distribution for learners who completed the questionnaires

Course              Frequency   Percent   Valid Percent   Cumulative Percent
Civil Engineering       65        41.7        41.7             41.7
Somatology              27        17.3        17.3             59.0
Radiography             64        41.0        41.0            100.0
Total                  156       100.0       100.0

Of the 156 completed questionnaires that were returned, 41.7% came from Civil

Engineering learners, 17.3% from Somatology learners, and 41.0% from Radiography

learners.

4.3.1.1 Definition of outcomes

Question: The educator clearly defines the outcomes for each lecture

(always/sometimes/never)


Table 4.2 illustrates the learners’ responses to the question about how often the educator

clearly defined the outcomes for each lecture.

Table 4.2 Definition of outcomes

The educator clearly defines the outcomes for each lecture
(count, with % within course in brackets)

Course              Always        Sometimes     Never        Total
Civil Engineering   40 (61.5%)    24 (36.9%)    1 (1.5%)      65 (100.0%)
Somatology           7 (26.9%)    18 (69.2%)    1 (3.8%)      26 (100.0%)
Radiography         38 (59.4%)    26 (40.6%)    0 (0.0%)      64 (100.0%)
Total               85 (54.8%)    68 (43.9%)    2 (1.3%)     155 (100.0%)

61.5% of Civil Engineering, 26.9% of Somatology, and 59.4% of Radiography learners

said that the educator always defined the outcomes for each lecture. 36.9% of Civil

Engineering, 69.2% of Somatology, and 40.6% of Radiography learners felt that

outcomes were sometimes clearly defined. Only 1.5% of Civil Engineering, 3.8% of

Somatology, and none of Radiography learners said that the outcomes were never clearly

defined by their educator. A total of 54.8% of learners indicated that their educators

always clearly defined the outcomes, 43.9% felt that the outcomes were sometimes

defined, and only 1.3% of learners said that their educators never defined the lecture

outcomes.

Most of the Civil Engineering and Radiography learners indicated that their educators

clearly defined the outcomes for each lecture, while a minority of Somatology learners


said so. On the whole, Somatology learners felt that the outcomes were not always clearly

defined.
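The "% within course" figures reported in these tables are simply each cell count divided by its row total. As a minimal arithmetic sketch, using the counts from Table 4.2:

```python
# Counts from Table 4.2 per course, in the order (Always, Sometimes, Never).
counts = {
    "Civil Engineering": [40, 24, 1],
    "Somatology":        [7, 18, 1],
    "Radiography":       [38, 26, 0],
}

def row_percentages(row):
    """Convert a row of counts into '% within course' values."""
    total = sum(row)
    return [round(100 * c / total, 1) for c in row]

within_course = {course: row_percentages(row) for course, row in counts.items()}
print(within_course)
```

Running this reproduces the row percentages reported above, e.g. 40 of the 65 Civil Engineering learners gives 61.5%.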

4.3.1.2 Different assessment methods

Question: Different assessment methods are used by the educator

(always/sometimes/never)

Table 4.3 illustrates the responses of learners to the question about how often different

assessment methods were used by their educators.

Table 4.3 Different assessment methods

Different assessment methods are used by the educator
(count, with % within course in brackets)

Course              Always         Sometimes      Total
Civil Engineering   58 (89.2%)      7 (10.8%)      65 (100.0%)
Somatology          19 (73.1%)      7 (26.9%)      26 (100.0%)
Radiography         57 (89.1%)      7 (10.9%)      64 (100.0%)
Total              134 (86.5%)     21 (13.5%)     155 (100.0%)

89.2% of Civil Engineering, 73.1% of Somatology, and 89.1% of Radiography learners

said that their educators always used different assessment methods. 10.8% of Civil

Engineering, 26.9% of Somatology, and 10.9% of Radiography learners felt that different

methods were sometimes used by their educators. None of the learners indicated that the

educators never used different assessment methods. A total of 86.5% of learners said that


the educators always used different assessment methods, whereas 13.5% observed that

this was sometimes the case.

Respondents from the three groups generally felt that different assessment methods were

used by their educators, although the response from Somatology learners was relatively

less positive.

4.3.1.3 Encouraging active participation from learners

Question: The educator invites us to actively participate during lectures

(always/sometimes/never)

Table 4.4 illustrates the learners’ responses to the question about how often their

educators invited them to participate actively during lectures.

Table 4.4 Encouraging active participation from learners

The educator invites us to actively participate during lectures
(count, with % within course in brackets)

Course              Always         Sometimes      Total
Civil Engineering   50 (76.9%)     15 (23.1%)      65 (100.0%)
Somatology          16 (61.5%)     10 (38.5%)      26 (100.0%)
Radiography         48 (75.0%)     16 (25.0%)      64 (100.0%)
Total              114 (73.5%)     41 (26.5%)     155 (100.0%)

76.9% of Civil Engineering, 61.5% of Somatology, and 75% of Radiography learners

indicated that their educators always invited them to participate actively during lectures.

23.1% of Civil Engineering, 38.5% of Somatology, and 25% of Radiography learners


said that they were sometimes invited by their educators to participate actively during

lectures. A total of 73.5% of learners felt that their educators always encouraged active

participation during lectures, and 26.5% felt that the learners’ active participation was

sometimes encouraged.

On the whole, all the groups responded positively to this question. There was, however,

some difference between Somatology and the other two groups, since Somatology

learners responded a little less positively: a higher percentage of Somatology learners

than Civil Engineering and Radiography learners said that their educator did not always

invite them to participate actively during lectures.

4.3.1.4 Involving learners in assessment

Question: The educator involves learners in assessment (always/sometimes/never)

Table 4.5 illustrates the learners’ responses to the question about how often their

educators involved them in assessment.

Table 4.5 Involving learners in assessment

The educator involves learners in assessment
(count, with % within course in brackets)

Course              Always        Sometimes     Never        Total
Civil Engineering   36 (55.4%)    28 (43.1%)     1 (1.5%)     65 (100.0%)
Somatology           8 (30.8%)    14 (53.8%)     4 (15.4%)    26 (100.0%)
Radiography         33 (51.6%)    23 (35.9%)     8 (12.5%)    64 (100.0%)
Total               77 (49.7%)    65 (41.9%)    13 (8.4%)    155 (100.0%)


55.4% of learners in Civil Engineering, 30.8% of those in Somatology, and 51.6% of

Radiography learners felt that their educators always involved them in assessment. 43.1%

of Civil Engineering, 53.8% of Somatology, and 35.9% of Radiography learners said that

they were sometimes involved in assessment. 1.5% of Civil Engineering learners, 15.4%

of Somatology learners, and 12.5% of Radiography learners observed that their educators

never involved them in assessment. A total of 49.7% of learners felt that their educators

always involved them in assessment, 41.9% said they were sometimes involved in

assessment, and only 8.4% indicated that their educators never involved them in

assessment.

Civil Engineering and Radiography learners, again, responded more positively than

Somatology learners. Overall, Civil Engineering learners responded slightly more

positively than the other two groups.

4.3.1.5 Provision of timeous feedback

Question: The educator gives us feedback on tests and assignments within a week

(always/sometimes/never)

Table 4.6 shows the responses of learners to the question as to whether their educators

gave them feedback on tests and assignments within a week.

Table 4.6 Provision of timeous feedback

The educator gives us feedback on tests and assignments within a week
(count, with % within course in brackets)

Course              Always        Sometimes     Never         Total
Civil Engineering   35 (53.8%)    29 (44.6%)     1 (1.5%)      65 (100.0%)
Somatology           6 (23.1%)     9 (34.6%)    11 (42.3%)     26 (100.0%)
Radiography         14 (21.9%)    36 (56.3%)    14 (21.9%)     64 (100.0%)
Total               55 (35.5%)    74 (47.7%)    26 (16.8%)    155 (100.0%)

53.8% of learners in Civil Engineering, 23.1% of those in Somatology, and 21.9% of

Radiography learners said that their educators always gave them feedback on tests and

assignments within a week. 44.6% of Civil Engineering learners, 34.6% of Somatology

learners, and 56.3% of Radiography learners felt that the educator sometimes gave

feedback within a week. 42.3% of Somatology learners, 21.9% of Radiography learners,

and only 1.5% of Civil Engineering learners indicated that their educators never gave

them feedback on tests and assignments within a week. A total of 35.5% of learners said

that the educator always gave them feedback within a week, 47.7% felt that this was

sometimes done, and 16.8% indicated that the feedback was never given within a week.

In the main, learners’ response to this question was not positive. However, Civil

Engineering learners responded more positively than Radiography and Somatology

learners.

4.3.1.6 Learning from lectures

Question: I learn a lot from my lectures (always/sometimes/never)

Table 4.7 illustrates learners’ responses to the question as to whether they learned a lot

from lectures.


Table 4.7 Learning from lectures

I learn a lot from my lectures
(count, with % within course in brackets)

Course              Always        Sometimes     Never        Total
Civil Engineering   46 (70.8%)    18 (27.7%)     1 (1.5%)     65 (100.0%)
Somatology          12 (46.2%)    14 (53.8%)     0 (0.0%)     26 (100.0%)
Radiography         37 (57.8%)    26 (40.6%)     1 (1.6%)     64 (100.0%)
Total               95 (61.3%)    58 (37.4%)     2 (1.3%)    155 (100.0%)

70.8% of Civil Engineering learners, 46.2% of Somatology learners, and 57.8% of

Radiography learners felt that they always learned a lot from their lectures. 27.7% of

Civil Engineering learners, 53.8% of learners in Somatology, and 40.6% of Radiography

learners indicated that they sometimes learned from their lectures, while 1.5% of Civil

Engineering and 1.6% of Radiography learners said they never learned from their

lectures. None of the Somatology learners indicated that they never learned from their

lectures. Overall, 61.3% of learners felt that they always learned a lot from their lectures,

37.4% said they sometimes learned a lot, and only 1.3% indicated that they never learned

from their lectures.

Most of the Civil Engineering learners felt that they always learned a lot from their

lectures. There was, generally, a negative response from Somatology learners, with most

of them saying that they did not always learn a lot from their lectures.

4.3.1.7 Learning from feedback

Question: I learn a lot from feedback provided by my educator (always/sometimes/never)


Table 4.8 illustrates the learners’ responses to the question about whether they learned

from feedback provided by their educators.

Table 4.8 Learning from feedback

I learn a lot from feedback provided by my educator
(count, with % within course in brackets)

Course              Always        Sometimes     Never        Total
Civil Engineering   43 (66.2%)    21 (32.3%)     1 (1.5%)     65 (100.0%)
Somatology          10 (38.5%)    15 (57.7%)     1 (3.8%)     26 (100.0%)
Radiography         33 (51.6%)    27 (42.2%)     4 (6.3%)     64 (100.0%)
Total               86 (55.5%)    63 (40.6%)     6 (3.9%)    155 (100.0%)

66.2% of Civil Engineering learners, 38.5% of Somatology learners, and 51.6% of

Radiography learners indicated that they always learned from their educator’s

feedback. 32.3% of Civil

Engineering learners, 57.7% of those in Somatology, and 42.2% of Radiography learners

felt that they sometimes learned from their educator’s feedback. Only 1.5% of Civil

Engineering learners, 3.8% of Somatology learners and 6.3% of Radiography learners said

they never learned anything from the feedback. A total of 55.5% of learners felt they

always learned from feedback, while 40.6% said they sometimes learned from feedback,

and only 3.9% indicated that they never learned from their educator’s feedback.

Again, Civil Engineering learners responded more positively than the other two groups.

Somatology learners, on the whole, responded negatively since most of the learners in

this group felt that they did not always learn from their educator’s feedback.


4.3.1.8 Understanding the importance of assessment

Question: I clearly understand the importance of my educator's assessment (always/sometimes/never)

Table 4.9 illustrates the learners’ responses with regard to whether they clearly

understood the importance of their educator’s assessment.

Table 4.9 Understanding the importance of assessment

I clearly understand the importance of my educator's assessment
(count, with % within course in brackets)

Course              Always         Sometimes     Never        Total
Civil Engineering    48 (73.8%)    16 (24.6%)     1 (1.5%)     65 (100.0%)
Somatology           13 (50.0%)    13 (50.0%)     0 (0.0%)     26 (100.0%)
Radiography          39 (60.9%)    21 (32.8%)     4 (6.3%)     64 (100.0%)
Total               100 (64.5%)    50 (32.3%)     5 (3.2%)    155 (100.0%)

73.8% of learners in Civil Engineering, 50% of Somatology learners, and 60.9% of

Radiography learners said that they always understood the importance of assessment.

24.6% of Civil Engineering learners, 50% of learners in Somatology, and 32.8% of

Radiography learners indicated that they sometimes understood the importance of their

educators’ assessment. 1.5% of Civil Engineering learners and 6.3% of

Radiography learners said that they never understood the importance of their educators’

assessments. In contrast, none of the Somatology learners said they never understood the

importance of assessment. On the whole, 64.5% of learners indicated that they always


understood the importance of their educators’ assessments, 32.3% said this was

sometimes the case, and 3.2% felt that they never understood the importance of

assessment.

Civil Engineering learners, once again, responded more positively than the other two

groups, with over 70% of the learners saying that they always clearly understood the

importance of their educator’s assessment. Somatology learners, again, responded

relatively negatively; half of these learners indicated that they did not always clearly

understand the importance of their educator’s assessment.

4.3.1.9 Learning through CASS

Question: I enjoy learning through continuous assessment (always/sometimes/never)

Table 4.10 shows whether learners enjoyed learning through CASS.

Table 4.10 Learning through CASS

I enjoy learning through continuous assessment

Course              Always        Sometimes    Never      Total
Civil Engineering   56 (86.2%)    7 (10.8%)    2 (3.1%)   65 (100.0%)
Somatology          19 (73.1%)    5 (19.2%)    2 (7.7%)   26 (100.0%)
Radiography         49 (76.6%)    14 (21.9%)   1 (1.6%)   64 (100.0%)
Total               124 (80.0%)   26 (16.8%)   5 (3.2%)   155 (100.0%)

Of the learners surveyed, 86.2% of Civil Engineering learners, 73.1% of those in

Somatology, and 76.6% of Radiography learners said that they always enjoyed learning

through CASS. 10.8% of Civil Engineering learners, 19.2% of Somatology learners, and


21.9% of Radiography learners indicated that they sometimes enjoyed learning through

CASS, while 3.1% of learners in Civil Engineering, 7.7% of those in Somatology, and

1.6% of Radiography learners stated that they never enjoyed learning through CASS.

The response from all the three groups was mainly positive. A total of 80% of learners

said that they always enjoyed learning through CASS, 16.8% indicated that this was

sometimes the case, and only 3.2% felt that they never enjoyed learning through CASS.

Civil Engineering learners, once more, responded somewhat more positively than the

other two groups.

4.3.1.10 Repetition of difficult aspects of lectures by educators

Question: The educator repeats aspects of the lecture that learners did not understand

(always/sometimes/never)

Table 4.11 illustrates learners’ responses to the question about whether their educator

repeated aspects of the lecture that they did not understand.

Table 4.11 Repetition of difficult aspects of lectures

The educator repeats aspects of the lecture that learners did not understand

Course              Always        Sometimes    Never      Total
Civil Engineering   49 (75.4%)    16 (24.6%)   0 (0%)     65 (100.0%)
Somatology          15 (57.7%)    10 (38.5%)   1 (3.8%)   26 (100.0%)
Radiography         43 (67.2%)    16 (25.0%)   5 (7.8%)   64 (100.0%)
Total               107 (69.0%)   42 (27.1%)   6 (3.9%)   155 (100.0%)


75.4% of Civil Engineering learners, 57.7% of those in Somatology, and 67.2% of

Radiography learners said their educator always repeated aspects of the lecture that they

did not understand. However, 24.6% of Civil Engineering learners, 38.5% of Somatology

learners, and 25% of those studying Radiography felt that educators sometimes repeated

those aspects that were not understood, while 3.8% of Somatology and 7.8% of

Radiography learners indicated that their educator never repeated aspects that were

difficult for learners to understand. None of the Civil Engineering learners stated that their educator never repeated the difficult sections.

On the whole, there was a positive response from all the groups in this section, with an

average of 69% across all courses indicating that the educator always repeated aspects of

the lecture that were not understood. Civil Engineering learners were, again, more

positive than the other two groups.

4.3.1.11 Assisting learners who experience difficulty

Question: The educator helps learners who experience difficulties/problems in the subject

(always/sometimes/never)

Table 4.12 illustrates learners’ responses to the question about whether the educator helped learners who experienced difficulties in the subject.

Table 4.12 Assisting learners who experience difficulty

The educator helps learners who experience difficulties/problems in the subject

Course              Always        Sometimes    Never      Total
Civil Engineering   45 (69.2%)    20 (30.8%)   0 (0%)     65 (100.0%)
Somatology          16 (61.5%)    8 (30.8%)    2 (7.7%)   26 (100.0%)
Radiography         34 (53.1%)    26 (40.6%)   4 (6.3%)   64 (100.0%)
Total               95 (61.3%)    54 (34.8%)   6 (3.9%)   155 (100.0%)

69.2% of Civil Engineering learners, 61.5% of those in Somatology, and 53.1% of

Radiography learners said that their educator always assisted struggling learners. 30.8%

of Civil Engineering, 30.8% of Somatology, and 40.6% of Radiography learners

indicated that this was sometimes the case. However, 7.7% of Somatology learners and

6.3% of those studying Radiography felt that their educator never helped learners who

experienced difficulties in the subject. Overall, 61.3% of learners said their educators

always assisted the learners, 34.8% indicated that their educators sometimes did, and only

3.9% said their educator never helped the learners.

The learners’ response to the question was, in the main, positive. There were, however,

noticeable differences between the groups; Civil Engineering had the highest percentage

(69%) of learners who said the educator always helped learners who experienced

difficulties in the subject, whereas Radiography had the lowest percentage (53%) of

learners who said so.

4.3.1.12 The pace of lectures

Question: The pace of lectures is appropriate to my needs (always/sometimes/never)

Table 4.13 is an illustration of the learners’ responses to the question as to whether the

pace of the lectures was appropriate to their needs.


Table 4.13 The pace of lectures

The pace of lectures is appropriate to my needs

Course              Always        Sometimes    Never      Total
Civil Engineering   45 (69.2%)    19 (29.2%)   1 (1.5%)   65 (100.0%)
Somatology          14 (53.8%)    8 (30.8%)    4 (15.4%)  26 (100.0%)
Radiography         38 (59.4%)    22 (34.4%)   4 (6.3%)   64 (100.0%)
Total               97 (62.6%)    49 (31.6%)   9 (5.8%)   155 (100.0%)

69.2% of learners in Civil Engineering, 53.8% of those in Somatology, and 59.4% of

Radiography learners indicated that the pace was always appropriate to their needs.

29.2% of Civil Engineering, 30.8% of Somatology, and 34.4% of Radiography learners

said the pace of lectures was sometimes appropriate to their needs. Of the learners who

felt that the pace was never appropriate to their needs, 1.5% were in Civil Engineering,

15.4% were in Somatology, and 6.3% were in Radiography. On the whole, 62.6% of

learners said the pace was always appropriate to their needs, 31.6% felt that this was

sometimes the case, and 5.8% indicated that the pace was never appropriate to their

needs.

The response from the three groups was generally positive, and Civil Engineering had the

highest percentage of learners who felt that the pace of lectures was appropriate to their

needs. The highest percentage of learners who felt that the pacing of lectures was never

appropriate to their needs was in Somatology; Civil Engineering had the lowest

percentage of learners who stated this.


4.3.1.13 Provision of outcomes in learner guides

Question: Learner guides provide clear outcomes for each learning unit

(always/sometimes/never)

Table 4.14 illustrates learners’ responses to the question about whether learner guides

provided clear outcomes for each learning unit.

Table 4.14 Outcomes provided in learner guides

Learner guides provide clear outcomes for each learning unit

Course              Always        Sometimes    Never      Total
Civil Engineering   44 (67.7%)    20 (30.8%)   1 (1.5%)   65 (100.0%)
Somatology          12 (46.2%)    11 (42.3%)   3 (11.5%)  26 (100.0%)
Radiography         45 (70.3%)    17 (26.6%)   2 (3.1%)   64 (100.0%)
Total               101 (65.2%)   48 (31.0%)   6 (3.9%)   155 (100.0%)

Of the learners surveyed, 67.7% of Civil Engineering learners, 46.2% of Somatology

learners, and 70.3% of Radiography learners indicated that the learner guides always

provided clear outcomes for each learning unit. However, 30.8% of Civil Engineering

learners, 42.3% of those in Somatology, and 26.6% of Radiography learners felt that the

learner guides did not always provide clear outcomes for each learning unit, while 1.5%

of Civil Engineering learners, 11.5% of Somatology learners, and 3.1% of Radiography

learners said that clear outcomes were never provided. Overall, 65.2% of learners


indicated that the learner guides always provided clear outcomes, 31% said that clear

outcomes were sometimes provided, and only 3.9% stated that learner guides never

provided clear outcomes.

Thus, the three groups responded differently to the question about whether learner guides provided clear outcomes for each learning unit. Radiography learners were the most positive, Civil Engineering learners were only slightly less positive, and Somatology learners were the least positive.

4.3.1.14 Provision of assessment methods in learner guides

Question: Learner guides clearly specify the assessment methods

(always/sometimes/never)

Table 4.15 illustrates the learners’ responses to the question about whether learner guides clearly specified assessment methods.

Table 4.15 Assessment methods provided in learner guides

Learner guides clearly specify the assessment methods

Course              Always        Sometimes    Never      Total
Civil Engineering   39 (60.0%)    26 (40.0%)   0 (0%)     65 (100.0%)
Somatology          15 (57.7%)    7 (26.9%)    4 (15.4%)  26 (100.0%)
Radiography         51 (79.7%)    9 (14.1%)    4 (6.3%)   64 (100.0%)
Total               105 (67.7%)   42 (27.1%)   8 (5.2%)   155 (100.0%)


When asked whether learner guides clearly specified assessment methods, 60% of Civil

Engineering learners, 57.7% of Somatology learners, and 79.7% of Radiography learners

stated that this was always the case. 40% of learners in Civil Engineering, 26.9% of those in

Somatology, and 14.1% of Radiography learners indicated that assessment methods were

sometimes clearly specified. None of the Civil Engineering learners, 15.4% of

Somatology learners, and 6.3% of those in Radiography felt that the learner guides never

clearly specified the assessment methods. Overall, 67.7% of learners said the learner

guides always clearly specified the assessment methods, 27.1% felt that this was

sometimes the case, and 5.2% indicated that learner guides never clearly specified

assessment methods.

Again, Radiography learners were the most positive about their learner guides clearly specifying assessment methods, while Somatology learners were the least positive about theirs.

4.3.1.15 Availability of educators during consultation times

Question: The educator is available during consultation times (always/sometimes/never)

Table 4.16 illustrates the learners’ responses to the question about how often the educator

was available during consultation times.

Table 4.16 Availability of educators during consultation times

The educator is available during consultation times

Course              Always        Sometimes    Never      Total
Civil Engineering   34 (52.3%)    31 (47.7%)   0 (0%)     65 (100.0%)
Somatology          11 (42.3%)    13 (50.0%)   2 (7.7%)   26 (100.0%)
Radiography         33 (52.4%)    28 (44.4%)   2 (3.2%)   63 (100.0%)
Total               78 (50.6%)    72 (46.8%)   4 (2.6%)   154 (100.0%)

52.3% of Civil Engineering learners, 42.3% of Somatology learners, and 52.4% of

Radiography learners indicated that their educator was always available during

consultation times. 47.7% of learners in Civil Engineering, 50% of those in Somatology,

and 44.4% of Radiography learners said the educator was sometimes available. While

none of the Civil Engineering learners said their educator was never available for

consultation, this was the case for 7.7% of Somatology and 3.2% of Radiography

learners. On average, 50.6% of learners indicated that their educator was always available

during consultation, 46.8% said the educator was sometimes available, and 2.6% felt the

educator was never available for consultations.

On the whole, the learners’ responses with regard to their educators’ availability during consultation times were not particularly positive, as the highest percentage of learners who said their educator was always available for consultations was only 52%. In fact, the majority

of the Somatology learners indicated that their educator was not always available during

consultation times.

4.3.1.16 Assessment given by educators

Question: The assessment given by your educator is (balanced/not balanced)

Table 4.17 is an illustration of learners’ responses about whether or not the assessment

given by their educator was balanced.


Table 4.17 Assessment given by educators

The assessment given by the educator is...

Course              Balanced      Not balanced   Total
Civil Engineering   60 (92.3%)    5 (7.7%)       65 (100.0%)
Somatology          20 (74.1%)    7 (25.9%)      27 (100.0%)
Radiography         58 (90.6%)    6 (9.4%)       64 (100.0%)
Total               138 (88.5%)   18 (11.5%)     156 (100.0%)

Across all courses, the majority of learners felt that assessment was balanced. 92.3% of

Civil Engineering learners, 74.1% of Somatology learners, and 90.6% of Radiography

learners felt that the assessment was balanced. 7.7% of learners in Civil Engineering,

25.9% of those in Somatology, and 9.4% of Radiography learners said the assessment

was not balanced. A total of 88.5% of learners indicated that the assessment was

balanced, and only 11.5% felt that it was not balanced.

The learners’ response was mostly positive; in fact, the response from Civil Engineering

and Radiography learners was overwhelmingly positive. Although Somatology learners

generally responded positively, their response was less positive compared to that of Civil

Engineering and Radiography learners.

4.3.1.17 Learning through CASS

Question: Learning through CASS is (challenging/either too easy or too difficult)

Table 4.18 illustrates how learners felt about learning through CASS.


Table 4.18 Learning through CASS

Learning through CASS is...

Course              Challenging   Either too easy or too difficult   Total
Civil Engineering   33 (50.8%)    32 (49.2%)                         65 (100.0%)
Somatology          16 (59.3%)    11 (40.7%)                         27 (100.0%)
Radiography         44 (68.8%)    20 (31.3%)                         64 (100.0%)
Total               93 (59.6%)    63 (40.4%)                         156 (100.0%)

50.8% of Civil Engineering learners, 59.3% of Somatology learners, and 68.8% of

Radiography learners felt that learning through CASS was challenging. 49.2% of Civil

Engineering, 40.7% of Somatology, and 31.3% of Radiography learners said it was either

too easy or too difficult. Overall, 59.6% of learners indicated that learning through CASS

was challenging, while 40.4% said it was either too difficult or too easy.

Generally, the learners’ feelings about learning through CASS were positive, but the

responses varied somewhat from group to group. Most of the Radiography learners

responded positively, since over 68% of them found learning through CASS challenging.

About half of the Civil Engineering learners indicated that learning through CASS was

either too easy or too difficult.

4.3.1.18 Balance between teaching and assessment

Question: The educator’s teaching and assessment were (balanced/not balanced)

Table 4.19 reflects learners’ perceptions of whether there was a balance between teaching

and assessment.


Table 4.19 Balance between teaching and assessment

Balance between teaching and assessment?

Course              Balanced      Not balanced   Total
Civil Engineering   64 (98.5%)    1 (1.5%)       65 (100.0%)
Somatology          16 (59.3%)    11 (40.7%)     27 (100.0%)
Radiography         57 (89.1%)    7 (10.9%)      64 (100.0%)
Total               137 (87.8%)   19 (12.2%)     156 (100.0%)

Of the learners surveyed, 98.5% of those in Civil Engineering, 59.3% of those in

Somatology, and 89.1% of Radiography learners thought that there was a balance

between teaching and assessment. On the other hand, 40.7% of Somatology learners,

10.9% of Radiography learners, but only 1.5% of Civil Engineering learners felt that

there was no balance between teaching and assessment. Overall, 87.8% of learners

indicated that teaching and assessment were balanced, whereas 12.2% said they were not

balanced.

There was an overwhelmingly positive response from both Civil Engineering and

Radiography learners regarding the balance between teaching and assessment. All but

one learner doing Civil Engineering said there was a balance between teaching and

assessment. The response from Somatology learners was, however, noticeably different: although generally positive, it was roughly 30 percentage points lower than that of the other two diploma groups.

4.3.1.19 Visiting educators during consultation times

Question: I consulted with my educator (4 or less/5 or more) times during the year


Table 4.20 illustrates the number of times learners consulted with their educator during

the year.

Table 4.20 Visiting educators during consultation times

Number of times I consulted with my educator during the year

Course              4 times or less   5 times or more   Total
Civil Engineering   40 (61.5%)        25 (38.5%)        65 (100.0%)
Somatology          19 (70.4%)        8 (29.6%)         27 (100.0%)
Radiography         34 (53.1%)        30 (46.9%)        64 (100.0%)
Total               93 (59.6%)        63 (40.4%)        156 (100.0%)

61.5% of Civil Engineering learners, 70.4% of those doing Somatology, and 53.1% of

Radiography learners indicated that they had visited their educator 4 times or less during

the year. On the other hand, 38.5% of Civil Engineering learners, 29.6% of Somatology

learners, and 46.9% of the learners in Radiography said they had consulted 5 times or

more. In total, 59.6% of the learners had visited their educator 4 times or less, whereas

40.4% had consulted 5 times or more.

Learners in Somatology had, on average, visited their educator the least. Radiography

learners had consulted with their educator slightly more frequently than the other groups.


4.3.1.20 Visiting the library

Question: I visited the library (4 or less/5 or more) times during the year

Table 4.21 illustrates the number of times that learners visited the library during the year.

Table 4.21 Visiting the library

Number of times I visited the library during the year

Course              4 times or less   5 times or more   Total
Civil Engineering   16 (24.6%)        49 (75.4%)        65 (100.0%)
Somatology          7 (25.9%)         20 (74.1%)        27 (100.0%)
Radiography         23 (35.9%)        41 (64.1%)        64 (100.0%)
Total               46 (29.5%)        110 (70.5%)       156 (100.0%)

24.6% of Civil Engineering learners, 25.9% of Somatology learners, and 35.9% of Radiography learners reported visiting the library four times or less during the year.

75.4% of Civil Engineering learners, 74.1% of Somatology learners, and 64.1% of

Radiography learners reported having visited the library 5 times or more during the year.

On average, 29.5% of learners had visited the library 4 times or less during the year,

whereas 70.5% had visited 5 times or more.


4.3.1.21 Number of group discussions held during the year

Question: I had group discussions with other learners (4 or less/5 or more) times during

the year

Table 4.22 illustrates the number of group discussions that learners held during the year.

Table 4.22 Number of group discussions held during the year

Number of group discussions during the year

Course              4 times or less   5 times or more   Total
Civil Engineering   22 (33.8%)        43 (66.2%)        65 (100.0%)
Somatology          10 (37.0%)        17 (63.0%)        27 (100.0%)
Radiography         25 (39.1%)        39 (60.9%)        64 (100.0%)
Total               57 (36.5%)        99 (63.5%)        156 (100.0%)

33.8 % of Civil Engineering learners, 37% of Somatology learners, and 39.1% of

Radiography learners indicated that they had held group discussions 4 times or less

during the year. On the other hand, 66.2% of learners in Civil Engineering, 63% of those

in Somatology, and 60.9% of Radiography learners said they had held such discussions 5

times or more during the year. A total of 36.5% of learners had had 4 or less group

discussions during the year, whereas 63.5% had held 5 or more discussions.

4.4 CONCLUSION

This chapter has focused on analysing data received from interviews with educators and

from statistical analysis of questionnaires. It is evident from both the qualitative and


quantitative results that learners and educators at the University of Johannesburg

experienced CASS in the subject Communication Skills differently. The lack of CASS

training for educators is likely to have resulted in uncertainties about implementation.

Even though educators were not adequately trained in the implementation of CASS, they still had to proceed with it. There is little doubt that this lack of clarity and training

was one of the main factors that led to the differences in the implementation of CASS at

the University of Johannesburg. The next chapter looks at conclusions and

recommendations based on the findings in this chapter.


CHAPTER 5 - CONCLUSIONS AND RECOMMENDATIONS

5.1 INTRODUCTION

Chapter 4 focused mainly on the results of both qualitative and quantitative data. In this

chapter, conclusions will be drawn from the results of both qualitative and quantitative

data.

In chapter 1, the following research questions were posed:

How do educators experience CASS in the subject Communication Skills?

How do learners experience CASS in the subject Communication Skills?

This chapter mainly focuses on conclusions that are based on the findings reported in

chapter 4. These conclusions will provide answers to the above-mentioned research

questions. After discussing the conclusions, the researcher will also provide

recommendations.

5.2 RESEARCH QUESTIONS

In this section, the researcher will attempt to provide answers to the two research questions mentioned in the introduction above. As mentioned in chapter 3, after

examining the data from the respondents, the researcher categorised and coded pieces of

data and grouped them into themes. The interpreted themes were then organised into

conclusions and recommendations.


Question 1: How do educators experience CASS in the subject Communication Skills?

5.2.1 CASS training for educators

• Conclusions

It is evident that the university did not provide the educators with sufficient training in

CASS. Some of the Communication Skills educators who were involved in the

implementation of CASS had not received any CASS training at all; and those who had

received it had only attended one or two introductory workshops, which were clearly

inadequate.

As a result of the lack of CASS training, the educators were not adequately prepared to

implement CASS properly.

• Recommendations

It is recommended that the university provide proper CASS training to all educators,

particularly those who are involved in the implementation of CASS. One way of doing

this is to arrange with the Faculty of Education at UJ to provide CASS training in the

form of workshops. Another alternative is either to send educators to accredited training

providers or to arrange with the training providers to provide training at the university.

It is important for the University of Johannesburg to ensure that CASS training becomes

compulsory for all educators at the university. This is essential because, although not all

educators at the university are currently involved in the implementation of CASS, at least

some of those not presently involved will be involved in future, as some of the

departments at the university are slowly shifting towards CASS.

It is essential for the educators to feel confident that they are adequately prepared to

implement CASS properly, so that they can facilitate effective teaching and learning.


5.2.2 Problems encountered in the implementation of CASS

• Conclusions

All the respondents experienced problems with the number of learners in their CASS

classes. As a result of the large numbers, educators found it difficult to give individual

attention to learners.

The large numbers of learners made group work activities difficult to control because the

educators either had classes with too many groups or classes consisting of fewer groups

but with too many learners in each group (see 4.2.3).

• Recommendations

The continuous assessment policy released by the University of Johannesburg in 2003 does not stipulate the envisaged minimum or maximum number of learners in a CASS class, nor does it provide any guidelines for class sizes. It is, however, recommended that

class numbers be limited to not more than 40 learners per class so that:

Classes are easier to control.

Learners are given the necessary individual attention.

Group work activities are manageable.

It is essential for class sizes to be manageable in order to ensure that CASS is

implemented properly.


5.2.3 What the educators expected of learners in CASS

• Conclusions

Learners did not seem to clearly understand what was expected of them in CASS. Even

though they knew that they were not going to write final exams, they did not quite

understand the roles they needed to play when learning through CASS. This was

probably due to the educators’ own lack of clarity and expertise in CASS.

The Civil Engineering Department, in particular, seemed to support the learners by

providing them with essential information at the beginning of the academic year and

explaining what the learners were expected to do (see 4.7).

• Recommendations

It is recommended that the educators clearly specify the following at the beginning of

each academic year or semester:

The layout of the course/subject.

All the formative and summative assessments.

The learner’s roles and responsibilities.

It is important for the above-mentioned information to be clearly stipulated in the learner

guides so that learners can constantly refer to it for more clarity and reminders.

Learners also need to receive all the necessary support from their respective departments

with regard to their departments’ expectations of them in learning through CASS. This

will help Communication Skills educators to ensure that learners are completely clear

about the roles they need to play in CASS.


5.2.4 Provision of timeous feedback

• Conclusion

It was difficult for educators to provide timeous feedback to learners because of the large

numbers of learners in their classes. Feedback was therefore general in nature rather than targeted at individual learners.

• Recommendations

As has already been mentioned in 5.2.2, class numbers should be limited to not more than

40 learners per class, so that educators can provide timeous and specific feedback. Proper

training in CASS will also ensure that educators are better prepared to handle larger

numbers of learners.

5.2.5 Group work activities

• Conclusions

Even though most of the learners were positive about group work activities, those who were less positive require interventions to improve their enthusiasm. Educators felt that the large number of learners per class had a

negative impact on group work activities, since groups were difficult to control.

• Recommendations

Academically stronger learners could be made to play more active roles during group

work activities, by distributing them among various groups and making them group

leaders. Weaker learners are also likely to benefit from the presence of their academically

stronger peers.


Question 2: How do learners experience CASS in the subject Communication Skills?

5.2.6 Clear definition of outcomes

• Conclusions

It is evident from the data analysis that there is a problem with regard to the clarification

of outcomes in Somatology. This could either be due to their educator not always

defining the outcomes for a particular section of work or the outcomes themselves not

being clearly defined. Even though about 60% of Civil Engineering and Radiography

learners said the outcomes were always clearly defined, this figure is also a cause for

concern since it implies that about 40% felt that the outcomes were not always clear. The

different responses by different groups indicate that different Communication Skills

educators state the outcomes differently. The outcomes, as Kramer (1999: 25) asserts,

must be worded in clear, measurable and observable language; this was not always the

case in all the three diploma groups.

It is essential for educators to realise that, in outcomes-based education, assessment and

outcomes are interrelated. According to Jacobs, Gawe and Vakalisa (2000:31),

assessment of learners revolves around outcomes because outcomes describe the goals

that the learners are supposed to achieve.

• Recommendations

It is clear that all the Communication Skills educators need to understand adequately how

to state the outcomes clearly and unambiguously. It is, therefore, recommended that

specific workshops on the clarification of outcomes be organised for the educators.

After receiving the necessary training in the clarification of outcomes, educators need to

clearly state outcomes at the beginning of each lecture. Since outcomes are central to


assessment in OBE, the clarification of outcomes will, to a certain extent, ensure

clarification of assessments.

5.2.7 Different assessment methods

• Conclusions

All the respondents seem to have used different assessment methods, even though the

16% difference between Somatology learners and the other two groups shows that the

Communication Skills educator involved with the Somatology learners needs to vary the

assessments a little more.

• Recommendations

In addition to stating outcomes for each lecture, educators also need to specify the

assessment tools to be used. Learners do not only need to know the outcomes, but they

also have to know how they are expected to demonstrate the achievement of these

outcomes. Different outcomes require different assessment methods. It is, therefore,

essential for educators to ensure that they utilise appropriate assessment methods that are

designed specifically to assess each outcome.

The Faculty of Education at the University of Johannesburg could be requested to

conduct workshops on OBE compliant assessment. These workshops must, inter alia,

expose educators to various assessment methods, as well as to how and when to utilise

these assessment methods.


5.2.8 Active participation from learners

• Conclusions

Educators appear to have encouraged learners to participate actively during lectures. It is,

however, somewhat worrying that there was a 15% difference between Somatology and

the other two groups. According to Jacobs et al. (2000:2), the success of the teaching-

learning activity depends on the educator’s ingenuity (or lack of it) in creating a

classroom climate that is conducive to active participative learning by the learner.

• Recommendations

It would be advisable for Communication Skills educators, particularly those involved in

the teaching of Somatology learners, to be encouraged to foster active participation from

learners.

Jacobs et al. (2000:4) assert that participative learning requires that the class be designed

and managed in a manner that encourages learners to express their own views on the

content without fear of intimidation from either the educator or their peers. They do not

agree with the notion that there is an answer to every question.

Active participation by learners, according to Jacobs et al. (2000:4), occurs when:

• Each individual learner is given an opportunity to express what he or she understands of the learning content presented to him or her.

• Expression of one's views does not meet with destructive criticism from the educator or peers.

• The notion that for every question there exists a single 'correct' answer is discarded and, instead, uninhibited exploration of all possibilities with regard to learning content is promoted.

• Learning by enquiry balances reception learning; reception learning occurs when the educator (or textbook) is the main source of information.

5.2.9 Involvement of learners in assessment

• Conclusions

It is evident that there was a lack of learner involvement in assessment, with only about

half of Civil Engineering and Radiography learners stating that their educators always

involved them in assessment. Somatology learners seem to be considerably less involved

in assessment, since only approximately 30% of them felt that they were always involved in

assessment.

According to Kramer (1999:42), in order to ensure that learners are actively involved in

their learning, they must also be involved in assessment so that they can gauge the

progress they are making.

• Recommendations

Educators should be encouraged by their departments to involve learners in assessment.

According to Le Grange and Reddy (1998:19), assessment in an outcomes-based system

is more overt than traditional assessment practices and involves more than one assessor;

it includes educator assessment, self assessment and peer assessment (discussed in

chapter 2). Learners can be involved in assessment through both self assessment and peer

assessment. Again, educators from the Faculty of Education at UJ could be requested to

conduct workshops on the effective involvement of learners in assessment.


5.2.10 Provision of feedback

• Conclusion

Generally, learners felt that feedback on tests and assignments was not provided within a

week. Educators, on the other hand, also felt that they could not provide timeous feedback

due to the large number of learners in their classes.

• Recommendations

In order for educators to provide regular assessment and timeous feedback, it is

recommended that educators be given manageable classes of not more than 40 learners

each; otherwise assessments will be few and far between, defeating the very objective of

continuous assessment. The Communication Skills Department should investigate

employing senior learners to help with the marking of tests and assignments, in order to

ensure that learners receive timeous feedback.

5.2.11 Learning from feedback provided by educators

• Conclusions

Except for Civil Engineering learners, learners did not particularly feel that they always

learned effectively from the feedback provided. This was not unexpected because it is

consistent with the fact that their educators could not provide timeous feedback due to the

large number of learners in each class. It also appears that feedback was more general in

nature, because large numbers of learners did not allow educators enough time to give

individual attention to all learners who needed it.


• Recommendations

In addition to the reduction of numbers of learners per class, which may not always be

possible, the Communication Skills Department needs to ensure that Communication

Skills educators are adequately trained in the provision of meaningful feedback to

learners. Educators should also

be trained in specific teaching and assessment strategies, which will help them to cope

better with larger classes.

5.2.12 Assisting learners who experienced difficulty in the subject

• Conclusions

Even though the learners generally felt that they always received the necessary assistance

from the educators, a considerable number of learners felt otherwise.

• Recommendations

Educators need to be encouraged to exercise patience with their learners in order to

ensure that their learners receive the necessary guidance. It should also be impressed on

the educators that, in CASS, learners need to achieve the outcomes of each section before

moving on to the next section.

5.2.13 Clear outcomes provided in learner guides

• Conclusions

It is clear that some learner guides did not provide clear outcomes for each learning unit.

Clear outcomes enable learners to gauge their own progress and know exactly what they

are expected to do.


• Recommendation

Departments need to organise workshops on writing learner guides that are OBE

compliant, to ensure that educators know how to word their outcomes properly.

According to Jacobs et al. (2000:31), without written outcomes, educators will be unable

to establish whether the outcomes have been achieved and, therefore, assessment will be

unreliable.

5.2.14 Availability of educators during consultation times

• Conclusions

It appears that educators are not always available during consultation times. Consultation

times are vital to learners’ learning and success, because it is during these times that

learners feel free to open up to their educators about any problem they encounter. It is

much easier for learners to express themselves during consultation times because the

environment is less threatening than in a classroom with other learners.

• Recommendations

Departments need to ensure that educators clearly specify their consultation times and

make these known to learners. They should also ensure that the educators are available

during consultation times. Educators must constantly encourage learners to use the

consultation times.

5.2.15 Balance between teaching and assessment

• Conclusions

Even though there generally seemed to be a balance between teaching and

assessment, Somatology learners did not feel that the two were particularly balanced. It


stands to reason that, if an educator failed to provide timeous feedback, the assessment

provided to learners would not be sufficient.

• Recommendations

In addition to exposing educators to various teaching and assessment methods, the

workshops need to teach educators how they can use assessment as a teaching-learning

tool so that they view assessment as an integral part of the teaching-learning process.

This will enable educators to achieve a balance between teaching and assessment.

5.2.16 Learners’ enthusiasm and motivation

• Conclusions

The lack of motivation, particularly among Somatology learners, could have been the

result of unclear outcomes. It is evident from Somatology learners’ responses that they

found that the outcomes were not clearly stated by their Communication Skills educator.

• Recommendations

A clear definition of outcomes plays an essential role in the motivation of learners.

According to Jacobs et al. (2000:30), outcomes arouse in learners a desire to achieve the

purposes contained in the outcomes. Muthukrishna (in Jacobs et al., 2000:30) states that

even the most demotivated learners can be transformed into eager learners if an

enthusiastic educator continuously reminds them of the intended outcomes, and allows

them to experience growing confidence and status as their own competence increases.


5.3 SUGGESTIONS FOR FURTHER RESEARCH

This research focused mainly on the practice of CASS in the subject Communication

Skills, and on the experiences of both educators and learners, at the Doornfontein campus of the

University of Johannesburg.

The study attempts to provoke further debate on the practice of continuous assessment,

particularly in higher education. Since this study only focused on the subject

Communication Skills at one campus (Doornfontein) of UJ, further research is invited on

CASS practices in other subjects, and at other institutions of higher learning.

5.4 FINAL CONCLUSION

It is evident from the results of both qualitative and quantitative data that Communication

Skills educators at UJ need adequate training in CASS. This training will ensure that the

educators are able to:

• Clearly define outcomes for each lecture.

• Use different assessment methods.

• Involve learners in assessment.

• Encourage learners to participate actively during lectures.

• Provide more meaningful feedback to learners.

• Clearly specify all assessments to be used.

• Write OBE compliant learner guides that clearly specify outcomes and assessments.

According to the Report on Continuous Assessment Practices at the University of

Johannesburg (2004), released by the Academic Development Unit (ADU) at UJ:

• The CASS policy approved by the university's Senate in May 2003 had some flaws that required urgent review.

• Most of the educators practising CASS had not even seen the CASS policy that was approved by the UJ Senate on 21 May 2003, and neither had they attended any CASS training. The lack of familiarity with the CASS policy and the lack of training have serious implications for the quality of CASS at UJ.

• As a result of the lack of familiarity with the CASS policy and lack of training, there was very little consistency in CASS practices at UJ. Glaring inconsistencies in application were evident in the programmes (departments) used for the review.

• There was also insufficient feedback to learners on marked/graded assessments. In most instances, work was not returned to learners for correction of their mistakes, and neither were they given sufficient feedback to guide them with regard to subsequent assessments. The absence of adequate feedback to learners on graded assessments is also a serious contradiction of the purposes of CASS.

• There were also considerable differences in the way that learner guides were written. In some, mutual expectations of educators and learners were stated explicitly, whereas others were vague about expectations and requirements.

Even though the report found that the majority of educators who were already

implementing CASS were doing a remarkable job under circumstances that were not

conducive to promoting their efforts, it is evident that there is a lack of CASS training for

educators, not only those teaching Communication Skills, but also those in various

departments at UJ. If the situation remains as it is, it stands to reason that the

implementation of CASS will continue to be divergent and inconsistent. As a result, the

quality of teaching and learning will be compromised.

It is imperative that the management of UJ realises the seriousness of the situation and

organises CASS training sessions and workshops for educators, particularly those

practising CASS, in all departments at the university. Le Grange and Reddy (1998:35)

advise that, in order for CASS to be implemented successfully, it needs to be well


planned, with in-service education and training programmes to support educators. The

CASS policy also needs to be reviewed and improved urgently, and the university

management needs to ensure that CASS at UJ is practised uniformly. Departments also

need to ensure that educators are familiar with the university’s CASS policy by making it

available to educators and even discussing it during departmental meetings.

Both adequate CASS training and familiarity with the university’s CASS policy will

ensure that educators at UJ, including Communication Skills educators, implement CASS

effectively, which will benefit learners immensely.


BIBLIOGRAPHY

Anderson, G. & Arsenault, N. 1998. Fundamentals of Educational Research. 2nd edition.

Hong Kong: RoutledgeFalmer.

Bailey, K.D. 1994. Methods of Social Research. 4th edition. New York: The Free Press.

Brooks, V. 2002. Assessment in secondary schools: The new educator’s guide to

monitoring, assessment, recording, reporting and accountability. Buckingham: Open

University Press.

Charles, C.M. & Mertler, C.A. 2002. Introduction to Educational Research. 4th edition.

Boston: Allyn and Bacon.

Cohen, L., Manion, L. & Morrison, K. 2000. Research Methods in Education. London:

RoutledgeFalmer.

Creswell, J.W. 1994. Research Design: Qualitative and Quantitative Approaches.

London: Sage Publications.

Fraenkel, J.R. & Wallen, N.E. 1996. How to design and evaluate research in education.

New York: McGraw-Hill.

Gay, L.R. & Airasian, P. 2000. Educational Research: Competencies for analysis and

applications. New Jersey: Merrill Prentice Hall.

Gultig, J., Lubisi, C., Parker, B. & Wedekind, V. (eds.). 1998. Understanding outcomes-

based education: Teaching and assessment in South Africa: Reader. New York: Oxford

University Press.


Huysamen, G.K. 2001. Methodology for the social and behavioural sciences. Cape

Town: Oxford University Press.

Jacobs, M., Gawe, N. & Vakalisa, N. 2000. Teaching-Learning Dynamics: A

participative approach for OBE. 2nd edition. Johannesburg: Heinemann.

Koch, T. 1994. Establishing rigour in qualitative research: The decision trail. South

Australia: Unpublished Royal Adelaide Hospital. 26/09.

Kramer, D. 1999. O.B.E. Teaching Toolbox. Florida Hills: Vivlia Publishers &

Booksellers.

Le Grange, L.L. & Reddy, C. 1998. Continuous Assessment: An Introduction and

Guidelines to Implementation. Cape Town: Juta.

Lubisi, C., Wedekind, V., Parker, B. & Gultig, J. (eds.). 1997. Understanding outcomes-

based education: Knowledge, curriculum & assessment in South Africa. Cape Town:

CTP Book Printers (Pty) Ltd.

Maree, J.G. & Fraser, W.J. (eds.). 2004. Outcomes-based assessment. Sandown:

Heinemann.

Mason, J. 2002. Qualitative Researching. 2nd edition. London: Sage Publications.

McBurney, D.H. 2001. Research Methods. 5th edition. Australia: Wadsworth/Thomson

Learning.

McDonald, M.E. 2002. Systematic assessment of learning outcomes: Developing

multiple-choice exams. Mississauga, Canada: Jones and Bartlett Publishers.


Miles, M.B. & Huberman, A.M. 1994. Qualitative data analysis. Thousand Oaks, CA:

Sage Publications Inc.

Mersham, G. & Skinner, C. 2001. New insights into business and organisational

communication. Sandown: Heinemann.

Neuman, W.L. 2000. Social Research Methods: Qualitative and Quantitative

approaches. 4th edition. Boston: Allyn and Bacon.

Neuman, W.L. 2003. Social Research Methods: Qualitative and Quantitative

approaches. 5th edition. Boston: Allyn and Bacon.

Nitko, A.J. 1995. Curriculum-based continuous assessment: A framework for concepts,

procedures and policy. Assessment in Education, 2 (3):321.

Nitko, A.J. 2001. Educational assessment of learners. 3rd edition. New Jersey: Merrill

Prentice Hall.

Richards, J.C. & Lockhart, C. 1994. Reflective Teaching in Second Language

Classrooms. New York: Cambridge University Press.

University of Johannesburg: Academic Development Unit. 2004. Report on Continuous

Assessment Practices at the TWR. Johannesburg: University of Johannesburg.

Welman, J.C. & Kruger, S.J. 2001. Research Methodology for the Business and

Administrative Sciences. 2nd edition. Cape Town: Oxford University Press.


TRANSCRIPTION OF INTERVIEWS WITH EDUCATORS

The four educators interviewed taught Communication Skills through CASS at the

University of Johannesburg.

The educators will be referred to as follows:

Lecturer 1

Lecturer 2

Lecturer 3

Lecturer 4

Key abbreviations

Q = Question

A = Answer

P = Probe

CASS = Continuous Assessment

Lecturer 1

Q: Have you got any training on continuous assessment?

A: No….. no.

Q: Do you feel that you are in a position to implement continuous assessment properly?

A: I think I am, provided that I’m given the….. the….. the proper manual and instructions

on what is expected.

Q: Have you been given….. er….. the proper manuals to implement it properly?


A: No, I haven’t, but I have been able to discuss it with other lecturers.

P: And that has been helpful?

A: That has been helpful.

Q: What problems do you encounter in implementing continuous assessment?

A: I think the problems there are that you have different aspects of what you’re teaching

them, taking different amounts of time. So, it’s very di….. some things might only be

taught for a day, and then you really need to assess them and then some things you might

teach for a week and then you need to assess them, and it’s allocating the correct amount

of time to the amount of teaching time that you spend. So, it’s examination time to

teaching time; that’s the thing.

Q: Do you feel that continuous assessment helps students to learn better?

A: I do. I think that it makes it more real for them. I think they get a much more

immediate result for their efforts, and that immediate feedback is good for them.

Q: What would you say is the main difference between continuous assessment and

writing final exams?

A: I think it’s that continual feedback. As a lecturer you are able to get back to them,

identify weak areas and get back to them and rectify them. Whereas in the final exams, I

think on the whole, what students tend to do is write their….. their final exams and they

forget it completely….. it’s….. there is no reality….. er….. link in reality between recall

and the mark they got, which I think continuous assessment does do. The feedback is

better.

Q: Do you think students understand what is expected of them in continuous assessment?

A: Ha….. ja….. um….. I don’t think they really do.

P: In your own experience.

A: I think that they find themselves in the river that’s flowing in the same direction and

just go along with it. I don’t think that they have any….. any real identity as….. um….. a

continuous assessment student as opposed to the other students. I think they just go along

with it, and it’s up to the lecturer to provide the feedback or receive the feedback.

P: And the students, you feel, are learning from the feedback?

A: Absolutely; and I think as a lecturer you get feedback too, as to whether you are being

successful or not.


P: But, on the whole, they have been cooperating in….. er….. doing what you ask them

to do or what you expect them to do?

A: Yes….. yes, they do. They do understand the importance of having to write all the

tests or write the papers or assignments or whatever; they do understand that, ja.

Q: What do you do to ensure that students know exactly what you expect of them in

continuous assessment?

A: It’s a case of talking to the students, discussion with them. But not knowing exactly

what is demanded by the department who….. er….. it’s difficult; it’s something that I

have to assess for myself from time to time, say to the students that’s what I’m expecting

from them. I think that there is probably a lack of clarity in that area, on occasion.

Q: How big are your classes?

A: Er….. round about 45….. could be 50….. but 40 to 50, ja.

Q: Do you feel that the numbers are conducive to continuous assessment?

A: Not really, because I think that continuous assessment I think you need to have more

individual contact and in a big class, it’s difficult to get the feel from the students, from

all the students individually.

P: In your opinion, if you had a choice….. er….. what is the number that would be…..

er….. conducive to learning through continuous assessment; the student numbers per

class?

A: He….. ah….. I had a class as small as five and that was really nice, but it wasn’t

continuous assessment. I would imagine that in….. in continuous assessment, you should

have in the region of 15 maybe possibly do 20, but I would say 15 is probably the

number.

Q: Is it possible to always provide timeous feedback?

A: No, it’s not, due to the pressure of other classes that one’s taking and after….. so you

have to deal with other classes and what other classes’ requirements are, and so you don’t

actually feedback as….. er….. as quickly as one ought to.

P: Do you also feel that the numbers in a class also impact on how long it takes to

feedback to students, as in tests and assignments?


A: Ver….. ver….. very much so, because the feedback becomes more general in nature

than targeted to the student himself, whereas in a smaller class, then you can be more

specific.

Q: Since the students do not write final exams, does that impact positively or negatively

on their attendance or motivation?

A: I think the attends….. er….. their attendance is more positive. They know that it’s, it

means something to be at every class. I found the assessment better in continual

assessment.

Q: Do your learners cope well with this type of assessment?

A: Oh yeah, I think they like it because they avoid the pressure of exams.

Q: What is your feeling about the move from final exams to continuous assessment? Has

it been positive or negative?

A: I think that….. that there are two aspects to that because I think that what you really

need is continuing assessment, continual assessment to give the feedback and the learning

structure and targeted teaching but, I also do believe that an….. an exam ensures….. an

exam at the end of the term ensures retention for more than perhaps a week or two weeks.

So they have to learn it and retain it and then even if it's a regurgitation after six months,

at least they’ve got to retain it for six months of which it’s likely to stay longer.

P: So, would you say final exams promote regurgitation more than knowledge of a

particular….. or mastering an outcome

A: I think it might lend itself to regurgitation, but I think that actually, with proper

understanding, one doesn’t have to regargitate. If the students understand the topic,

they’ll be able to answer the final exam and will be able….. they will be able to answer it

up to two years; whereas rote learning, which can happen in continual assessment, can

lend itself to regurgitation.

Q: What are your learners’ attitudes towards groupwork activities?

A: It’s actually difficult. I think part of the problem there is that the learners like to

choose the other members of their group and make sure they are in with the clever guy,

and they don’t want to be with the guy who is not so clever. So in part they like it that

they can get in with the clever guy, but it depends on who he is, but their attitude…..I

think, they co-operate quite well as a group. But they chase the clever guy.


P: So, in view of your numbers (about 50 per class), do you feel that group work is…..

er….. viable, it’s easy to conduct or is it quite difficult in that class of 50 students?

A: I find it quite difficult too, because again the groups need to be monitored properly

and, to get around the groups is more difficult because they are having discussions and

one needs to virtually fit into and participate in the discussions as they happen and when

there are too many groups it becomes very difficult.

Q: Thank you very much.

A: Pleasure.

Lecturer 2

Q: Have you got any training on continuous assessment?

A: None at all. My only training would be my experience. With….. we are doing it with

engineers.

P: But then that experience, you think, helps you to implement continuous assessment

properly.

A: No, I don’t think so; I think I learnt a lot from….. I’m doing it with you and I learnt a

lot about what is required now. But if I had to organise it, I wouldn't have known quite

where to start.

Q: What problems do you encounter in implementing continuous assessment?

A: Um….. I don’t recall having any problems; I think continuous assessment is a

wonderful way to…. I liked working with r….. um….. ja….. what sorts of problems were

you thinking of?

Q: Um….. any problems with, for example, you give, you know, a couple of assessments

constantly; you get to give feedback and maybe with the classes that you had, and having

to help students who are struggling….. er….. giving them special tests and make-up tests;

things like that.

A: Ja…..ja…..but it’s more work because you do have to mark more steadily through the

term, and you can’t rely on the exam at the end of the year; and in that sense there’s more

work involved. But, I actually like the idea the way you do discover which students are


struggling and when you’ve completed your work with the students who are coping, you

can then concentrate on the ones who are struggling.

Q: Do you feel, on the whole, that students learn better through continuous assessment?

A: Well, I suppose it depends on what’s going to be required of them when they leave the

technikon, but in terms of instilling a work ethic where the idea of that you work

consistently instead of at your tasks….. um….. I think that they do benefit. Ja, but if they

are gonna be in a situation where they are gonna have to work intensively and feverishly

at something….. a project for two weeks or something, perhaps the exam system is a

good training for that. Then you go flat out. Um….. but I think the exam system doesn’t

encourage this steady work through the term the way that continuous assessment does.

Q: So, in the same vein and breath, what would you say is the main difference between

continuous assessment and the traditional way of writing final exams?

A: Well, I think even though we try to structure the balance of the marks, that it’s…..

60% of the final mark is a term mark and 40% of the exam mark, there is still a tendency

amongst students to believe that they can always make the last ditch attempt during the

exam; whereas in continuous assessment the responsibility is squarely on the shoulders of

the students to work consistently.

Q: So, you would say exams promote, to a certain extent, regurgitation?

A: Ah, yes….. yes, and not….. this….. this because they can do a last minute spirited

study don’t necessarily retain their knowledge, the way they do when they are being

continuously assessed.

Q: Do you feel students understand what is expected of them in continuous assessment?

A: Well, I think that’s up to the lecturer. Um….. it must be made very clear to them that

every little bit of work that they do counts. Um….. it’s important that they do have to…..

they….. they have to attend first of all….. um….. so, it requires a commitment from

them. They have to be made to understand that.

Q: But in your experience have you found that they….. they understood exactly what

they were expected to do?

A: No, some of them, because the rest of the technikon is not in line with continuous

assessment….. um….. are still thinking the other way, along exam lines. Um….. it's not


difficult to understand….. shouldn’t be difficult to understand what continuous

assessment is about.

Q: What do you do to ensure that students know exactly what is expected of them in

continuous assessment?

A: Simply explain how it works; explain what’s involved. Explain the dangers that if they

do not attend, they do not get used to work throughout the term….. then their course is in

jeopardy. Their success is in jeopardy; simply explain it to them.

Q: And you’ve succeeded in ensuring that they understand what you expect of them?

A: I think so, yes.

Q: How big are your classes?

A: Um….. er….. I can’t remember now. I think they were around forty huge; not more

than forty; around forty or less, yes.

Q: Are the class numbers conducive to learning through continuous assessment?

A: Well, I’m sure the ideal number would be less. Um….. because it does entail knowing the

students well. Um….. it is manageable, it is manageable.

Q: In view of your numbers and other factors, is it possible to always provide timeous

feedback to students?

A: No, it’s not. That’s the trouble; the larger the numbers, the less possible it is to provide

timeous feedback and in fact I should have mentioned that earlier, that it’s one of the

problems of continuous assessment. The feedback needs to be almost instant.

P: So, you need lesser numbers?

A: You need, yes, or fewer classes, or you could have a big class, but not such a full

time-table. That could be another way of doing it, but the ideal would be lesser numbers.

Q: Ok, since the students do not write final exams, does that impact positively or

negatively on their attendance or motivation?

A: I think that it impacts positively. I think that the students….. um….. well, the more

mature students could see the advantage of working steadily through the term and then,

just when life could normally be getting more and more stressful and fraught, they are

released; they have done the work; they have reaped their rewards. Of course there are

still immature students who don’t see it that way. They….. they first play and then at the

last minute discover that they haven’t been consistent enough in the work and attendance.


P: So, on the whole, you would say that students are more involved in and enjoy learning

through continuous assessment?

A: Yes, and I think they….. there’s a sense of achievement when they realise that they

don’t need to write final exam now; they have done sufficiently well through their term’s

work, and as you release them and only the weaker ones remain behind….. um….. it

seems to be a very logical progression that the ones who are strong have left and the

weaker ones remain behind for you to help.

P: So, it’s not difficult for them to shift their paradigms from an exam mode to

continuous assessment.

A: Well….. um….. my only experience is with civil engineers and I think, on the whole,

they are intelligent students, and my experience is that they didn’t experience difficulty;

well, I may be wrong.

Q: Do your learners cope well with this type of assessment, continuous assessment?

A: I thought that civil engineers did cope. Yes, I thought, I am sure that the context is

important. I think that civil, from what I see, civil engineering department is a….. a….. a

caring department and things are well structured there….. um….. and the students are

carefully selected, and so, that supports in their participation in this continuous

assessment programme.

P: You would say different groups of students would take differently to continuous

assessment?

A: Yes, I think….. yes….. probably. I’ve only had experience with the civils but, I guess

other groups may not cope as well, and also other departments may not support them as

well, in explaining how it works.

Q: What is your feeling about the move from final exams to continuous assessment? Has

it been positive or negative?

A: Well, my feeling is positive. I like continuous assessment. I think the exam situation

can be artificial.

P: So, the advantages would be those you’ve mentioned that the students constantly learn

and constantly feel a sense of achievement?

A: Yes. And not only that, because the assessment is continuous, they are constantly

reinforcing their learning. They are not having a one off blast at the end just for the exam,


and that kind of study is quickly forgotten. They are constantly spiraling back over their

work and reinforcing their learning.

Q: What are your learners’ attitudes towards group work activities?

A: Um….. there were differences….. um….. some….. um….. I must say that the

majority of them seem to enjoy groupwork. But, because of the nature of

………..occasionally I have some white students who are strong in English anyway, and

they seem to think that, well, this is a bit of a waste of time. They were a little bit

patronising about the group work.

P: So, they are not all so enthusiastic?

A: Not all of them, no.

P: So, on the whole, stronger students, students who have got a strong language

background, tend to be somewhat uninterested.

A: Look, that was probably my fault; I should perhaps have challenged those students

further, but I felt that they would be very useful spread out amongst the groups…..

um….. ja…..it’s an attitudinal thing. I think you will always get….. it hasn’t much to do

with their strength. It has just to do with their good fortune and being mother tongue

English speakers.

P: So, in terms of the majority you would say that there was a positive response and

enthusiastic…

A: To group work?

P: To group work.

A: Yes. Now I’m talking about oral group work. On written group work, I don’t think

that they are that keen, because they feel that the assessment….. they can be prejudiced

by a weak member of the group and I think they dislike that. Ja.

Q: Thank you very much.

A: OK, pleasure.

Q: Thank you.

Lecturer 3

Q: Have you got any training on continuous assessment?

A: No, I don’t. Um….. educational training that I’ve had is 15 to 20 years old, probably

older, and um….. continuous assessment was then not one of the criteria used.

P: How do you implement it if you haven’t gotten any formal training on continuous

assessment?

A: Um….. formal training as in a recognised diploma or degree, no. But….. um….. I

have had some staff training as a staff member of two various other….. um….. tertiary

institutions, and a number of workshops that I actually attended. This is the only training

I have.

Q: Do you feel that they have provided you with….. er….. necessary knowledge on how

to implement this properly?

A: No, I do not believe that….. um….. my knowledge is adequate at present. Um….. as I

said, it’s mainly based on experience and….. um….. no, definitely not.

Q: What problems do you encounter in implementing continuous assessment?

A: Well, one especially with class sizes, where continuous assessment….. er….. um…..

the very concept infers that one would continuously need to revise and rework

assignments, pieces of work; give….. um….. because of the size of our class, that is the

first thing. And then secondly, the amount of contact that we have with our students.

P: So you think you can do with….. er….. more time with the students?

A: Definitely

Q: Do you feel, on the whole, that continuous assessment helps students to learn better?

A: My personal opinion in this matter is that our society is very much still an exam or test

oriented society, and that many of the students feel that unless they sit and write a big

exam, they haven’t really sat and they don’t really need to work as hard.

P: So, the students don’t work as they are expected to?

A: No.

Q: What would you say is the main difference between learning through continuous

assessment and the traditional system of writing final exams?

A: The main….. er….. um….. difference in my opinion is that students with continuous

assessment get to master smaller pieces of work while with the writing of….. um….. final

exams much, much larger pieces of work have to be dealt with, and as an educator, it

obviously is easier to be able to go back and look at or rethink, rework sections,

smaller sections which haven’t been possibly adequately understood or taught even.

P: So, basically would you say continuous assessment helps or is….. it’s….. promotes

rote learning or is it the traditional final exams which do that?

A: Um….. there is an amount of rote learning to many people….. um….. when they are

writing final exams, and I think we all know there are difficulties accompanying that,

while with continuous assessment one does try to get away from the old-fashioned form of

rote learning.

Q: Do you think students understand what you expect of them in continuous assessment,

or what is generally expected of them in continuous assessment?

A: I think the concept of continuous assessment is starting to gain favour, because…..

um….. many of our students coming fresh from high school are familiar with the process

but as I’ve said there is….. are still some of the difficulties.

Q: What do you do to ensure that students know exactly what you expect of them in

continuous assessment?

A: The first stage is to lay out in terms of the course; it’s how often and where and what

kinds of assessment will take place. A student is actually given an overview right at the

beginning of the course. Then, with the actual setting of the test, students….. um…..

lecturers….. we try to set out as clearly as is possible to show a student what we are

actually….. um….. evaluating or assessing.

P: Do you think they understand what is expected of them after explaining and doing all

the necessaries?

A: Um….. I think for me, that would be yes, but with some reservations, because some

students still do not understand even after you’ve explained; but then you might have…..

um….. you might have a percentage who do understand; so it’s…..

P: But the majority, does the majority understand?

A: Oh yes.

Q: Ok, how big are your classes?

A: Um….. the continuous assessment courses that I have are over 60….. er….. the one is

the Somatology group and they are just over 50 while the Radiographers are just over 70.

Q: Do you feel that the class numbers are conducive to continuous assessment?

A: Definitely not….. because….. er….. if one were to try and assess as often as you

ought to with continuous assessment, then my groups are far too large. I am then

finding….. it takes longer for me to….. um….. identify students who have had difficulty

understanding that section of work.

P: What numbers do you have in mind….. er….. in terms of….. er….. class numbers;

which do you think would be conducive to learning through continuous assessment?

A: I would think about 30 to 35.

Q: Not more than 35?

A: Definitely not. That would be the maximum.

Q: Is it always possible to provide timeous feedback to students?

A: No. Despite….. um….. how diligent or dedicated or even disciplined we try to be, the

fact of the matter is that we are human beings and that there is just so many hours in the

day. So, no, one cannot get back to these continuous assessment classes.

P: Mainly because of the work load?

A: Of course yes.

P: Would you say the numbers also impact…?

A: Definitely….. absolutely.

Q: Since the students do not write final exams, do you feel that impacts positively or

negatively on their attendance or motivation?

A: Well, I must admit that….. um….. because I am probably ….. er….. labeled as being

of the old school, I build in class attendance into my continuous assessment….. um…..

calculations.

P: So they attend because they have to attend?

A: They attend because they have to attend. They know that the gun’s actually pointed at

their heads, that poor attendance will result in poorer scoring. Um….. for every

assignment, if you are not actually in the classroom, I won’t accept that assignment.

P: But then, do you feel that even though that they are supposed to attend as they have

to….. um….. they have the enthusiasm or the motivation to attend?

A: No, they are not. They are not as motivated as students who write….. um….. the

normal semester or final year exams, year-end exams.

Q: Do your learners cope well with this type of assessment, that is, continuous

assessment?

A: At present….. um….. because I teach both modes - continuous assessment and the

exam type ….. um….. I find that continuous assessment takes far more out of the lecturer

than out of the student and that should not be the case. It ought to be….. emphasis ought

to be more on the student ….. and producing at a regular pace. As….. er….. I said

previously, our mindset is wrong, but we’re still functioning because we know that the

final exams……

P: So it’s lecturers who are working harder and the students cope because there is not…..

their work is not as much as that of the lecturer.

A: Yes.

Q: What is your feeling about the move from final exams to continuous assessment? Do

you feel it has been positive or negative?

A: Um….. I think it is a matter of….. again as I’d like to state that it’s a mindset where

our students have been brought through the school system on….. um….. the mindset of

being evaluated continuously and then being found competent or non-competent, not yet

competent. This kind of thinking would then shift, but….. um….. it’s kind of really very

difficult to change horses in the extreme, that where you have been brought up with the

exam system in mind to suddenly be cast into what you think is an easy or soft option…..

um….. students who have been brought through the school system on continuous

assessment, I hope are going to do better.

P: But, on the whole, do you feel continuous assessment is a soft option for students?

A: At the present moment it is; it is a softer option in that more students are still trying to

get away with trying to do as little as possible and still get through.

P: So, they are getting away with doing very little?

A: As little as is possible.

Q: What are your learners’ attitudes towards group work activities?

A: Um….. quite positive if….. um….. there is sufficient control. Again, where your

groups are very….. where your classes are very big and you have many groups, then…..

um….. control seems to slip. So, in a smaller environment, where students know that

there is more control, group work is more successful.

P: So, would you say, as it stands now, group work activities are difficult to control

because of the large numbers?

A: Yes, very much so.

Q: Thank you very much.

A: My pleasure.

Lecturer 4

Q: Have you got any training on CASS?

A: No, not really. I attended one workshop which was more of an introduction to CASS.

Q: Do you feel that you are in a position to implement CASS properly?

A: Well, I know some basics, but….. er….. I don’t know if I can do it in a desired

manner.

Q: What problems do you encounter in the implementation of CASS?

A: Er….. I think it’s large numbers. I….. having to mark the test scripts of all the

students, it’s a problem; it’s really difficult, ja.

Q: Do you feel that CASS helps students to learn better?

A: Yes, I think it encourages students to work hard. They don’t have to memorise that

much. It discourages rote learning.

Q: What would you say is the main difference between learning through CASS and

writing final exams?

A: In CASS, students learn consistently throughout the year and in the writing of exams,

they study hard towards….. er….. the end of the year. Students mostly memorise.

Q: Do you think students understand what is expected of them in CASS?

A: I don’t think they really understand. Some students think it is just an easy way to pass,

an easy option.

Q: What do you do to ensure that students know exactly what you expect of them in

CASS?

A: I explain all the assessments and tasks at the beginning, and stress the importance of

attendance.

Q: How big are your classes?

A: Hm….. hey….. some of my classes have many students. Er….. but my CASS class

has around 55 students.

Q: Are the class numbers conducive to continuous assessment?

A: No, they are just too big. Too big for CASS.

Q: Is it possible to always provide timeous feedback?

A: The classes are too big. It’s difficult to give feedback on time; otherwise you won’t

have time to rest.

Q: Since the students do not write final exams, does that impact positively or negatively

on their attendance or motivation?

A: Well, students generally attend. But some….. hm….. some don’t take the work as

seriously as they should, because there is no exam. They have an attitude; well, it’s an

attitude problem. But I am talking about very few students; I mean very few. But ja,

students generally attend.

Q: Do your learners cope well with this type of assessment?

A: I think….. er….. generally, generally….. ja….. they do. But as I’ve said, they don’t

quite understand what CASS is really all about.

Q: What is your feeling about the move from final exams to CASS? Has it been positive

or negative?

A: Hm….. I think it’s good….. ja….. it’s good, because students study hard throughout

the year instead of studying only during exam time. But, their departments have to make

them understand this CASS.

Q: What are your learners’ attitudes towards group work activities?

A: Generally they like it, but some of the strong students have an attitude. They think it’s

a waste of time, but generally they like it.

P: Do you enjoy doing group work with students?

A: Yes and no. Yes, because they learn a lot from their peers and play an active role in

their learning; and no, because the class numbers are too high. There are just too many

groups and they are difficult to control.

P: What number do you think is appropriate for a CASS class?

A: Er….. hey….. I am not sure, but I think about 30….. ja….. 30 is good and can be

controllable.

Q: Thank you very much.

LEARNER FEEDBACK FORM

Date:

Course:

Subject name: Communication Skills

Year of study: 2004

Tick ONE box for each of the following statements

Statement | Always | Sometimes | Never

1. The lecturer clearly defines the outcomes for each lecture.

2. Different assessment methods are used by the lecturer (i.e. tests, assignments, class-work, group work).

3. The lecturer invites us to actively participate during lectures.

4. The lecturer involves students in assessment (i.e. uses peer assessment and/or self assessment).

5. The lecturer gives us feedback on tests and assignments within a week.

6. I learn a lot from my lectures.

7. I learn a lot from feedback provided by my lecturer.

8. I clearly understand the importance of my lecturer’s assessment.

9. I enjoy learning through continuous assessment.

10. The lecturer repeats aspects of the lecture that learners did not understand.

11. The lecturer helps learners who experience difficulties/ problems in the subject.

12. The pace of lectures is appropriate to my needs.

13. Learner guides provide clear outcomes for each learning unit.

14. Study guides clearly specify the assessment methods (e.g. tests, assignments, group-work, class-work, projects) to be used.

15. The lecturer is available during consultation times.

16. The assessment given by your lecturer is:

(Tick only ONE box)

1. Too much   2. Balanced   3. Too little

17. Learning through CASS is:

(Tick only ONE box)

1. Difficult   2. Challenging   3. Easy

18. Tick ONE of the following:
The lecturer always teaches and barely assesses (i.e. assesses very little)
The lecturer always assesses and barely teaches (i.e. teaches very little)
The lecturer balances both teaching and assessment.

19. In EACH of the following statements, indicate the number of times in accordance with the statement. (Tick the number of times for EACH of the statements below.)

Statement | 0 | 1 | 2 | 3 | 4 | More

I visited my lecturer --- times during the year

I visited the library --- times during the year

I had group discussions with other students --- times during the year

20. Give any positive and/or negative comments about your lecturer’s method of assessment.

21. Give any positive and/or negative comments about learning Communication Skills through continuous assessment.

INTERVIEWS WITH LECTURERS

Questions

1. Have you received any training on CASS?

2. Do you feel that you are in a position to implement CASS properly?

3. What problems do you encounter in the implementation of CASS?

4. Do you feel that CASS helps students to learn better?

5. What would you say is the main difference between learning through CASS and

writing final exams?

6. Do you think students understand what is expected of them in CASS?

7. What do you do to ensure that students know exactly what you expect of them in

CASS?

8. How big are your classes?

9. Are the class numbers conducive to CASS?

10. Is it possible to always provide timeous feedback?

11. Since the students do not write final exams, does that impact positively or negatively

on their attendance/ motivation?

12. Do your learners cope well with this type of assessment?

13. What is your feeling about the move from final exams to CASS? Has it been positive

or negative?

14. What are your learners’ attitudes towards group-work activities?
