
IJODE | Volume 1, Issue 1, January 2015


GATHERING, ANALYZING, AND IMPLEMENTING STUDENT FEEDBACK TO ONLINE COURSES:

IS THE QUALITY MATTERS RUBRIC THE ANSWER?

Lucian Dinu (1), Philip J. Auter (2), Phillip Arceneaux (3)

1. Lucian Dinu, Ph.D. is an endowed Associate Professor of Communication at the University of Louisiana at Lafayette in Lafayette | [email protected] | +1.337.366.0266

2. Philip J. Auter, Ph.D. is an endowed Professor of Communication at the University of Louisiana at Lafayette in Lafayette | [email protected] | +1.337.482.6112

3. Phillip Arceneaux is a graduate student in the Communication program at the University of Louisiana at Lafayette | [email protected] | +1.337.482.9008

Abstract

This paper proposes a new method of collecting and utilizing quality feedback from students regarding the learning experience of the electronic classroom. The study begins by reviewing how existing methods for data gathering, also known as student evaluations of instruction (SEI), have been well established and tested in the traditional class setting but have not been adequately adapted to the online class setting. The Quality Matters (QM) rubric is suggested as a supplementary tool in the information collection process for online classes. Data were collected by survey from both students and professors of the same institution. The results note strengths and weaknesses of each approach and conclude that the most efficient system would be to use the QM rubric as a supplement to the SEI.

Keywords:

Online Courses, Distance Learning, Student Feedback, Evaluations of Instruction


Student feedback is an essential part of teaching (Huxham et al., 2008). It allows students to select courses and instructors, it influences tenure, promotion, and merit raises for instructors, and, perhaps most importantly, it allows instructors to improve the content and delivery of courses.

And yet, obtaining, analyzing, and implementing feedback in online classes presents several challenges. First, current feedback forms don’t focus on the specifics of online instruction. Second, the data gathered using current forms may not provide sufficient information to online instructors. And third, it is not clear how, and whether, online instructors and distance learning program administrators are using student feedback to improve online course delivery and program development. This paper focuses on the three stages of improving online course instruction: the challenges of obtaining student feedback; utilizing feedback to determine better ways of redesigning courses; and “closing the loop” between assessment and course improvement. QM-certified instructors and course designers will be interviewed in order to find out how they typically obtain, analyze, and utilize student feedback in their course development.

Additionally, the QM rubric will be modified to be utilized as a direct assessment tool in a number of online classes at the University of Louisiana at Lafayette. Responses to the QM survey will be compared to traditional student evaluation of instruction (SEI) scores in order to see if and how they correlate. Finally, faculty and administrators will be interviewed to see how they feel about the results and the best ways to take advantage of them.


Literature Review

The process of formal education was first established at the advent of complex writing systems, namely Egyptian hieroglyphics (Fischer, 2004). Modern technology has brought great advances to the 21st century’s educational system. Fewer classrooms have chalkboards, while more and more classrooms have lecterns equipped with computers, projectors, visual scanners, and audio systems. One of the most notable applications of modern technology is that some classes do not even have to have a classroom. Distance learning offered through online courses is becoming a popular option among college students. Whereas traditional courses have been well established through trial and evaluation over time, online courses are only a few decades old, and evaluative tools for these classes have received little intensive academic scrutiny. In this study an alternative option will be put forth for the effective evaluation of teaching in online courses.

Traditional courses consist of the face-to-face transfer of knowledge and instruction from teacher to student. Over the years, the means of assessing the effectiveness of the teacher have varied, but one has remained constant over time and is typically weighted most heavily: the data gathered from students’ evaluations of the course (Erdoğan et al., 2008; Selçuk et al., 2011). These data have been the basis for administrative evaluations of teaching performance as a determinant in the awarding of tenure, title promotions, and pay increases (Loveland, 2007; Palmer, 2011). Previous research has shown that numerous variables affect students’ overall satisfaction with a course. Type of class is very important, as it must reflect the intellectual ability of the students; types of courses include lecture, seminar, lab, and independent study (DeBerg and Wilson, 1990; Langbein, 1994). Class size also greatly affects student evaluation responses. Lectures can range up to multiple hundreds of people, while seminars can be as small as five people, and independent studies typically involve a one-on-one relationship between teacher and student. This wide range in class size leads to different social environments that greatly vary a student’s interactive role (Holtfreter, 1991). The time of day a class is offered also significantly skews student perceptions: average college classes can range anywhere from early morning courses, to afternoon courses, to late night courses (DeBerg and Wilson, 1990). Lastly, the day of the week a class is offered tends to vary student perceptions, as the day typically determines the frequency with which a student must attend that class per week (Husbands and Frosh, 1993).

The most widely agreed upon method of evaluating student satisfaction with a course is data collection by means of a questionnaire. Student satisfaction is defined as “the degree to which students feel satisfied with workload of course, level of course, teaching activities, and instructor’s teaching effectiveness” (Richardson, 2005; Selçuk et al., 2011). Questionnaires designed specifically to gather satisfaction data about professors are known as Student Evaluations of Teaching (Loveland and Loveland, 2003; Selçuk et al., 2011; Palmer, 2011) or Student Evaluations of Instruction (SEI). Approximately 2,000 examinations of SEI methods were conducted over the 20th century, all supporting relatively high marks of both reliability and validity (Wilson, 1998). Marsh (1982) developed one of the most common SEIs in academia, the Students’ Evaluations of Educational Quality (SEEQ), a thirty-five-item questionnaire with each item measured on a five-point Likert scale. Through extensive testing, SEEQ has been shown to produce minimal variance across teaching demographics at all levels of formal education (Marsh and Hocevar, 1991). Questionnaires are administered in the class setting on the last day of the semester in which the group meets, or as near to the conclusion of the semester as the course schedule allows. The purpose of administering the questionnaire at that time is so that the feedback can evaluate the entirety of the course rather than just a portion. The ability to capture data in class rather than on a student’s own time boosts response rates, which in turn reduces sampling error as much as possible (Richardson, 2005). The questionnaires are kept relatively short, making the time required for data collection rather minimal. Unfortunately, the sheer repetitive nature of filling out near-identical questionnaires for multiple classes each semester has become so commonplace that students tend not to address the questions seriously, potentially leading to faulty data (Abrami et al., 1996). Another aspect that affects the quality of data is the apparent application of the information. When students see institutional changes representative of their critiques, they become more involved in actively working to better their classes. When administrations fail to act on the evaluations provided through the questionnaires, the student body tends to lose interest and the quality of the data gathered suffers (Spencer and Schmelkin, 2002). As with most concerns for gathering effective data, evaluative questionnaires are provided in a multitude of ways to account for students with both physical and mental disabilities, in compliance with such legislation as the Americans with Disabilities Act and the United Kingdom’s Special Educational Needs and Disability Act (Richardson, 2005).

The advent of applying modern technology to the educational process has not only reinforced the traditional class setting but has also given birth to distance learning. The first online course was offered in 1981 by the Western Behavioral Sciences Institute in La Jolla, California (Feenberg, 1999). By 1997, 44% of academic institutions were offering distance learning courses (NCES, 1999). As of 2006, over three million American students were taking at least one online course (Allen and Seaman, 2006). More recent results from the U.S. Department of Education’s National Center for Education Statistics reported that in the fall of 2012 over twenty-one million Americans were enrolled in at least one online course (NCES, 2014), with over 75% of American universities offering online programs (Parker et al., 2011; Berk, 2013). Whereas the role and responsibilities of instructors are well established in the traditional class setting, the roles and responsibilities of teachers in the electronic class setting are considerably different and less well established. As teachers work more one-on-one with students through online classes, their role consists of being more of an academic partner than a traditional institutional leader (Beldarrain, 2006). Through this role as an academic partner, teachers are needed to interact more directly with students and to assist them in the consumption and, most importantly, the assimilation of knowledge (Kearsley and Shneiderman, 1998; Rothman et al., 2011). Because of the extensive amount of technology used in online courses, when a student contacts the teacher seeking a resolution to technical issues, the teacher must respond more promptly than in a traditional setting in order to facilitate ease of access to the material and assignments (Bangert, 2004). Unfortunately, very few teachers receive formal training in teaching online courses; institutionally they are offered technological support and little more than that. One of the current methods for governing the online educational process is the Quality Matters Program (QM). QM provides a rubric designed to increase a teacher’s awareness of, and sensitivity to, learning objectives, assessment and measurement, instructional materials, course activities and learner interaction, and course technology (Quality Matters, 2011). With an increasing number of online students, teachers must be trained to exhibit skills and behaviors appropriate to the online class setting rather than just the traditional class setting (Loveland, 2007).

Whereas by the dawn of the 21st century approximately 2,000 studies had been conducted on the effectiveness of the SEI, by 2007 Loveland and Loveland’s 2003 article, “Student Evaluations Of Online Classes Versus On-Campus Classes,” was the only published academic article that directly analyzed and questioned the effectiveness of SETs for evaluating online courses (Loveland, 2007). In that article they were the first to hold that traditional evaluation tools, such as the SEI, were not effectively designed to collect pertinent data in online courses (Loveland and Loveland, 2003). The majority of evaluations used in online courses to date have either been written by the course instructors themselves or have simply not been administered by the academic institution at all (Compora, 2003; Rothman et al., 2011). Overall, there is simply a general lack of research and testing on functional evaluative tools for the online class setting. The QM Program offers teachers a structured rubric for designing effective syllabi and content for their online courses. This article investigates whether the QM rubric would also serve as an optimal post-course evaluative tool. The specific research questions that drive this study are:

• RQ1: Are the current SEIs adequate for student feedback in online classes?

• RQ2: How is student feedback to online instruction gathered?

• RQ3: How is student feedback to online instruction analyzed?

• RQ4: How do faculty close the student feedback loop in online classes?

• RQ5: Can the QM rubric be adapted into a student feedback form for online classes?


Methods and Procedures

Two surveys were used in this study. Both were conducted at a large public university in the southern United States.

Student survey. The first survey was longitudinal and collected data from students enrolled in online classes during the Spring, Summer, and Fall 2013 semesters. The purpose of this survey was to assess the utility of the Quality Matters (QM) rubric as an instrument for gathering instruction feedback data. The QM rubric is typically used to assess classes, not as a survey tool; utilization in this way is a new approach, and thus its reliability in these circumstances is untested.

All students enrolled in six communication classes were asked to complete both a traditional Student Evaluation of Instruction (SEI) form and a form based on the QM rubric. Three of the classes were freshman-level, two were junior-level, and one was senior-level. Both the traditional SEI and the QM-based evaluations were used as regular student feedback for improving the course. Completing the forms was not mandatory, and the students were not offered any incentives to participate in the study. All responses were anonymous.

More specifically, the QM-based form used each standard in the QM rubric as a dichotomous (yes/no) variable. The students were asked to respond to each statement based on their experience in the course. For example, the first statement, corresponding to QM standard 1.1, was “Instructions make clear how to get started and where to find various course components.” Respondents had to indicate their agreement or disagreement with the statement by checking “yes” or “no,” respectively. Eight open-ended questions, one for each major QM standard, asked respondents if they had any suggestions for improvements.
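To make the structure of this instrument concrete, the sketch below shows one simple way responses to such a dichotomous form could be tallied into per-standard agreement rates. It is illustrative only and is not the scoring procedure used in the study; the standard labels and responses in the example are hypothetical placeholders.

# Illustrative sketch only: tallying yes/no answers to a QM-based feedback
# form into the proportion of "yes" responses per standard.
from collections import defaultdict

# Each completed form maps a QM standard label to a "yes"/"no" answer.
responses = [
    {"1.1 Getting-started instructions are clear": "yes",
     "3.1 Assessments measure the stated objectives": "no"},
    {"1.1 Getting-started instructions are clear": "yes",
     "3.1 Assessments measure the stated objectives": "yes"},
]

counts = defaultdict(lambda: {"yes": 0, "no": 0})
for form in responses:
    for standard, answer in form.items():
        counts[standard][answer] += 1

for standard, tally in sorted(counts.items()):
    total = tally["yes"] + tally["no"]
    print(f"{standard}: {tally['yes'] / total:.0%} yes (n = {total})")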

Faculty survey. The second survey was cross-sectional and collected data from QM-certified faculty at the same Southern US public university as above. Its purpose was to assess whether faculty see a QM-based feedback form as a useful and viable tool for student evaluation of instruction. In order to teach online, this university requires faculty to be Quality Matters (QM) certified. About 150 professors and instructors were certified in the Fall 2014 semester, when this part of the research was conducted. A total of N = 48 faculty completed the questionnaire during the two weeks allotted for data collection, for a response rate of 32%.

Specifically, at the beginning of the Fall 2014 semester an online questionnaire was created and posted online using the tools available in Google Forms. A mailing list of all professors and instructors certified to teach online was also created; the mailing list used contact information publicly available on the website of the University’s Distance Learning office. All QM-certified faculty – about 150 people at the time of the survey – were emailed and asked to participate in the study. As an incentive, a $20 gift card was raffled after the completion of data collection. The survey took about 15 minutes to complete.

Once again, the QM standards were used as items in the questionnaire. This time, respondents had to rate the importance of having student feedback on each of the QM standards. A five-point Likert-type scale, from 1 (not at all important) to 5 (extremely important), was used for each item. In addition, faculty respondents were asked several questions about traditional SEIs. For example, five-point Likert scales were used to assess faculty’s level of agreement with the importance, utility, value, accuracy, pertinence, and sufficiency of student feedback obtained with traditional SEIs. Yet another set of Likert-type scales asked faculty how likely they were to use student feedback to improve their courses. Finally, a few open-ended questions asked faculty respondents whether, and how, they gather student feedback in their online courses other than through the traditional SEIs, and what they perceived as the best and poorest features of traditional SEIs.


Results

The first four research questions were answered using data from the faculty survey. One third (n = 16) of the faculty who completed the survey were at the instructor rank, while 18.8% (n = 9) were assistant professors, 29.2% (n = 14) were associate professors, and 14.6% (n = 7) were full professors. Two respondents (4.2%) did not reveal their rank. A majority of the respondents had at least some experience with online teaching at the time of the research: 31.3% of the respondents had been teaching online for one to two years and 37.5% had been teaching online for three to five years. Moreover, 25% of the respondents indicated that they normally teach two courses during an average academic year and 20.6% indicated they normally teach one course in an academic year (see Figure 1). Nevertheless, some respondents (4.2%) indicated they had never taught an online course before.

RQ1 asked: Are the current SEIs adequate for student feedback in online classes?

In general, respondents indicated that the SEIs currently in use have both some advantages and some disadvantages. Large proportions of respondents agreed (38.3%) or strongly agreed (40.4%) that current SEIs are important; similarly, many agreed (45.7%) or strongly agreed (34.8%) that SEIs are valuable to them (see Figure 2).

On the other hand, respondents identified some specific disadvantages of current SEIs for evaluating online instruction. About one third (31.1%) of the respondents disagreed and 15.6% strongly disagreed that current SEIs provide sufficient student feedback in online classes (see Figure 3).

In addition, answers to open-ended questions revealed that respondents are worried about the adequacy of traditional SEIs for evaluating online courses. For example, respondents wrote:

Figure 1. Respondent experience with online courses.

Figure 2. Perceived importance and value of current SEIs.

Figure 3. Perceived sufficiency of traditional SEIs.


“Some of the questions are not relevant to online courses, such as ‘How many classes did you miss?’ and ‘The physical environment was conducive to learning.’”

And: “The choices for feedback are not particularly useful. The questions are generic, one-size-fits-all-courses-and-all-faculty, so students’ answers give little useful feedback about course improvement.”

And further: “In some cases, the questions don’t apply to the format of instruction.”

RQ2 asked: How is student feedback to online instruction gathered?

About half (47.92%) of the respondents indicated that they collect their own student feedback data, in addition to that collected through the institutional SEIs. Some of those who do so wrote that they use online class tools, such as discussion forums, or questionnaires they develop themselves. For example, one respondent wrote:

“I ask for their feedback in Moodle discussion forums, via e-mail, and ask them to submit anonymous feedback manually under my office door if they’re unwilling to provide feedback online.” Another respondent answered that he/she uses “a discussion forum or a reflection assignment to ask students to share their input on a few questions about the course design and delivery.”

Several respondents indicated that they take the “class temperature” by distributing a survey once or more during the semester. Finally, one respondent took this opportunity to express his/her discontent with the current SEIs:

“I offer students points for writing and submitting a ‘feedback’ paper at the end of the semester so I can learn about their experiences since the SEIs tend to get such low response rates and are not a representative sample (mostly students who are upset with the strictness of the course policies). Also, the question about teacher availability is worded incorrectly because it should be something like ‘The instructor was available for scheduled appointments during office hours’ because I have students that expect me to be available to them on Saturdays and they are marking that statement negatively for me rather than understanding that online teachers are available in person during office hours or online during work hours (as stated to students in my syllabus).”

RQ3 asked: How is student feedback to online instruction analyzed? And RQ4 asked: How do faculty close the student feedback loop in online classes?

Large percentages of respondents reported that they use the current SEIs in a variety of ways. For example, 68.9% of the respondents are very likely and 15.6% are likely to browse the data to see their scores; 42.2% are very likely and 22.2% are likely to use the numerical data to improve their courses. Even higher percentages reported they are likely (20%) or very likely (64.4%) to use students’ comments to improve their courses (see Figure 4).

To complete the picture, respondents also described how they use their own feedback mechanisms, other than the school SEI, to improve their courses. Thus, some respondents wrote that they “make changes in assignment design, design and number of instructional resources.” Some respondents reported that they use feedback they collect independently of the school SEI to make changes to their courses during the semester, as well as between semesters. For example, one respondent wrote: “During the semester I make adjustments to the delivery of the course, if enough students have issues expressed on the temperature checks. Then, between semesters I read the exit questionnaires and make changes to the course in areas where many students had trouble.” And another one wrote: “I try to weigh student comments to see where I can improve. For instance, if students indicate that they find particular directions unclear, I try to rewrite them.”

Last but not least, RQ5 asked: Can the QM rubric be adapted into a student feedback form for online classes?

Data from both the student survey and the faculty survey were used to answer this research question. Out of the 179 students enrolled in the classes used for data collection, 42 students completed the traditional SEIs and 19 students completed both the traditional and the QM-based feedback forms (completion rates of roughly 23% and 11%, respectively). While the rates of completion are small, they are reflective of the rates typical for evaluations of instruction conducted online. Moreover, even though the resulting sample was too small for meaningful quantitative analysis, several qualitative observations can be made.

First, the QM rubric responses provided data on the instruction and the course that are not available from traditional SEIs. For illustration, the current, traditional SEI has only one question pertaining to the university’s online learning management system, Moodle. That question simply asks what instructional resources were used in the course, with Moodle being one of the answer options. One generic question such as this may not be sufficient for a course conducted exclusively online, or even for a hybrid course. In comparison, the entire QM-based feedback form is focused on the specifics of using the means available in Moodle for sound instruction.

A second observation is that a QM-based feedback form does not necessarily eliminate the need for traditional SEIs, but complements them well. Traditional SEIs have been developed, tested, and refined for years, and they provide information that is not gathered through the QM-based feedback, such as students’ perceptions of the workload in each class, students’ perceptions of the quality of the class compared with other classes, and so on.

Finally, a third observation is that a QM-based feedback form is excellent for traditional courses as well – especially those with a learning management system (like Moodle) on the side. That happens because QM standards are rooted in sound pedagogical principles, which are independent of the medium used for delivering the course content – online or traditional.

Figure 4. Uses of data from traditional SEIs.

RQ5 was further explored with data from the faculty survey. To be more precise, the faculty survey asked respondents to rate the importance of using each standard in the QM rubric for student feedback. Five-point Likert-type scales, from 1 (unimportant) through 3 (neutral) to 5 (extremely important), were used for these questions. Overall, results indicate that faculty rated all items above the neutral point. In fact, only one of the QM standards was not statistically different from neutral: on average, faculty rated student feedback on the presence of institutional policies in online courses at m = 3.28 (st. dev. = 1.55), which is not significantly different from 3 (t = 1.66; df = 46; p = n.s.). The items rated next lowest in importance were prerequisite knowledge in the field (m = 3.47; st. dev. = 1.12) and student support services (m = 3.49; st. dev. = 1.17). For each of these two items a one-sample t-test indicated that the respective mean was statistically significantly higher than 3 (neutral). Specifically, for the item prerequisite knowledge in the field the one-sample t-test resulted in t = 2.88; df = 46; p < .05, and for student support services the one-sample t-test resulted in t = 2.87; df = 46; p < .05. These results are synthesized in Table 1.
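For readers who wish to check these figures, the one-sample t statistics can be reproduced from the reported summary statistics alone. The short Python sketch below is illustrative only and is not the authors’ analysis code; it assumes n = 47 (inferred from the reported df = 46) and uses the published mean and standard deviation for the prerequisite-knowledge item.

from math import sqrt
from scipy import stats

def one_sample_t_from_stats(mean, sd, n, mu0=3.0):
    """Return (t, df, two-tailed p) for H0: population mean equals mu0."""
    t = (mean - mu0) / (sd / sqrt(n))   # one-sample t statistic
    df = n - 1
    p = 2 * stats.t.sf(abs(t), df)      # two-tailed p-value from the t distribution
    return t, df, p

# Reported values for "prerequisite knowledge in the field": m = 3.47, st. dev. = 1.12
t, df, p = one_sample_t_from_stats(3.47, 1.12, 47)
print(f"t = {t:.2f}, df = {df}, p = {p:.3f}")   # approximately t = 2.88, df = 46, p < .05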

In addition, it is worth mentioning that some of the items received very high importance ratings from faculty respondents. Student feedback on clarity of instructions (m = 4.49; st. dev. = 1.04), clarity of grading policies (m = 4.47; st. dev. = .88), and course navigation (m = 4.45; st. dev. = .78) sat at the top of the importance ratings (see Table 2).

Overall, the observations inspired by the student survey and the statistics obtained from the faculty survey lead toward the conclusion that the answer to RQ5 – Can the QM rubric be adapted into a student feedback form for online classes? – is likely a positive one.

Table 1. Lowest levels of agreement on questions about using a QM-based rubric as SEI

How important is feedback from students on the following points, in your opinion?

                                        Mean    Std. Deviation
Institutional policies                  3.28    1.55
Prerequisite knowledge in the field     3.47*   1.12
Etiquette expectations                  3.49*   1.17

* Mean statistically significantly different from 3.00 (neutral value) at p < .05

Table 2. Highest levels of agreement on questions about using a QM-based rubric as SEI

How important is feedback from students on the following points, in your opinion?

                                        Mean    Std. Deviation
Clarity of instructions                 4.49    1.04
Clarity of grading policy               4.47    .88
Course navigation                       4.45    .78


Discussion

Student evaluation of instruction – and of instructors – has always been a controversial issue. Response rates have declined as universities move to online SEI administration. But prior in-class administration was rife with problems, including instructors who biased the results by staying in the room during the evaluation or by providing pizza and other, more grade-based, treats on or near evaluation day. Especially today, when response rates are low, responses tend to come from students with extreme (usually negative but occasionally positive) positions about the course. That said, faculty and administrators need some method of gauging the effectiveness of class instruction so that it can be improved upon for future courses. SEIs may be the best option in a difficult scenario.

Faculty generally are not thrilled with SEI questions, or with the way the information is gathered. Many have developed unique and personal ways to gather supplemental data to help them improve their courses. It appears from both the student responses and the faculty feedback that utilizing the QM rubric as a supplemental evaluation tool could provide additional, non-duplicative data that can aid an instructor in improving course instruction and outcomes.


Conclusion, Further Discussion, and Suggestions

The purpose of this study was to explore the benefits and drawbacks of using the Quality Matters (QM) rubric as an evaluative tool for online courses. Previous research in this field has focused in particular on the evaluative tools of traditional class settings and how they might be transferred for use in distance learning evaluation. In an effort to extend this literature, this study proposed using the QM rubric, a successful tool for helping teachers design online courses, as a means to evaluate the overall effectiveness of the class. The results supported the premise that traditional student evaluations of instruction (SEI) were not adequately designed to serve as the sole means of course evaluation. Feedback from both students and teachers favored applying the QM rubric as an evaluative tool. However, it was determined that neither SEIs nor the QM rubric on their own sufficiently evaluated the qualities of the course. Therefore, it was determined that using both traditional SEIs and the QM rubric would compensate for each other’s weaknesses and function as the most efficient method of collecting student feedback on the online learning experience. Teachers, administrators, and other scholars of the educational process may use the findings of this study to facilitate optimal methods of collecting student satisfaction data so that online courses, and their instructors, may be routinely improved as a means of offering the highest levels of education.


References

Abrami, P., d’Apollonia, S., & Rosenfield, S. (1996). The dimensionality of student ratings of instruction: What we know and what we do not. In J. C. Smart (Ed.), Higher education: Handbook of theory and research, volume 11. New York: Agathon Press.

Allen, I. E., & Seaman, J. (2006). Making the grade: Online education in the United States. Needham, MA: Babson Survey Research Group, The Sloan Consortium.

Bangert, A. W. (2004). The seven principles of good practice: A framework for evaluating online teaching. The Internet and Higher Education, 7(3), 217-232.

Beldarrain, Y. (2006). Distance education trends: Integrating new technologies to foster student interaction and collaboration. Distance Education, 27(2), 139-153.

Berk, R. (2013). Face-to-face versus online course evaluations: A “consumer’s guide” to seven strategies. Journal of Online Learning and Teaching, 9(1).

Compora, D. P. (2003). Current trends in distance education: An administrative model. Online Journal of Distance Learning Administration, 6(2).

DeBerg, C. L., & Wilson, J. R. (1990). An empirical investigation of the potential confounding variables in student evaluations of teaching. Journal of Accounting Education, 8(1), 37-63.

Erdoğan, M., Uşak, M., & Aydin, H. (2008). Investigating prospective teachers’ satisfaction with social services and facilities in Turkish universities. Journal of Baltic Science Education, 7(1), 17-26.

Feenberg, A. (1999). Distance learning: Promise or threat. Crosstalk, 7(1), 12-14.

Fischer, S. R. (2004). History of Writing. Reaktion Books.

Holtfreter, R.E. (1991). Student rating biases: Are faculty fears justified? The Woman CPA, Fall, 56-62.

Husbands, C. T., & Frosh, P. (1993). Students’ evaluation of teaching in higher education: Experiences from four European countries and some implications of the practice. Assessment and Evaluation in Higher Education, 18(2), 25-34.

Kearsley, G., & Shneiderman, B. (1998). Engagement theory: A framework for technology-based teaching and learning. Educational Technology, 38(5), 20-23.

Langbein, L.I. (1994). The validity of student evaluations of teaching. Political Science and Politics, September, 545-553.

Loveland, K. A. (2007). Student Evaluation of Teaching (SET) in web-based classes: Preliminary findings and a call for further research. Journal of Educators Online, 4(2).

Loveland, K., & Loveland, J. (2003). Student evaluations of online classes versus on-campus classes. Journal of Business & Economics Research (JBER), 1(4).

Marsh, H. W. (1982). SEEQ: A reliable, valid, and useful instrument for collecting students’ evaluations of university teaching. British Journal of Educational Psychology, 52(1), 77-95.

Marsh, H. W., & Hocevar, D. (1991). Multidimensional perspective on students’ evaluations of teaching effectiveness: The generality of factor structures across academic discipline, instructor level, and course level. Teaching and Teacher Education, 7, 9-18.

Quality Matters. (2011). Quality Matters rubric standards, 2011-2013 edition. Quality Matters Program.


Parker, K., Lenhart, A., & Moore, K. (2011). The digital revolution and higher education: College presidents, public differ on value of online learning. Pew Internet & American Life Project.

Palmer, S. (2011, January). An institutional study of the influence of ‘onlineness’ on student evaluation of teaching in a dual mode Australian university. In ASCILITE 2011: Changing demands, changing directions: Proceedings of the Australian Society for Computers in Learning in Tertiary Education Conference (pp. 963-973). University of Tasmania.

Richardson, J. T. (2005). Instruments for obtaining student feedback: a review of the literature. Assessment & Evaluation in Higher Education, 30(4), 387-415.

Rothman, T., Romeo, L., Brennan, M., & Mitchell, D. (2011). Criteria for assessing student satisfaction with online courses. International Journal for e-Learning Security, 1(1-2), 27-32.

Selçuk, G. S., Karabey, B., & Çalışkan, S. (2011). Predicting student satisfaction in physics courses. Buca Eğitim Fakültesi Dergisi, (28), 96-102.

Spencer, K. J., & Schmelkin, L. P. (2002). Student perspectives on teaching and its evaluation. Assessment and Evaluation in Higher Education, 27, 397-409.

U.S. Department of Education, National Center for Education Statistics. (1999). Distance education at postsecondary education institutions: 1997-98 (NCES 2000-013), by Laurie Lewis, Kyle Snow, Elizabeth Farris, and Douglas Levin; Bernie Greene, project officer. Washington, DC.

U.S. Department of Education, National Center for Education Statistics. (2014). Enrollment in distance education courses, by state: Fall 2012 (NCES 2014-023), by Scott Ginder and Christina Stearns; Richard Reeves, project officer. Washington, DC.

Wilson, R. (1998). New research casts doubt on value of student evaluations of professors. The Chronicle of Higher Education, 44(19), A12-A14.