Quality Assurance: A Descriptive Study of

Online Courses in Higher Education

Clarissa E. Rosas and Mary West

College of Mount Saint Joseph, USA

Online learning as a course delivery model has gained momentum in institutions of higher education. Faculty members are concerned that online courses maintain the same high standards as traditional face-to-face course delivery. This study investigated the quality of online courses as measured by a Peer Review Rubric and identified the types of web tools utilized by faculty. Of particular interest was whether the participating faculty perceived the Peer Review Rubric to be a useful tool for future online course design. Results of the study indicated that faculty participants tended to use a minimal number of web tools and that their online course designs lacked many of the objectives included in the Peer Review Rubric. The Peer Review Rubric used in this study showed promise as a viable tool for faculty participants, who found it useful for future online course revision and development.

Online learning in the United States has grown to approximately 3.5 million online learners, an increase of more than 21 percent since 2002 (Müller, 2008). In higher education, distance learning has expanded tremendously over the past decade. Walts (2003) indicated that “90 percent of public 2-year and 89 percent of public 4-year institutions … (and) 16 percent of private 2-year and 40 percent of private 4-year institutions” offer distance learning courses (p. 1). While the percentage of institutions offering distance learning is relatively high, the number of faculty who actually teach using online learning tools is small. Bradburn (2002) indicated that approximately 6 percent of faculty reported teaching distance education courses (p. iv). This low percentage may be the result of the “disproportionate investment of time and effort on the part of faculty members” required to develop and teach online courses in comparison to traditional face-to-face courses (Bradburn, 2002). While a relatively small percentage of faculty teach distance education courses, the growth of web-based learning is evident in many institutions of higher education (Bradburn, 2002).

The increased “interest in lifelong learning and budget constraints has created a significant incentive for universities to develop online programs” (Yeung, p. 2). Given the rapid increase of distance learning, many faculty in institutions of higher education (IHE) are concerned that online courses maintain the same high standards as traditional face-to-face courses (Ciavarelli, 2003; Bower, 2001). Many traditional face-to-face course evaluations are based on Chickering and Gamson’s (1987) Seven Principles for Good Practice in Undergraduate Education (Graham et al., 2001). These same standards for traditional face-to-face course evaluations are believed to be appropriate for online course delivery (Chickering & Ehrmann, 1996). Since meaningful learning is the goal of both traditional and online course delivery, a reasonable expectation exists that the same high standards of course development and delivery be maintained.

A review of the literature indicates that faculty training in educational technology and distance learning is one of the most critical issues in higher education (Ciavarelli, 2003; Kobulnicky et al., 2002; Lembke et al., 2001; Institute for Higher Education Policy, 2000). In response to the need for quality assurance in online learning, Maryland Online Inc. (2006) developed the Quality Matters: Inter-Institutional Quality Assurance in Online Learning Peer Course Review Rubric, which was supported by the U.S. Department of Education Fund for the Improvement of Postsecondary Education (Sener, 2005). This rubric was based on a significant number of empirical studies on the quality of course delivery, including the framework of Chickering and Gamson’s (1987) Seven Principles for Good Practice in Undergraduate Education. As a result, the Quality Matters Peer Course Review Rubric has gained state and national recognition as a viable quality indicator for online course design (Quality Matters, 2008).

Methodology

The overall purpose of this research study was to analyze the quality of online courses at a private institution of higher education in Ohio. This study adds to the body of knowledge in the field of online learning. The research questions for this study were as follows:

1. What is the design quality of the online courses developed by faculty?

2. What web tools are used most often by faculty?

3. Was the rubric used to analyze the online courses perceived as helpful to the faculty participants of the study?

The participants in this study consisted of 10 faculty members who developed 12 online courses at a private institution of higher education (IHE) in Ohio during the 2005-2007 academic years. The faculty members were from the following departments: Education, Humanities, Physical Therapy, Computer Science, Paralegal, and Sociology. All faculty who developed online courses during the 2005-2007 academic years were invited by e-mail to participate in this study. A total of 10 faculty members volunteered to allow access to their online courses, which represented 40% of the faculty who taught online coursework. The demographics of the participants were as follows: 90% were female; 70% held doctoral degrees and 30% held master’s degrees; 90% had between 4 and 6 years of experience developing and teaching online courses, and only one faculty member had more than 10 years of online experience. All faculty participants at the IHE utilized WebCT (2007), an online course management system.

The first data collection involved an inventory of the twelve online courses. Two peer reviewers with expertise in online course design and delivery examined the online courses and recorded all web tools integrated into the courses. In addition, the peer reviewers examined the quality of the online course design in order to provide constructive feedback to the faculty participants. The rubric used in this examination was based on the Quality Matters Inter-Institutional Quality Assurance in Online Learning Peer Course Review Rubric (Edited Version FY 05/06). This public domain rubric was developed by MarylandOnline (2005) and supported in part by the 2003-2006 Fund for the Improvement of Postsecondary Education (FIPSE), U.S. Department of Education. The Peer Review Rubric included eight general review standards (see Table I in Appendix A), each of which was subdivided into labeled objectives (see Table II in Appendix B). In addition, the Peer Review Rubric was modified to include a four-point Likert scale: outstanding (3), proficient (2), emerging (1), and not evident (0). The two faculty members who served as peer reviewers examined each participant’s online course and rated each component of the course according to the standards and objectives listed in the Peer Review Rubric.

The second data collection consisted of a survey of the participants’ perceptions of the usefulness of the Peer Review Rubric used in the inventory of their courses. Upon completion of the online course review, participants received a copy of the Peer Review Rubric with the peer reviewers’ comments and suggestions. The participants were asked to complete a survey regarding the rubric’s usefulness for future online course development. The survey consisted of twelve statements that the participants rated on a five-point Likert scale ranging from Strongly Agree (1) through Neutral (3) to Strongly Disagree (5). In addition, an open-ended comment section asked participants to rate their overall reaction to the rubric. All comments were analyzed to identify general patterns in faculty responses about the Peer Review Rubric and the online course examination process. Of particular interest was whether the participating faculty perceived the Peer Review Rubric to be a useful tool as they updated existing courses and developed future online courses.

Data Analysis and Results

The online courses reviewed for this study used WebCT as the course management system. This management system offered the following five web-tool categories: (1) Course Content; (2) Content Utilities; (3) Communication; (4) Evaluation & Activities; and (5) Student. Results of the online course review indicated that in the Communication category, all participants used the Discussion and Mail tools in their online course design. In addition, in the Course Content category, all participants included the Syllabus tool in their courses. In the Student category, 83% of the participants included the My Grades tool in their course design. In contrast, in the Evaluation & Activities category, only 30% of the faculty utilized the Quizzes/Surveys or Self Test tools, and only 33% of the faculty included the Glossary tool. Few participants included multimedia tools, such as audio files and podcasts, as part of their online course design (See Table III in Appendix C).
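For illustration only, the minimal Python sketch below shows how tool usage across the reviewed courses could be tallied into the kinds of percentages reported above; the course identifiers and tool sets are hypothetical examples, not the study’s data.

```python
# Hypothetical sketch: tallying which WebCT tools appear in each reviewed
# course and reporting the percentage of courses that include each tool.
# Course identifiers and tool lists are illustrative only.
from collections import Counter

tools_by_course = {
    "C01": {"Discussions", "Mail", "Syllabus", "My Grades"},
    "C02": {"Discussions", "Mail", "Syllabus", "Quizzes/Surveys"},
    "C03": {"Discussions", "Mail", "Syllabus", "My Grades", "Glossary"},
}

# Count how many courses use each tool, then express the counts as percentages.
counts = Counter(tool for tools in tools_by_course.values() for tool in tools)
n_courses = len(tools_by_course)
for tool, count in counts.most_common():
    print(f"{tool}: {count / n_courses:.0%} of courses")
```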

A mixed-methods approach permitted analysis of quantitative and qualitative data. After the investigators reviewed the twelve online WebCT courses, descriptive statistics were calculated to obtain the mean Peer Review Rubric scores for each of the eight general review standards: (1) Course Overview & Introduction; (2) Learning Objectives; (3) Assessment & Measurement; (4) Resources and Materials; (5) Learner Interaction; (6) Course Technology; (7) Learner Support; and (8) Accessibility. In addition, the mean Peer Review Rubric score for each objective within the eight standards was calculated (See Table II in Appendix B). The objectives with the highest mean scores, which indicated the faculty’s proficiency, included the following: Introduction Statement (n = 12, M = 2.13, SD = .85), Online Assessment Quality (n = 12, M = 2.38, SD = .77), Materials Alignment (n = 12, M = 2.25, SD = .79), Accessible Online Materials (n = 12, M = 2.46, SD = .78), Activity & Outcome Alignment (n = 12, M = 2.17, SD = .82), and Up-to-date Tools (n = 12, M = 2.26, SD = .88). The objectives with the lowest mean scores, which indicated a lack of proficiency, included the following: Navigational Instructions (n = 12, M = .79, SD = .88), Netiquette Expectations (n = 12, M = .50, SD = .78), Articulation of Academic Support (n = 12, M = .58, SD = .83), Pathways to Student Support (n = 12, M = .42, SD = .83), and ADA Requirements (n = 12, M = .58, SD = .97). These results indicated that future faculty professional development for online course design should focus on techniques for incorporating these low-scoring elements.
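As an illustration of the descriptive analysis described above, the brief Python sketch below computes per-objective means and standard deviations from reviewer ratings on the 0-3 rubric scale. The data layout, column names, and sample values are assumptions for demonstration, not the study’s data set.

```python
# Hypothetical sketch: per-objective descriptive statistics for peer-review
# rubric ratings (0 = not evident ... 3 = outstanding).
import pandas as pd

# One row per (course, objective) rating assigned by a peer reviewer;
# values are illustrative only.
ratings = pd.DataFrame({
    "course": ["C01", "C01", "C02", "C02"],
    "objective": ["Introduction Statement", "Navigational Instructions",
                  "Introduction Statement", "Navigational Instructions"],
    "score": [3, 1, 2, 0],
})

# Mean and standard deviation of each objective across the reviewed courses,
# mirroring the n, M, and SD values reported in the text.
summary = ratings.groupby("objective")["score"].agg(n="count", M="mean", SD="std")
print(summary.round(2))
```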

An analysis of variance was conducted to compare the mean scores of the eight general review standards from the Peer Review Rubric. ANOVA results are presented in Table I.

Table I

ANOVA for Mean Comparison per General Review Standards

Source of Variation   Sum of Squares   df    Mean Square   F       η²
Between               196.0            7     28.01         26.22   0.162
Within                1017.0           952   1.07
Total                 1213.0           959

The ANOVA indicated that the differences in means were significant (F(7, 952) = 26.22, p < .0001). A Tukey-Kramer post hoc test was conducted to compare all combinations of the general review standards and identify significantly different pairs. Results revealed that the ‘Learner Support’ general review standard, which had the lowest mean score (n = 12, M = 0.74, SD = .99) on the four-point Likert scale ranging from 3 (outstanding) to 0 (not evident), differed significantly from the other mean scores. The ‘Resources and Materials’ general review standard, with the highest mean score (n = 12, M = 2.23, SD = .86), differed from all other standards with the exception of ‘Learner Interaction’ (n = 12, M = 1.94, SD = 1.07) and ‘Course Technology’ (n = 12, M = 1.98, SD = .98). The results indicated that faculty members were most proficient with web tools that directly involved the course content, such as content-specific resources, materials, and activities. Faculty members were least proficient with web tools for learner support, such as direct web links for technical and academic support. The overall mean score for proficiency with the alignment between online course assignments and measurable learning objectives was also low (n = 12, M = 1.08, SD = 1.18).
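The following Python sketch illustrates, with simulated ratings, the general form of the analysis reported above: a one-way ANOVA across the eight general review standards followed by a Tukey post hoc comparison. The assumption of 120 ratings per standard is made only so that the degrees of freedom match those reported (F(7, 952)); the scores are randomly generated and do not reproduce the study’s results.

```python
# Hypothetical sketch: one-way ANOVA across the eight general review standards,
# followed by a Tukey post hoc test. Data are simulated, not the study's data.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
standards = ["Course Overview & Introduction", "Learning Objectives",
             "Assessment & Measurement", "Resources and Materials",
             "Learner Interaction", "Course Technology",
             "Learner Support", "Accessibility"]

# Simulate 120 ratings per standard on the 0-3 rubric scale (960 total,
# which yields the within-group df of 952 reported in Table I).
data = pd.DataFrame({
    "standard": np.repeat(standards, 120),
    "score": np.clip(rng.normal(1.6, 1.0, 120 * len(standards)), 0, 3),
})

# One-way ANOVA comparing mean scores across the eight standards.
groups = [g["score"].values for _, g in data.groupby("standard")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Effect size eta-squared = SS_between / SS_total
# (in the study, 196.0 / 1213.0 = 0.162).

# Tukey post hoc test identifies which pairs of standards differ significantly.
tukey = pairwise_tukeyhsd(endog=data["score"], groups=data["standard"])
print(tukey.summary())
```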

Researchers analyzed the survey completed by the faculty participants regarding their perceptions of the usefulness of the Peer Review Rubric as a tool for future online course development. Results indicated that the faculty participants perceived the rubric to be a useful tool for future online course development and would recommend the rubric to colleagues. Ultimately, the participants anticipated that using the rubric to update their courses would increase future student learning (See Table V in Appendix D). Results from the open-ended comment section indicated that 84% of the participants perceived that the Peer Course Review Rubric was “important, valuable and useful”. In addition, 67% of the participants indicated that the rubric was “understandable and effective”. In contrast, one participant indicated that the rubric was “unimportant, worthless, and not useful”.

Discussion and Conclusion

The United States Distance Learning Association (USDLA) defined distance learning as follows: “The acquisition of knowledge and skills through mediated information and instruction, encompassing all technology and other forms of learning at a distance” (Robyler, 2006, p. 219). While current research has indicated that distance learning and traditional face-to-face learning often generate comparable results, “successful distance courses are those that have high interaction, good support, and low technical problems” (Robyler, 2006, p. 224). The skills required of faculty to design and teach online courses are considerably different from the skills required for teaching traditional face-to-face courses (Robyler, 2006). Therefore, a support system, with a peer-review inventory and an informative rubric to provide faculty with specific feedback on course design, is critical for student learning. The purpose of this research study was to (1) assess the design quality of online courses developed by faculty; (2) identify the web tools used most often; and (3) determine whether the faculty participants perceived the Peer Course Review Rubric to be a useful tool for future online course development.

Results of this study indicated that the courses surveyed included minimal web tools to foster student interaction with each other and engagement with the content. The overall quality of the online course design as measured by the Peer Review Rubric was at an “emerging” level, even though 90% of the faculty members had 4 to 6 years of online course experience. The participants perceived that the Peer Course Review Rubric was a viable tool that they would use for future online course development. This finding supported the Amerin-Beardsley, Foulger, and Toth (2007) study, in which professional development in the form of collaboration with experienced colleagues was identified as a possible vehicle to support faculty with online course design. The Peer Course Review Rubric provided faculty with feedback from experienced colleagues and was perceived to be an effective communication tool. The rubric shows promise as an alternative delivery format for professional development.