
Change Matters for Quality Matters


Quality Matters revised its online-course review standards in May 2011. A year later, I found the reasons for the change. While the Chinese would call this ma hou pao, criticizing or evaluating with hindsight, after-the-fact findings can still be validating for researchers.

This belated finding, which illustrated a rationale for the QM rubric revision, was also accidental. The question I initially had in mind was: on which standards are course reviewers most likely to disagree with each other? I say "each other" because, at this time, our limited resources allow us to assign only two reviewers per course. The Quality Matters scoring system is set up so that if one reviewer checks "yes" and the other checks "no," the "no" prevails and the standard is marked "unmet." I thought that looking for the most disputed standards would tell me where the fault really lay: were these course-design problems, or was the disagreement caused by a lack of clarity in the standard itself?
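To make that scoring rule concrete, here is a minimal sketch in Python of the logic just described. The data structure, course names, and vote values are hypothetical, invented only for illustration; this is not QM's actual report format.

```python
# A minimal sketch of the scoring logic described above.
# Course names and votes are hypothetical, for illustration only.
from collections import defaultdict

# Each course maps standard IDs to the (reviewer 1, reviewer 2) votes.
reviews = {
    "COURSE-101": {"SD 5.2": (True, False), "SD 6.4": (True, True)},
    "COURSE-102": {"SD 5.2": (False, False), "SD 6.4": (True, False)},
}

unmet = defaultdict(int)   # times a standard was marked "unmet"
split = defaultdict(int)   # times that "unmet" came from a split decision

for course, standards in reviews.items():
    for sd, (r1, r2) in standards.items():
        if not (r1 and r2):      # a single "no" marks the standard unmet
            unmet[sd] += 1
            if r1 != r2:         # one "yes", one "no": a split decision
                split[sd] += 1

# Disagreement rate: the share of "unmet" marks that were split decisions.
for sd in unmet:
    print(f"{sd}: {split[sd] / unmet[sd]:.0%} of failures were split decisions")
```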

Since the DePaul Online Teaching Series (DOTS) launched in 2008, DePaul has used Quality Matters to review its online and hybrid courses. Course review is the last stage of the three-stage DOTS process, following training and course development. To complete the process, a DOTS course must be reviewed internally by DePaul faculty and staff who have been certified by QM as peer reviewers.

As of May 2012, forty-seven DOTS courses had been through the QM review process. A compiled view of the forty-seven QM reports indicated that a number of the standards marked unmet were the result of the two reviewers making different choices. For some standards (e.g., 5.2 and 6.4), the frequency of disagreement was as high as 100 percent, meaning that for every course that failed the element, one reviewer selected "yes" and the other, "no."

The standards with the highest split-decision rates are:

  • SD 5.2

Learning activities foster instructor-student, content-student, and if appropriate to this course, student-student interaction (100% disagreement rate).

  • SD 6.4

Students have ready access to the technologies required in the course (100% disagreement rate).

  • SD 6.7

The course design takes full advantage of available tools and media (89% disagreement rate).

  • SD 4.2

The relationship between the instructional materials and the learning activities is clearly explained to the student (88% disagreement rate).

  • SD 3.3

Specific and descriptive criteria are provided for the evaluation of students’ work and participation (80% disagreement rate).

Clearly something is not right with these standards: they must be hard to interpret, or they fail to make sense to the course authors, the reviewers, or both.

Having used two versions of the QM rubric allowed me to check further and see when, and under which version of the rubric, the split decisions between reviewers had taken place. It turned out that for almost all of the top five split-decision standards, the disagreement happened while the old version of Quality Matters was in use, that is, prior to May 2011. The following table shows the changes made in the new version of QM for the identified standards.

| Disagreement Rate | Standard | Old QM | New QM |
| --- | --- | --- | --- |
| 100% (all for the old version) | SD 5.2 | Learning activities foster instructor-student, content-student, and if appropriate to this course, student-student interaction. | Learning activities provide opportunities for interaction that support active learning. |
| 100% (all for the old version) | SD 6.4 | Students have ready access to the technologies required in the course. | Students can readily access the technologies required in the course. |
| 89% (all for the old version) | SD 6.7 | The course design takes full advantage of available tools and media. | (eliminated) |
| 88% (all for the old version) | SD 4.2 | The relationship between the instructional materials and the learning activities is clearly explained to the student. | The purpose of instructional materials and how the materials are to be used for learning activities are clearly explained. |
| 80% (all but one for the old version) | SD 3.3 | Specific and descriptive criteria are provided for the evaluation of students’ work and participation. | Specific and descriptive criteria are provided for the evaluation of students’ work and participation and are tied to the course grading policy. |
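Mechanically, this check amounts to grouping each split decision by the rubric version in effect at review time. Here is a minimal sketch, assuming each split decision is recorded with its review date; the dates and the cutover handling are illustrative, not the actual review records.

```python
# Hypothetical extension of the earlier sketch: tag each split decision
# with the rubric version in effect when the course was reviewed, to see
# whether disagreements cluster under the old (pre-May 2011) rubric.
from collections import Counter
from datetime import date

RUBRIC_CHANGE = date(2011, 5, 1)  # assumed cutover date for the revision

# (standard, review date) for each split decision; dates are invented.
split_decisions = [
    ("SD 5.2", date(2010, 11, 3)),
    ("SD 6.4", date(2011, 2, 17)),
    ("SD 3.3", date(2012, 1, 9)),
]

by_version = Counter(
    (sd, "old rubric" if when < RUBRIC_CHANGE else "new rubric")
    for sd, when in split_decisions
)
for (sd, version), n in sorted(by_version.items()):
    print(f"{sd}: {n} split decision(s) under the {version}")
```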

Given that the number of courses reviewed with the new rubric is similar to the number reviewed with the old, the significant decrease in disagreement between reviewers strongly demonstrates the value of the revision. The changes have made the standards more understandable, reasonable, and applicable. The finding also confirms the need to keep collecting data, which will help identify the new issues that will surely emerge as technology evolves, pedagogy changes, and the demands of online learning grow.


About Sharon Guan

Sharon Guan is the Assistant Vice President of the Center for Teaching and Learning at DePaul University. She has been working in the field of instructional technology for over 20 years. Her undergraduate major was international journalism, and she holds an M.A. and a Ph.D. in educational technology from Indiana State University. She has conducted research on interpersonal needs and communication preferences among distance learners (dissertation, 2000), problem-based learning, online collaboration, language instruction, interactive course design, and faculty development strategies. She also teaches Chinese at the Modern Language Department of DePaul, which allows her to practice what she preaches in terms of using technology and techniques to enhance teaching and learning.
