Innovation Abstracts

Volume XL, No. 5 | February 15, 2018

Online Learning: Creating a Culture of Accountability

At Madisonville Community College (MCC), online courses are becoming an increasingly popular scheduling alternative. Our students demand the flexible scheduling available through online instruction, and if MCC doesn’t provide it, they seek it elsewhere. Over the last several years, we found ourselves offering more and more online sections to avoid exporting enrollment to other colleges in the Kentucky Community and Technical College System (KCTCS). As of fall 2017, the number of MCC online enrollments equaled the number of face-to-face enrollments. If current trends continue, online enrollments will soon surpass face-to-face enrollments.

Although MCC was eventually able to recapture lost enrollment by scheduling more online offerings, that effort created a “student success” challenge. National data are quite clear regarding student performance in online courses: students struggle to complete them. Online course completion rates fall well below rates for face-to-face courses, and that is certainly true at MCC. Given the national and local data, it was clear to us that if we were to deliver online coursework successfully, we needed ways to evaluate that coursework and to help faculty, most of whom had little formal training in online pedagogy, create an engaging and effective online learning environment.

Misalignment of Expectations

Studies of online learning indicate that misalignment between student and instructor expectations is not uncommon. Students often confuse “convenient” with “easy.” They soon discover, however, that online courses are more difficult and time-consuming than they expected, and instructors quickly grow frustrated with their students’ lack of motivation and time management skills. It’s a Catch-22: instructors expect students to be self-directed, responsible, and motivated, while students expect instructors to encourage and nurture those same qualities. Therein lies the challenge: aligning expectations and preparing students and instructors to create mutually beneficial conditions for success in an online environment.

Quality Enhancement Plan Background

At MCC, what started out as an enrollment and tuition income problem turned into a quality control issue, one that led our leadership team to develop a comprehensive Quality Enhancement Plan (QEP) for the Southern Association of Colleges and Schools (SACS) intended to improve the quality of online course offerings and related student support services. For us, identifying a QEP topic and purpose statement was not terribly difficult; enrollment and student success data pointed us toward online learning.

We grounded our project in the work of Vincent Tinto, a nationally recognized scholar whose book Leaving College: Rethinking the Causes and Cures of Student Attrition has become the benchmark by which work on retention issues is measured. Tinto has shown that students are more likely to persist to completion of a course or credential if they have positive classroom experiences and a sense of belonging. It is a matter of assimilation: success motivates and nurtures a sense of belonging. For Tinto, “engagement,” which he defines as faculty-to-student and student-to-student interaction, is critical to student success in any classroom, online or face-to-face. Students need opportunities to engage productively with course content, their peers, and their instructor. Since over half of all MCC students in any given term, new and returning, enroll in at least one online course, we must ensure that the online learning experience is positive if students are to persist through completion of the course and into the next term.

In the beginning, we conducted a number of student focus groups to gather input about their online learning experiences. Students weren’t bashful about offering opinions, and they identified the following “must-haves”:

  • Standard, “user-friendly” LMS formatting,
  • Effective online learning orientation,
  • Frequent opportunities for interaction with the instructor,
  • Timely grading and feedback,
  • Supplemental explanation and support for difficult assignments, and
  • A sense of being part of a “classroom experience.”

Student input, coupled with a review of the best practice literature, led to the development of a peer review quality improvement process. It was critical that we build our process from the ground up, providing ample opportunity for faculty and student support staff to vet and buy into it. A broadly representative Course Design and Quality Assessment workgroup, with members from each academic division, was appointed and charged with creating uniform guidelines for the evaluation of all online courses. This led to the development and adoption of an Online Course Evaluation Rubric that identified the criteria and standards by which online courses would be judged. Specific evaluation criteria were grouped under one of six standards for review: course introduction, course organization and design, assessment and assignments, communication, technology, and engagement activities.

Each criterion received either an “acceptable” or “unacceptable” rating. We felt it was pointless to use a graduated evaluation scale because it would create opportunities for disagreement; we did not want to get into the business of judging among “bad, good, better, and best.” For example, under the rubric’s “communication” standard, the course either provided opportunities for student-to-student interaction or it didn’t. We did not want to prescribe which kinds of interactive assignments worked better than others. It was the instructors’ responsibility to determine what worked best for the competencies they wanted their students to learn.

Nonetheless, when the rubric was first distributed and discussed with the faculty whose courses were selected for the first iteration of review, eyebrows were raised. Although the QEP online project had been shared collegewide with little or no objection during the development process, once the first cohort of courses was reviewed, the real impact of peer review hit home. Human nature being what it is, people were troubled when asked to address a weakness. Some faculty sought further clarification regarding evaluation criteria. Although the leadership team chose not to be overly prescriptive with criteria, ironically, some faculty wanted just the opposite: they wanted to be told exactly how many faculty-to-student interactions were enough, what kinds of engaged learning practices to use, and so on. They also worried about how peer review would affect their annual performance reviews and their academic freedom, a concern that always seems to arise when instructors are asked to examine the efficacy of their pedagogy.

First, we addressed the academic freedom issue head-on. We reminded instructors that they had the latitude they needed to address the content of the courses they were responsible for teaching, but that did not mean they had the freedom to do a poor job of delivering instruction. Our peer review process focused on building pedagogically sound online courses, those supported by the best practice literature, not on dictating the fundamental content, principles, and ideas a course is intended to address. Second, we addressed the “prescription” issue. We created a “rubric concordance” that provided examples of a variety of best practices that could be used to meet a particular standard. For example, regarding “course organization,” the rubric required that “course materials be arranged in manageable segments with a logical progression throughout the semester.” Explanations and suggestions were provided for each criterion on the rubric.

First Things First: Standardizing the LMS

The first order of business was to standardize the look and feel of our online courses so students would not have to relearn navigation procedures as they moved from one online course to another. As noted earlier, MCC is a member of the 16-college Kentucky Community and Technical College System (KCTCS), which adopted the Blackboard Learning Management System (LMS) for delivery of all KCTCS online courses. A review of MCC courses, however, revealed that course formatting varied from instructor to instructor within the LMS shell. A standardized LMS format was therefore developed with input from representatives of each academic division. Members of the QEP steering committee who were teaching online courses agreed to test-drive the new LMS shell and, with the help of the Course Design and Quality Assessment workgroup, imported existing course content during a hands-on workshop. Written instructions were prepared to assist participants with the import procedure, evaluated for clarity and concision, revised accordingly, and then distributed collegewide to prepare for the full rollout of the new shell. An instructional video was also developed and distributed.

Effective fall 2016, with the full support of the President and Chief Academic Officer, all online courses were required to use the standardized shell. Additional template training workshops were scheduled for faculty who needed assistance, and a group of Template Assistance Resource Personnel (TARPs, as in “we’ve got it covered”) was created to provide one-on-one help on request. Standardizing an LMS shell may seem like a straightforward undertaking; it isn’t, and it takes time. We provided ample support through widely publicized workshops and the TARPs.

The Value of Peer Review and Instructional Design Support

Standardizing the LMS look and feel is important, but the peer review process is the centerpiece of our project. If the process works as intended, the college should see online course completion rates, withdrawal rates, and learning outcomes improve. In the end, it is all about improving teaching and learning: ensuring that the online student experience is at least equivalent to the traditional face-to-face experience and helping faculty use the tools and practices necessary to provide a high-quality, engaged learning experience. The Peer Review Team (PRT) and the Online Instruction Coordinator play critical roles in making this happen. The two functions are separate: the PRT evaluates online courses, and the Online Instruction Coordinator provides one-on-one design support.

The PRT consists of experienced online instructors who, early in the project, agreed to pilot the rubric on their own courses; once the process rolled out collegewide, we did not want to ask others to do what the PRT had been unwilling to do during the pilot. Over time, the PRT will incorporate new appointees so that new members can be mentored by experienced ones. As members rotate on and off, the college will build a critical mass of experienced online faculty mentors, moving the QEP project course by course and discipline by discipline across the curriculum with the goal of creating a culture of online accountability. Beyond the initial course evaluation, however, we needed to provide ongoing instructional design support, so we established the position of Online Instruction Coordinator. We intentionally recruited from within, selecting a credible and well-respected in-house faculty member. We have found that instructors are willing to improve their instructional practices if they are given the support they need from someone they trust.

The Review Process Timeline

Before the full rollout of the peer review process, the Co-Chairs of the QEP Steering Committee visited each academic division to explain the implementation guidelines and address any concerns regarding the responsibilities of the Online Instruction Coordinator, Peer Review Team, and TARPs. The review timeline is a three-stage process. An instructor is notified prior to the beginning of a term if their course has been selected; they are asked to complete a self-assessment using the rubric and are given an opportunity to prepare the course for formal review. During this “preparation” term, the Online Instruction Coordinator and TARPs are available for assistance. The following term, the course undergoes formal review by the PRT. The instructor receives feedback at midterm and is given two weeks to address any remaining weaknesses, with the Online Instruction Coordinator and TARPs again available for assistance. At the end of the two-week revision period, the course undergoes final review by the PRT, and the instructor, their division chair, and the Chief Academic Officer (CAO) are notified of the results. If the course is found acceptable, it may continue to be offered from that point forward. If it is still found lacking, a more detailed remediation plan is developed on a case-by-case basis. Although unlikely, the CAO may choose to pull the course from the class schedule before the next term begins.

Key Takeaways

Implementing an online quality control project is a challenge. Even when institutional data and the best practice literature indicate that a quality enhancement plan is necessary for student success, that alone doesn’t ensure buy-in once implementation begins. After all, subjecting your course to formal review can be intimidating, even threatening; who among us enjoys being evaluated? If the process is peer-driven from design through implementation, however, you can create the conditions for success. Those conditions include:

  • Starting with students; they will help you understand what they need to be successful in an online course.
  • Developing and adequately vetting your own evaluation rubric. Most faculty will not find it intimidating if it is grounded in a thorough review of best practice principles and then tailored to your institution’s needs.
  • Piloting the rubric and review process with a group of early adopters. You are sure to find glitches and opportunities for miscommunication before you roll out the process collegewide.
  • Identifying what is absolutely required in a responsibly designed and delivered online course, without overdoing it. There is a fine line between prescribing and suggesting. Point instructors in the right direction and provide more than one opportunity for revision.
  • Providing ample technical and instructional design support. Simply setting up an LMS shell and organizing assignments can be a challenge for many, and developing an effective online learning activity, lesson, or learning outcome assessment is even more so.
  • Addressing the academic freedom issue head-on and early on. It is a frequently misunderstood concept and is often used to justify an unwillingness to change. Instructors are free to determine what kinds of content and ideas are required to teach their assigned courses. They are not free to deliver that instruction ineffectively.

Lisa Lee, Professor, Education and Coordinator, Online Instruction

David Schuermer, Professor, English and Director, Grants, Planning, and Institutional Effectiveness

Mary Werner, Professor, English and Chair, Humanities

For a copy of the Online Course Evaluation Rubric and Concordance, or for further information, please contact the authors at Madisonville Community College, 2000 College Drive, Madisonville, KY 42431. Email: lisae.lee@kctcs.edu, david.schuermer@kctcs.edu, or mary.werner@kctcs.edu

Opinions and views expressed are those of the author(s) and do not necessarily reflect those of NISOD.
