Innovation Abstracts

Volume XL, No. 12 | April 5, 2018

Low-Stakes Exams: An Example of Strategy Assessment

Student engagement and retention go hand in hand. Administrators often make faculty aware of retention statistics and emphasize that faculty must do their part to help students graduate. Faculty are also frequently informed about the correlation between student engagement and retention. First-year experience programs, academic coaching, and early intervention programs are tools used to increase student involvement, success, and retention. However, there is no better place than the classroom for promoting student engagement and retention.

There is a wide assortment of classroom engagement strategies available to today’s faculty, including active learning, scaffolding, community building, reducing anonymity, increasing physical movement, and low-stakes exams. The problem is that not all classroom engagement strategies are appropriate for every student population. Cultural norms, course content, and other factors can affect the success of classroom engagement strategies.

In order to achieve the primary objectives of employing engagement strategies—increased student success and retention—their effectiveness must be properly assessed. Just like assessing student learning objectives, assessing classroom engagement strategies provides valuable information that can guide instructional practices.

Assessing Strategies

Like many professors who are concerned about their students’ success, I have implemented new teaching strategies with great excitement and anticipation. While I felt satisfied with my innovative classroom engagement strategies and believed they would make a positive difference for students, I realized I did not know for sure whether the strategies actually worked. Great effort was being put into implementing new strategies without knowing their actual effectiveness. Did this strategy really increase student learning? Did that strategy impact engagement? Once I started assessing my classroom engagement strategies, I was a bit surprised by the results. Assessing my student engagement strategies compelled me to make changes to my teaching toolbox: I kept some strategies and discontinued others.

Low-Stakes Early Exams

I decided to assess the effectiveness of low-stakes early exams on student success in a general psychology course with a large enrollment spread across multiple sections. Low-stakes early exams give students an opportunity to become acclimated to the course content, my teaching style, and my exam formats.

The format of a low-stakes early exam divides one complete exam into two separate sections taken at different times. If students fail the first section, there is still time for them to learn the material and receive a higher score on the second section, which keeps them motivated to succeed in the course.

The Assessment

One general psychology course section was given two low-stakes early exams and another course section was given one complete exam.

  • Low-stakes early exams: Two exams, a 25-point exam covering two modules, followed two weeks later by a 75-point exam covering five modules.
  • Complete exam: One 100-point exam covering seven modules.

The content in both course sections was taught in the same manner, and all assignments were the same. There were personality differences between the two sections, which shaped student questions and discussions.

The average grade on the two-module, first portion of the low-stakes exam was 75 percent, and the average grade on the five-module, second portion was 73 percent. Averaging the two low-stakes scores yielded 74 percent. The average grade for students who took the first complete exam was 73 percent. Thus, there was a one-percentage-point advantage for the low-stakes exam format.
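Readers who want to run the same comparison with their own section averages can do so with a few lines of arithmetic. The short Python sketch below is illustrative only; it assumes, as the figures above suggest, that the combined low-stakes score is the simple (unweighted) average of the two exam percentages, and it uses the section averages reported here.

    # Comparing a split (low-stakes) exam section with a complete-exam section.
    # Percentages are the section averages reported above.
    low_stakes_part1 = 75.0   # 25-point exam covering two modules
    low_stakes_part2 = 73.0   # 75-point exam covering five modules
    complete_exam = 73.0      # 100-point exam covering seven modules

    # Assumption: the combined score is the simple average of the two percentages.
    combined_low_stakes = (low_stakes_part1 + low_stakes_part2) / 2   # 74.0
    advantage = combined_low_stakes - complete_exam                   # 1.0

    print(f"Combined low-stakes average: {combined_low_stakes:.1f}%")
    print(f"Advantage over the complete-exam section: {advantage:.1f} percentage points")

Note that weighting the two parts by their point values (25 and 75) instead of averaging them equally would give 0.25 × 75 + 0.75 × 73 = 73.5 percent, a slightly smaller advantage; which weighting to use is a design choice when combining split-exam scores.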

After seeing the first exam results, I decided to administer a 100-point, complete exam covering the next set of modules to both course sections. This complete exam showed interesting results. The students who took the low-stakes early exams averaged 66 percent on the complete exam, an eight-percentage-point decrease from their first exam average. Students who took the first complete exam averaged 70.5 percent on the second complete exam, a 2.5-percentage-point decrease from their first exam. The greater drop in the average score for students who took the low-stakes early exams made me question whether those exams produced false confidence in that group of students.

I assessed the low-stakes strategy again in the same course the following semester with different sample groups. The low-stakes section averaged 74 percent on part one and 71 percent on part two, for a combined average of 72.5 percent, while the complete-exam section averaged 72 percent on its first exam. This was a smaller advantage than in the previous semester. On the second complete exam, students who took the low-stakes early exams averaged 70 percent, while students who took the first complete exam averaged 72 percent. The complete-exam group showed no score decrease between the two exams, whereas the low-stakes group showed a 2.5-percentage-point decrease from the early exams to the second complete exam.
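The same quick arithmetic can be used to track how each group’s average changed from its first assessment to the second complete exam, which is the comparison that raised the question of false confidence. The sketch below simply replays the averages reported above for both semesters; the labels are mine.

    # Change in average score from the first assessment to the second complete exam,
    # using the section averages reported above for both semesters.
    results = {
        "Semester 1": {"low-stakes section": (74.0, 66.0),
                       "complete-exam section": (73.0, 70.5)},
        "Semester 2": {"low-stakes section": (72.5, 70.0),
                       "complete-exam section": (72.0, 72.0)},
    }

    for semester, sections in results.items():
        for section, (first, second) in sections.items():
            change = second - first
            print(f"{semester}, {section}: {first:.1f}% -> {second:.1f}% "
                  f"({change:+.1f} percentage points)")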

Based on this assessment, I confirmed my hypothesis that administering low-stakes early exams was not beneficial for my students. In fact, the low-stakes early exams may have negatively affected them. While there are many possible explanations for this outcome, I suspect that students who took the low-stakes early exams had a false sense of confidence in their knowledge of the subject matter, and as a result, did not appropriately prepare for the complete exam.

I also looked at the overall course grade distributions for the two groups. The grade distribution for the low-stakes exam group was as follows:

  • 16 percent – A
  • 50 percent – B
  • 25 percent – C
  • 8 percent – D
  • 2 percent – F

The final grade distribution for students who completed the course in the complete exam group was as follows:

  • 25 percent – A
  • 33 percent – B
  • 25 percent – C
  • 15 percent – D
  • 2 percent – F

While the complete exam group had more A grades, the low-stakes group had fewer D grades. Overall, based on the assessment, I decided there was no benefit to providing low-stakes exams in my general psychology course. The strategy of providing low-stakes exams may be effective for students in other classes, but that effectiveness needs to be confirmed through some type of assessment process.

As this example illustrates, a great deal can be learned through a simple assessment process. When I try any new student engagement strategy, I always assess its effectiveness by randomly assigning it to one of my course sections and comparing the results to those from another course section where the strategy was not used. Because each class has its own personality, culture, and other variables that can affect student learning outcomes, it is important to try an engagement strategy multiple times to confirm the extent of its usefulness.

Conclusion

There are many strategies for engaging students and increasing retention rates. However, it is difficult to know which strategies are most effective with specific student populations and curricula. Some strategies intuitively seem to be worth exploring. Nevertheless, it is important to know for sure whether a strategy is having a positive effect on students. By using assessments, faculty can better understand whether a strategy is benefiting students, as well as why or why not. Just like assessing student learning objectives, assessing classroom engagement strategies provides valuable information that can guide classroom procedures and increase student learning.

For further information, contact the author at Harford Community College, 401 Thomas Run Road, Bel Air, MD 21015. Email: rroofray@harford.edu

Opinions and views expressed are those of the author(s) and do not necessarily reflect those of NISOD.
