Abstract Title

OBJECTIVE STRUCTURED CLINICAL EXAM (OSCE) RATER TRAINING

Presenter Name

Shara Elrod

Abstract

Purpose: Objective structured clinical examinations (OSCEs) are organized, multi-station activities designed to allow students to demonstrate their ability to perform specific clinical skills. OSCEs are increasingly used in health professions education to objectively evaluate performance-based abilities. Observing and grading OSCEs is a key responsibility of raters. However, there is a surprising dearth of information on validated techniques for OSCE rater training. The objective of this project was to develop and validate a rater training process, based on Kirkpatrick’s 4 levels of evaluation, that maximizes inter-rater reliability of performance-based OSCE assessment across the University of North Texas System College of Pharmacy (UNT SCP) curriculum.

Methods: The UNT SCP curriculum includes a four-semester sequence of Pharmacy Practice Skills Labs, and each semester contains at least one OSCE to evaluate performance-based abilities. A training process for raters of interactive OSCE stations was developed. The OSCE raters included both clinicians and standardized patients. The training comprised group discussion of the standards and their meaning, instruction on completing clinical checklists and global impression scales, review of common sources of systematic rater error, and practice scoring sample videos. Because of varying schedules and distance from campus, the training included both online and live segments. All raters were asked to view a sample recorded encounter of each interactive station. Standardized patients completed a global impression scale; clinicians completed a binary checklist, yielding a numerical grade and a pass/fail designation, in addition to the global impression scale. Raters were asked to complete pre- and post-training surveys on a Likert scale (1 = strongly disagree; 4 = strongly agree; 0 = not applicable), and training outcomes were assessed using Kirkpatrick’s 4 levels of evaluation.

Results: Of the 13 raters surveyed (10 clinicians; 3 standardized patients), 4 raters (31%) completed the pre-training survey and 6 raters (46%) completed the post-training survey. Raters were asked about their knowledge of OSCE philosophy and structure, common sources of rater error, their ability to use objective clinical skills-based checklists and global impression scales, and their confidence in developing consensus standards for grading. As expected, median Likert-scale scores improved from the pre-training survey (1.0) to the post-training survey (4.0). Data detailing inter-rater reliability are forthcoming.

Conclusions: In this pilot training program, UNT SCP OSCE raters showed overall increases in their knowledge and in their ability to objectively evaluate pharmacy students in this first-year Pharmacy Practice Skills Lab. These results support the need for increased focus on OSCE rater training programs.

Presentation Type

Poster

