Ottawa 2024

Technical matters in OSCEs

Oral Presentation


11:30 am

28 February 2024

M204

Session Program

Pavla Simerska Taylor¹
¹ MD Program, School of Medicine and Dentistry, Griffith University



Background
Improving feedback quality in Objective Structured Clinical Examinations (OSCEs) is crucial for enhancing student learning.[1] However, providing timely, constructive and comprehensive feedback in OSCEs is challenging, mainly because of time constraints. This study addresses these challenges by exploring innovative strategies to enhance feedback quality and promote effective learning outcomes.


Summary of work
A mixed-methods approach was used to investigate strategies for improving the quality of feedback in OSCEs. Focus interviews with students and OSCE staff members highlighted the issues. Simple changes, such as increased reading/feedback-writing time and centralised prompts, were introduced, and both qualitative and quantitative data (volume and quality of the feedback provided) were collected and evaluated.


Results
Our evaluation showed student dissatisfaction with the quality of OSCE feedback and examiner task overload. Data from three OSCEs over two years, comprising 56 stations and over 600 medical students, were analysed and will be discussed.


Discussion
The results highlight the need to enhance student feedback in OSCEs. The proposed strategies offer promising solutions for improving the student learning experience while also supporting examiners in providing more effective feedback.


Conclusions
Introducing a longer reading time in OSCEs can not only reduce student anxiety but, more importantly, give examiners more time to complete their marking and provide constructive, high-quality feedback. Other adjustments that create more time for feedback provision are also discussed.


Take-home messages / implications for further research or practice 
The findings of this study highlight that implementing simple adjustments and strategies can enhance the quality of feedback in OSCEs, thus fostering supportive learning experiences for medical students.




References

1. Alsahafi A, Ling DLX, Newell M and Kropmans T. A systematic review of effective quality feedback measurement tools used in clinical skills assessment [version 2; peer review: 2 approved]. MedEdPublish 2023, 12:11 (https://doi.org/10.12688/mep.18940.2) 

Eugene Wong
James Dawber¹ and James Fraser¹
¹ Australian College of Rural and Remote Medicine


The Australian College of Rural and Remote Medicine (ACRRM) is one of the two GP Colleges in Australia accredited by the Australian Medical Council. Our Fellowship program (FACRRM) has been developed by rural doctors to equip Rural Generalists and specialist General Practitioners with the knowledge and skills to work in rural or remote contexts. The Fellowship assessment program includes multiple modalities, including StAMPS (Structured Assessment using Multiple Patient Scenarios). StAMPS is an online assessment combining attributes of an OSCE and a viva voce across eight scenarios. StAMPS aims to assess higher-order functions in a highly contextualised framework, where candidates have the opportunity to explain what they do and demonstrate their clinical reasoning.

Recently the standard-setting method for StAMPS has been adapted, utilising numeric behaviourally anchored rating scales (BARS) across six items per scenario, a global grade per scenario, a comprehensive borderline review process using the recorded videos of the scenarios, and live dashboarding. We show that, in combination, these features provide reliable and defensible pass marks. Comparisons to the borderline regression method are made, as well as experimentation with unsupervised clustering algorithms for verification.
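
For readers less familiar with the comparator, the sketch below shows borderline regression in its textbook form, not the StAMPS implementation; the data, grade coding and variable names are invented for illustration. A scenario score is regressed on the examiner's global grade, and the cut score is the fitted score at the borderline grade.

    # Minimal sketch of the borderline regression comparator (illustrative
    # only; not the StAMPS method). Global grades are coded 0 = fail,
    # 1 = borderline, 2 = pass, 3 = good. Treating the grades as equally
    # spaced integers is the equal-distance assumption examined by
    # McGown et al. (2022), cited below.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: 200 candidates on one scenario, scores out of 100.
    grades = rng.integers(0, 4, size=200)
    scores = 40 + 10 * grades + rng.normal(0, 5, size=200)

    # Ordinary least squares fit: score ~ slope * grade + intercept.
    slope, intercept = np.polyfit(grades, scores, deg=1)

    # The cut score is the fitted score at the borderline grade (coded 1).
    cut_score = slope * 1 + intercept
    print(f"borderline regression cut score: {cut_score:.1f}")

The same synthetic scores could also be passed to an unsupervised method such as two-cluster k-means to check that the cut score falls between the passing and failing groups, in the spirit of the clustering verification mentioned above.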

The successful implementation of this StAMPS standard-setting method across five previous exam occasions indicates a viable alternative to borderline regression for OSCE-style assessments. We believe this approach to standard setting provides added rigour and validity that borderline regression and other methods lack.



References

McGown, P. J., Brown, C. A., Sebastian, A., et al. (2022). Is the assumption of equal distances between global assessment categories used in borderline regression valid? BMC Medical Education, 22, 708. https://doi.org/10.1186/s12909-022-03753-5

McKinley, D. W., & Norcini, J. J. (2014). How to set standards on performance-based examinations: AMEE Guide No. 85. Medical Teacher, 36(2), 97-110.

Pell, G., Fuller, R., Homer, M., & Roberts, T. (2010). How to measure the quality of the OSCE: A review of metrics. AMEE Guide No. 49. Medical Teacher, 32(10), 802-811.

Kylie Rice¹
Jaeva Shelley¹ and Stephanie Banner¹
¹ University of New England


1. Background
Despite the widespread application of structured clinical assessments (OSCEs) as a simulation-based evaluation tool across many healthcare disciplines, their adoption in professional psychology training programs has been considerably slower. Structured clinical assessments are an important tool in the development of clinical competencies whose performance cannot be evaluated with written tasks. However, there is limited evidence on the psychometric properties of OSCEs in psychology training programs, partly because there is little consistency in how OSCEs are developed and implemented in these programs.

This presentation outlines the current evidence on OSCE use within psychology training programs and evaluates the psychometric properties reported for published OSCEs against quality assurance guidelines. Based on the literature, the utility of the OSCE for various clinical skills will be discussed. In addition, we will present the results of the first known study to develop and implement an OSCE for a professional psychology program in Australia in line with quality assurance guidelines, and to psychometrically evaluate its inter-rater reliability and assessment across various clinical domains.

The presentation will summarise the current literature on OSCE use in psychology programs, in terms of both quantity and quality, and will make recommendations for implementing quality assurance processes in the development and evaluation of OSCEs in psychology training programs and beyond.


2. Why is the topic important for research and / or for practice? 
Evidence for the use of OSCEs in professional psychology training programs is limited, and the process of developing OSCEs as an assessment tool appears to vary widely in quality. To date, there is no consistent approach to developing OSCEs in professional psychology programs, which limits the scope of research evaluating their utility in this field. The evaluation of an OSCE developed according to quality assurance guidelines demonstrates that OSCEs may be a reliable and valid tool for assessing various psychological competencies. The inter-rater data support the need for psychometric evaluation and consideration of assessment quality when applying OSCEs in psychology training programs, and demonstrate that OSCEs can be used as a reliable tool for assessing psychological competencies at varying skill levels. Implications for educators and supervisors will be discussed.
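
As a concrete illustration of the kind of inter-rater analysis described above (the study's actual pipeline is not specified here), the sketch below computes Cohen's kappa between two hypothetical examiners who graded the same fifteen OSCE performances; the data and grade coding are invented.

    # Illustrative inter-rater reliability check (hypothetical data, not the
    # study's dataset). Two examiners grade the same 15 performances on an
    # ordinal scale: 0 = fail, 1 = borderline, 2 = pass.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    rater_a = np.array([2, 2, 1, 0, 2, 1, 1, 2, 0, 2, 1, 2, 2, 0, 1])
    rater_b = np.array([2, 2, 1, 1, 2, 1, 0, 2, 0, 2, 1, 2, 1, 0, 1])

    # Unweighted kappa treats every disagreement equally; quadratic weighting
    # penalises fail-vs-pass disagreements more than adjacent-grade ones,
    # which often suits ordinal OSCE grades better.
    print("kappa:", round(cohen_kappa_score(rater_a, rater_b), 2))
    print("weighted kappa:",
          round(cohen_kappa_score(rater_a, rater_b, weights="quadratic"), 2))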


3. Symposium format, including participant engagement methods
This presentation will be facilitated with the use of multimedia and interactive activities.


4. Take-home messages / symposium outcomes / implications for further research and / or practice
There is a dearth of research on psychometric evaluation and quality in psychology. The value of assessing the psychometric qualities of OSCEs and routinely utilising quality assurance guidelines within the methodology and implementation will be presented. Applications for other healthcare training programs will be discussed. It is hoped that this presentation will motivate psychology programs to standardise the development, implementation, and evaluation of OSCEs.


