Ottawa 2024

Delivering quality examinations

Workshop

10:30 am

26 February 2024

M203


Chris McManus1, Gerrard Phillips2, Kenneth Dagg3, Stuart Hood4, Liliana Chis5 and Ben Gillon6
1 University College London
2 Executive Medical Director, The Federation of Royal Colleges of Physicians of the United Kingdom
3 Medical Director for Assessment, MRCP(UK), The Federation of Royal Colleges of Physicians
4 Associate Medical Director, The Federation of the Royal Colleges of Physicians of the United Kingdom
5 MRCP(UK)
6 Head of Assessment Quality and Policy, The Federation of Royal Colleges of Physicians




1. Background. Clinical exams are complex and sometimes go wrong, for as Robert Burns wrote in 1785, “The best-laid schemes o' mice an' men / Gang aft agley”. Despite best efforts, procedural errors occur, resulting in missing, untrustworthy or invalid marks. Marking schemes cope badly with such gaps, and regulations often provide only an apology, expungement of the attempt, a fee refund and an early resit [1], which can feel unfair to candidates with high valid marks if career progression is affected. This workshop addresses such problems using multiple imputation [2]. 


2. Why is the topic important for research and / or practice?
Procedural issues in exams require urgent handling, using a principled approach that is acceptable to stakeholders, including candidates, examiners and regulators. 


3. Workshop format, including participant engagement methods.
The 60-minute workshop will consist of four brief (5–10 minute) presentations on different aspects of the problem, with participant discussion and sharing of experience at each stage. 


a. Examples of Procedural Problems.
MRCP(UK) examples include: (1) during Covid, a local organiser unilaterally deciding that patients at some encounters need not be examined; (2) an examiner’s illness becoming apparent only after a clinical exam had finished; (3) a cardiac encounter with a wrongly described heart murmur; and (4) communication stations inadvertently providing candidates with examiner briefings. 


b. The nature of missing data problems.
Marking schemes usually require all candidates to have marks on all parts of an assessment, but procedural problems often leave marks missing. This section will consider general statistical approaches to handling missing data, and their strengths and weaknesses, as sketched below. 
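As a concrete illustration of the contrast, the following minimal sketch, written in Python with scikit-learn rather than the R used by the authors, compares mean imputation, which fills in the cohort average regardless of the candidate, with chained-equations imputation in the style of mice, which predicts from the candidate's other, correlated marks. All data and numbers are invented for illustration.

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # required before importing IterativeImputer
from sklearn.impute import SimpleImputer, IterativeImputer

# Invented data: 200 candidates sit 4 correlated stations.
rng = np.random.default_rng(0)
ability = rng.normal(0.0, 1.0, 200)                        # latent candidate ability
marks = ability[:, None] + rng.normal(0.0, 0.5, (200, 4))  # four station marks per candidate
marks[0, 3] = np.nan                                       # one mark lost to a procedural error

# Mean imputation ignores everything else the candidate has done.
mean_filled = SimpleImputer(strategy="mean").fit_transform(marks)

# Chained-equations imputation predicts the missing mark from the
# candidate's performance on the other stations.
mice_filled = IterativeImputer(random_state=0).fit_transform(marks)

print("other station marks:   ", marks[0, :3])
print("mean-imputed mark:     ", mean_filled[0, 3])
print("chained-equations mark:", mice_filled[0, 3])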


c. A specific worked example.
Participants will be given the detailed information presented to the Board of Examiners for an actual procedural problem, including plausible estimates of passing probabilities. On a technical note, the psychometric issues were addressed with specially written software built on the R package mice [3], although no software will be used during this workshop. 
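Although no software will be used in the workshop, a brief sketch of how such passing probabilities can be estimated may help: draw several completed data sets with posterior sampling, apply the pass mark to each, and report the proportion of imputations in which each affected candidate passes. This Python sketch uses scikit-learn's IterativeImputer as a stand-in for the mice-based software actually used [3]; the marks, pass mark and number of imputations are all invented.

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # required before importing IterativeImputer
from sklearn.impute import IterativeImputer

# Invented marks: 5 candidates x 4 stations; np.nan is a mark invalidated
# by a procedural error.
marks = np.array([
    [14.0, 12.0, np.nan, 15.0],
    [10.0,  9.0, 11.0,  10.0],
    [16.0, 15.0, 14.0,  np.nan],
    [ 8.0,  7.0,  9.0,   8.0],
    [13.0, np.nan, 12.0, 13.0],
])
pass_mark = 48.0  # hypothetical total-score pass mark
m = 50            # number of imputed data sets

# sample_posterior=True draws each missing mark from its posterior
# predictive distribution, so repeated draws reflect the uncertainty.
passed = np.zeros(marks.shape[0])
for seed in range(m):
    completed = IterativeImputer(sample_posterior=True, random_state=seed).fit_transform(marks)
    passed += completed.sum(axis=1) >= pass_mark

print("estimated passing probabilities:", passed / m)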


d. Regulatory issues.
Regulators are becoming aware of how exams handle procedural errors and of the consequences. This section will be a general sharing of experience and discussion of possible issues. 


4. Who should participate?
The concepts involved in imputation are accessible to all those organising examinations, including senior examiners, exam organisers, psychometricians and regulators. 


5. Level of workshop (beginner / intermediate / advanced).
Beginner 


6. Take-home messages / workshop outcomes / implications for further research or practice.
All participants should understand how procedural problems can be approached in a principled way when examination schemes go awry, and the methods involved. Psychometricians in particular will become aware of computational approaches that use multiple imputation for such cases. 


7. Maximum number of participants.
Depending on room size, perhaps 30 or so, to encourage discussion. 



References (maximum three) 

1. The Federation of the Royal Colleges of Physicians of the United Kingdom: Examination Appeals Regulations. London; 2018. https://www.mrcpuk.org/sites/default/files/documents/Appeals-regulations-2018.pdf 

2. van Buuren S: Flexible imputation of missing data (Second edition). New York: CRC Press; 2018. 

3. van Buuren S, Groothuis-Oudshoorn K: mice: Multivariate Imputation by Chained Equations in R. Journal of Statistical Software 2011, 45(3). 

Eric Holmboe1, Dowin Boatright2
1 ACGME
2 New York University



Background: 
Assessment is essential to professional development. Assessment provides the information needed to give feedback, support coaching and the creation of individualized learning plans, inform progress decisions, determine appropriate supervision levels, and, most importantly, help ensure patients and families receive high-quality, safe care in the training environment. 

Yet, one of the most significant challenges in assessment is the ongoing and pernicious effect of bias. Learners from diverse backgrounds, including those from racial/ethnic groups typically underrepresented in medicine (URiM) and other groups often marginalized by bias in assessment (e.g., women, people who identify as sexual and gender minorities, people living with disabilities, and more), face additional and unwarranted obstacles in their professional development. Studies from around the globe have documented significant assessment biases in health professions education (HPE), from selection for training programs to assessment of clinical competence. This body of research highlights the urgent need for the HPE community to develop methods and tools that training programs can use to identify, address, and reduce bias in their own assessment programs. 


Why this is important: 
When learners experience bias, it creates suboptimal learning environments and compromises learners’ well-being and their ability to perform at the top of their abilities. For example, multiple studies show that learners from historically URiM groups receive lower faculty ratings of clinical competence. Assessment bias can lead to and reinforce stereotype threat and impostor syndrome. These effects are cumulative, and even small differences in assessment can translate into disparities in future opportunities, including training and employment, a phenomenon described as the amplification cascade. 

Workshop format: 
The session will contain three sections. The introduction will provide a synopsis of key findings from the latest research on bias in assessment, including studies and data from a competency-based U.S. assessment system. The second section will give participants an opportunity to reflect on their own program’s potential sources of bias and on how they can identify sources of bias in their assessments. The final section will share and discuss techniques participants can use to help faculty identify and reduce bias in their programs, drawing on recommendations from communication science and psychology. 

  • Theory burst covering key types and sources of bias in assessment and the impacts of bias on professional development.
  • Large group conversation.
  • Theory burst on methods and tools to identify various forms of assessment bias in HPE training programs.
  • Small group activity and discussion (with worksheet provided) of participants’ current challenges with assessment bias in their own programs.
  • Small group report-outs and reflections.
  • Theory burst on recommended approaches to reduce assessment bias by faculty and programs.
  • Small group discussion on how and where participants can apply approaches in their own programs; each participant will create a personal action plan.
  • Q&A


Who should participate?
All; maximum of 50 participants.


Level of workshop:
Beginner/Intermediate


Take-home messages:

To ensure that assessment in HPE is fair, equitable, and supports all learners, deliberate attention must be directed at reducing bias against groups who have experienced, and continue to experience, bias in assessment in their training programs. 



References (maximum three) 

1. Boatright D, Anderson N, Kim JG, Holmboe ES, McDade WA, Fancher T, Gross CP, Chaudhry S, Nguyen M, Nguemeni Tiako MJ, Colson E, Xu Y, Li F, Dziura JD, Saha S. Racial and Ethnic Differences in Internal Medicine Residency Assessments. JAMA Netw Open. 2022 Dec 1;5(12):e2247649. 

2. Lucey CR, Hauer KE, Boatright D, Fernandez A. Medical Education's Wicked Problem: Achieving Equity in Assessment for Medical Learners. Acad Med. 2020 Dec;95(12S Addressing Harmful Bias and Eliminating Discrimination in Health Professions Learning Environments):S98-S108. 

3. Holmboe ES, Osman NY, Murphy CM, Kogan JR. The Urgency of Now: Rethinking and Improving Assessment Practices in Medical Education Programs. Acad Med. 2023. 