Ottawa 2024

Assessment across transitions

Oral Presentation

1:30 pm

26 February 2024

M206

Session Program

Dominic Johnson1
Gill Vance2 and Bryan Burford2
1 Liverpool University
2 Newcastle University




Background
A proportion of medical students each year fail their final assessments. While processes for resitting vary between medical schools, some students are required to resit their entire final year. However, little is known about their experience of this (Patel, 2011). 

The work aimed to explore the experience of failing the medical degree final examination in the context of relevant theoretical models, and to derive a theory that describes the experience and can be used to understand and support students who fail in the future. 


Summary of work
A modified grounded theory approach explored how failing medical finals affected students and graduates from one medical school in the UK (Charmaz, 2006). The project drew on theories of self-esteem and professional identity to examine the phenomenon. 

Eighteen interviews at three time points were completed to explore the experience of resitting, and the data were analysed using a thematic analysis strategy. 


Results
The data showed that students went through a series of stages of adverse affective response, moving from shock and frustration to anger and then sadness. 

As the students restarted the year they experienced feelings of stupidity, a sense of boredom, and then significant anxiety as they approached finals for the second time, knowing that if they failed they would not become doctors. 

However, many participants were able to work through these responses and tried to see the positives of the resitting year. This had positive effects on identity and self-esteem, and led to a better sense of preparedness to be a doctor. 

This work postulates the experience as an ‘academic adjustment disorder’ (APA, 2013). 


Discussion
Viewing the experience as an adjustment disorder, from which students can nevertheless emerge with benefits for their ongoing education and professional development, provides new ways of approaching the processes for supporting students who fail. 



References (maximum three) 

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). https://doi.org/10.1176/appi.books.9780890425596 

Charmaz, K. (2006) Constructing grounded theory: A practical guide through qualitative research. London: Sage Publications Ltd. 

Patel, R. S. (2011) 'The experience of medical students classified as unsatisfactory at finals: A qualitative exploration of student perceptions about failure', Medical Education, Supplement, 45, p. 36. 

Kimberly Lomis1
1 American Medical Association 



Background
Health Systems Science (HSS) is the fundamental understanding of how care is delivered, how health care professionals work together to deliver that care, and how the health system can improve patient care and health care delivery. 

Posited as the third pillar of medical education, complementing the basic and clinical sciences, HSS is essential to optimal care of patients. Yet assessment of HSS competency development has focused on knowledge or project-based activities. How does the health professional who excels in systems thinking differ in their approach to the daily work of caring for patients and communities? How can educators ensure that all trainees are developing these essential skills? 


Summary of work
Since 2013, the American Medical Association (AMA) has supported innovations to promote the framework of Health Systems Science across the continuum of medical education. Recently, the AMA formed a multi-institutional team engaging leaders of undergraduate and graduate medical education programs to collaborate on assessing HSS competency for the transition to residency. 

The team convened in June 2023. Early discussions acknowledged challenges to promoting and assessing HSS competency. The team has identified systems thinking and teaming as priority HSS competencies and is designing targeted assessments that multiple institutions across the United States can use in a standardized manner to bridge the transition from UME to GME. 


Discussion
This presentation will provide background on the challenges and opportunities of assessing competency development in Health Systems Science. Shared assessment tools under design for the 2024 pilot phase will be described. 


Take-home messages
Health Systems Science is the third pillar of medical education, complementing basic and clinical sciences. 

Assessment in the clinical learning environment of competency in Health Systems Science poses multiple challenges, which shared strategies can mitigate. 

A collaborative, inter-institutional development process supports assessment of Health Systems Science competency across the continuum, to optimize patient care. 



References (maximum three) 

Skochelak SE, Hammoud M, Lomis KD, Lawson LE, Starr SR, Borkan JM, Gonzalo JD. 2020. Health systems science. 2nd ed. Philadelphia (PA): Elsevier. 

https://www.ama-assn.org/education/changemeded-initiative/teaching-health-systems-science 

Gonzalo JD, Haidet P, Blatt B, Wolpaw DR. 2016. Exploring challenges in implementing a health systems science curriculum: a qualitative analysis of student perceptions. Med Educ. 50(5):523–531. 

Susan Humphrey-Murto1
Julie D'Aoust1, Samantha Halman1, Tammy Shaw1, Vijay Daniels2, Lynfa Stroud3, Alice Yu1, Irene Ma4, Beth-Ann Cummings5 and Timothy J. Wood1
1 University of Ottawa
2 University of Alberta
3 University of Toronto
4 University of Calgary
5 McGill University




Learner Education Handover (LEH) is the sharing of information about learners between faculty supervisors. Previous studies demonstrate that LEH biased scores after raters viewed a single performance.1 In the workplace, however, faculty supervisors often have multiple interactions with learners. It is unknown whether LEH bias diminishes after several observations. Question: Does LEH influence faculty ratings, entrustment decisions and feedback after observing several performances by the same learner? 


Methods: 
Internal medicine faculty members (n=57) from five medical schools were randomly assigned to one of three study groups. Each group received positive, negative or no LEH prior to watching five simulated resident-patient encounter videos of the same resident. Participants rated each video using the mini-CEX (5 items plus a global rating) and an entrustment scale, and provided written feedback. Each piece of feedback was assigned a valence score (-3 to +3). 


Results: 
For most videos, there was no difference in mean mini-CEX scores, entrustment scores or feedback between the study groups (positive LEH, negative LEH, control). Differences were found for video 1, where feedback valence was higher in the positive LEH group (0.79) than in the control group (-0.53, p<.001), and for video 4, where the control group means for the mini-CEX global rating and entrustment scores were unexpectedly lower than in the negative condition (4.68 vs 5.84, p=.04; 2.79 vs 3.47, p=.02). 
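As a rough illustration of the kind of between-group contrast reported above, the sketch below compares feedback valence scores for two simulated rater groups with an independent-samples t-test. The group sizes, score distributions and variable names are assumptions for illustration only, not the study's data or analysis code.

```python
# Hypothetical sketch of a between-group comparison of feedback valence
# scores (-3 to +3); all numbers are simulated, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated valence scores, one per rater, for a single video.
positive_leh = rng.integers(-1, 4, size=19)  # assumed n per arm
control = rng.integers(-3, 2, size=19)

t, p = stats.ttest_ind(positive_leh, control)
print(f"mean difference = {positive_leh.mean() - control.mean():.2f}, "
      f"t = {t:.2f}, p = {p:.3f}")
```

The study itself compared three groups across five videos and several outcomes, so the actual analysis would be more involved; this only shows the mechanics of a single pairwise contrast.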

In the post-study questionnaire, most raters reported that the LEH had minimal effect on their decisions, but that they might have been influenced more on the first video than on the last. Only 29% of raters guessed the true purpose of the study. 


Conclusions: 
Contrary to previous studies, our results demonstrate minimal effect of LEH on faculty scores or feedback after one encounter.2 No effect of LEH was seen over the subsequent four encounters with the same resident. These results may help alleviate some of the concerns surrounding LEH practices.3 



References (maximum three) 

1. Shaw T, Wood TJ, Touchie C, Pugh D, Humphrey-Murto S. How biased are you? The effect of prior performance information on attending physician ratings and implications for learner handover. Adv Health Sci Educ Theory Pract. 2021 Mar;26(1):199-214. 

2. Humphrey-Murto S, LeBlanc A, Touchie C, Pugh D, Wood TJ, Cowley L, Shaw T. The Influence of Prior Performance Information on Ratings of Current Performance and Implications for Learner Handover: A Scoping Review. Acad Med. 2019 Jul;94(7):1050-1057. 

3. Kassam A, Ruetalo M, Topps M, et al. Key stakeholder opinions for a national learner education handover. BMC Med Educ. 2019;19(1):150. 

Yoon Soo Park1
Sean Hogan2 and Eric Holmboe2
1 University of Illinois College of Medicine
2 Accreditation Council for Graduate Medical Education




Background:
Postgraduate training in the United States requires formative assessments of learners using the Accreditation Council for Graduate Medical Education (ACGME) Milestones assessment system. Recently, the ACGME implemented Milestones 2.0, allowing specialties to use the same “Harmonized” Milestones for Professionalism, Interpersonal and Communication Skills, Systems-Based Practice, and Practice-Based Learning and Improvement. This study examines factors contributing to variability in postgraduate year (PGY)-1 Harmonized Milestones ratings and differences between specialties. 


Summary of Work:
We use data from the 2021 and 2022 entering cohorts in the eight largest specialties: Emergency Medicine, Family Medicine, Internal Medicine, General Surgery, OBGYN, Psychiatry, Radiology, and Pediatrics. Variance components analyses were conducted using cross-classified random-effects models, accounting for clustering at the residency program, medical school, and specialty levels. 
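As an illustration of the analytic approach, the sketch below fits a cross-classified random-effects model on simulated data and expresses each variance component as a share of total variance. It uses Python's statsmodels via its documented single-group variance-components pattern for crossed factors; the data, effect sizes and column names are assumptions, not the study's dataset or code.

```python
# Illustrative cross-classified variance components on simulated data.
# Residents are crossed with (not nested in) medical schools, since a
# residency program draws trainees from many schools.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_programs, n_schools, n_per_program = 30, 20, 10

prog_eff = rng.normal(0, 0.6, n_programs)   # assumed program-level spread
school_eff = rng.normal(0, 0.2, n_schools)  # assumed school-level spread

rows = []
for p in range(n_programs):
    for _ in range(n_per_program):
        s = rng.integers(n_schools)
        rows.append({"program": p, "school": s,
                     "milestone": 3.0 + prog_eff[p] + school_eff[s]
                                  + rng.normal(0, 0.7)})  # trainee residual
df = pd.DataFrame(rows)
df["one"] = 1  # single dummy group so both factors enter as crossed VCs

res = smf.mixedlm(
    "milestone ~ 1", df, groups="one", re_formula="0",
    vc_formula={"program": "0 + C(program)", "school": "0 + C(school)"},
).fit()

total = res.vcomp.sum() + res.scale
for name, v in zip(res.model.exog_vc.names, res.vcomp):
    print(f"{name}: {v / total:.0%} of total variance")
print(f"trainee-level residual: {res.scale / total:.0%}")
```

Each printed proportion is the analogue of the shares reported in the Results below (e.g. program-level vs trainee-level variance).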


Results:
We analyzed data from 62,005 residents (2,919 programs). When comparing across specialties, specialty accounted for the largest share of variance (24%) across competencies. Within specialties, variance components for trainees, residency programs, and medical schools accounted for 29%, 35%, and 2% of total variance, respectively. Learner variance within specialties varied substantially: in Internal Medicine and General Surgery it was 48% and 32%, respectively, on the Harmonized Milestones, meaning programs are identifying different performance levels among trainees. 


Discussion:
Understanding the factors that contribute to learner variance during PGY-1 is essential for determining the focus and allocation of training resources and for preparing learners to transition successfully into postgraduate training. Our findings show substantial differences in the use of Harmonized Milestones across specialties. Despite specialty differences, the Harmonized Milestones offer opportunities to examine variability in learner performance and developmental patterns. 


Conclusions
The Harmonized Milestones can be used to examine developmental trajectories of learners within a specialty and to identify factors contributing to longitudinal variability in resident Milestones ratings. 


Take-Home Messages:
Harmonized Milestones can identify learners of different performance levels using consistent performance categories and subcompetencies within specialty. 



References (maximum three) 

  1. Park YS, Hamstra SJ, Yamazaki K, Holmboe E. Longitudinal reliability of Milestones- based learning trajectories in Family Medicine residents. JAMA Netw Open. 2021;4(12): e2137179. doi: 10.1001/jamanetworkopen.2021.37179. 

  2. Park YS, Ryan MS, Hogan SO, Berg K, Eickmeyer A, Fancher T, Farnan J, Lawson L, Turner L, Westervelt M, Holmboe E, Santen SA. Transition to residency: National study of factors contributing to variability in learner performance in Emergency Medicine and Family Medicine Milestone Ratings. Acad Med. doi: 10.1097/ACM.0000000000005366. 

Pieter Jansen1
Gabriela Mena Ribadeneira1 and Asela Olupeliyawa1
1 Academy for Medical Education, Medical School, The University of Queensland


Background:
Final-year medical students undergo a marked transformation from senior student to junior professional. Supporting students in identifying their own learning goals related to “Intern Preparedness” may be particularly useful at this stage of learning. 


Summary of work:
As part of the assessment of their Elective, final-year medical students in our Doctor of Medicine (MD) Program write a standardised learning plan and reflective essay to formulate and reflect on their personal learning goals. In 2023, we asked students to specifically identify goals related to Intern Preparedness based on the Medical Deans Australia and New Zealand Guidance Statement: Clinical Practice Core Competencies for Graduating Medical Students (2020) (1). We evaluated the self-reported learning goals and end-of-placement reflections related to Intern Preparedness from 50 final-year medical students. 


Results:
Learning goals related to “Clinical communication”, “Clinical knowledge and skills” and “Safe prescribing” were reported most frequently, while those related to “Health systems”, “Public Health” and “Indigenous health” were reported least frequently. Reflective essays suggested opportunities and challenges encountered when pursuing these goals. 


Conclusion and Discussion:
Students’ self-reported learning goals are strongly centred on the technical skills of medicine, whereas goals related to the broader context of medicine are relatively underreported. This could reflect students’ lack of awareness of the full breadth of graduate outcomes, but the specific context of individual placements is also likely to be relevant. The relationship between placement context and students’ learning goals is likely bidirectional: students’ goals and aspirations will strongly determine their choice of placement, and in turn their learning goals will be influenced by the opportunities that placement presents. 

Take-home messages:
  • Encouraging students to define their own learning goals is a useful strategy to support their transition to clinical practice and foster a mindset of lifelong learning (2); 
  • Reflection is an important strategy for curriculum development to enhance Intern Preparedness teaching and learning. 




References (maximum three) 
1. MDANZ Guidance Statement: Clinical practice core competencies for graduating medical students; May 2020. Available from: https://medicaldeans.org.au/md/2023/06/mdanz_2020_may_core_competencies.pdf 

2. Ross et al. Effective competency-based medical education requires learning environments that promote a mastery goal orientation: A narrative review. Med Teach. 2022 May;44(5):527-534. 

Holly Caretta-Weyer1
1 Stanford University School of Medicine 



Background:
Central to competency-based medical education (CBME) is the need for a developmental continuum of training and practice. Trainees currently experience significant discontinuity in the transition from undergraduate (UME) to graduate medical education (GME). The learner handover aims to smooth this transition; however, little is known about the GME perspective on the desired content of the handover or the process of receiving such a handover from UME stakeholders. 


Summary of Work:
Using case study methodology, we conducted semi-structured interviews with twelve emergency medicine program directors in the US. Participants were asked to describe the ideal content and process of a learner handover from UME to GME. Conventional content analysis was performed using an inductive approach. 


Results:
A model was designed based on the desired content of a learner handover from UME to GME. This model includes a summary of the student's assessment progress on the UME EPAs, progress on specialty-specific EPAs, a reflection on diagnostic reasoning and critical thinking skills, team leadership and communication, follow-through on professional responsibilities, capacity for self-directed learning, and how to facilitate wellbeing in GME training. An ideal process was also defined for transmitting and utilizing the handover. 


Discussion/Conclusions:
Traditionally, entering residents are treated by program directors as blank slates due to the lack of a learner handover. Program directors desire an honest assessment of a trainee's strengths and growth areas in order to aid them in their transition. A learner handover following the proposed model will ameliorate much of the current discontinuity. 


Implications:
Formal evaluation of the proposed learner handover process is essential to ensure the needs of all stakeholders are met. Additionally, approaches to adapt the model across other specialties and contexts will need to be developed, piloted, and evaluated to determine what works, for whom, and in what context to inform this work going forward. 



References (maximum three) 

1. Morgan HK, Mejicano GC, Skochelak S, Lomis K, Hawkins R, Tunkel A, et al. A responsible educational handover: Improving communication to improve learning. Acad Med. 2020;95:194–199. 

Deborah O'Mara1
1 University of Sydney Medical School / AMEE / ANZAHPE 



Background:
The medical education literature includes many predictive studies using linear or logistic regression, particularly in selection research. The emphasis on “predictive validity” remains, though it is contrary to modern views of assessing evidence for validity, as outlined in studies such as Kane (1992) and Cook et al. (2015). 

An exploratory study of the literature suggests that many studies of predictive validity evaluate selection instruments as the predictor and performance in medical school as the outcome variable. Too often, studies are limited to easily measured outcomes, such as academic scores or compromised measures of a “good doctor”, or explain only a small proportion of variance (e.g. <20%). 
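To make the variance-explained point concrete, the sketch below runs the typical design being critiqued on simulated data: regress a medical-school outcome on a selection score and report R². The correlation, sample size and variable names are assumptions, chosen so that roughly 16% of variance is explained, in line with the <20% figure above.

```python
# Hypothetical "predictive validity" design on simulated data: how much
# variance in a medical-school outcome does a selection score explain?
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 500
selection_score = rng.normal(0, 1, n)
# Outcome only weakly driven by the selection score (true r ~ 0.4).
school_performance = 0.4 * selection_score + rng.normal(0, 0.92, n)

fit = stats.linregress(selection_score, school_performance)
print(f"R^2 = {fit.rvalue**2:.2f}")  # ~0.16, i.e. <20% variance explained
```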


Summary of work
This presentation will summarise predictive studies in medical education selection and assessment in the past 20 years, identifying the amount of variance explained, types of dependent and independent variables, design and usefulness. Alternative research designs will be offered. 


Results:
It is expected that the fallacy of the predictive research design will be supported by several factors: the design being more suitable for heterogeneous populations and data where parametric statistical principles apply; insufficient sample sizes; low proportions of cases on predictor variables in logistic regression, such as student professionalism issues; outcome measures with limited generalisability; and most of the variation remaining unexplained. 


Discussion:
Alternative research designs more suitable to medical education selection, such as segmentation and longitudinal studies focusing on the differences between target groups, will be discussed. 


Conclusions:
The variables chosen as outcomes frequently disappoint, as does the proportion of variance explained in predictive studies of medical education. Such designs are not suitable for new assessment systems, such as programmatic assessment, or for widening-access selection policies. 


Take home messages:
Think carefully before assuming you need to do a predictive study for your medical education or selection research. Consider alternative research designs. 


References (maximum three) 

Cook, D. A., Brydges, R., Ginsburg, S., & Hatala, R. (2015). A contemporary approach to validity arguments: a practical guide to Kane's framework. Medical Education, 49(6), 560-575. https://doi.org/10.1111/medu.12678 

Kane, M. T. (1992). An argument-based approach to validity. Psychological Bulletin, 112(3), 527-535.