Assessment in Postgraduate / Post-registration training
Oral Presentation
11:30 am
28 February 2024
M213
Themes
Theme 8: Evaluation
Session Program
11:30 am
Holly Caretta-Weyer1
1 Stanford University School of Medicine
Problem:
Competency-based medical education (CBME) has established itself as the predominant paradigm for the future of medical education. CBME comprises five core components: 1) an outcomes framework, 2) progressive sequencing of competencies, 3) individualized learning experiences, 4) teaching tailored to competencies, and 5) programmatic assessment with an emphasis on workplace-based assessment (WBA). Each of these components has given rise to both significant promise and potential pitfalls.
Summary of Work:
We embarked on implementing the core components of CBME within Emergency Medicine (EM) across 9 pilot sites representative of the specialty in the US. This included the development and implementation of entrustable professional activities (EPAs) that span the continuum of EM training, the mapping of developmental milestones to the EPAs, the implementation of an adaptable coaching program and individualized learning plan (ILP), and the adoption of programmatic assessment. We subsequently performed a realist evaluation to analyze the implementation across the pilot sites.
Results:
We identified variability in EPA implementation, in the ranges of milestone ratings tagged to each EPA, in coaching models, in adaptations to ILPs, and in the technology used to collect WBA data, as well as in the aggregation and use of programmatic assessment data. We also identified challenges around faculty development, implementation, and feasibility.
Discussion/Conclusions:
Our realist evaluation of a pilot implementation of CBME within a single specialty in the US reveals substantial variability in how the core components are actualized. When implementing across an entire specialty, it is essential to consider feasibility based upon context, what is transferable across all programs, and what represents acceptable variability.
Next Steps:
Formal realist and rapid-cycle evaluation of the implementation of CBME within Emergency Medicine will expand specialty-wide as we engage further with our national partners, providing further lessons learned from our broader implementation as the project progresses to completion.
References (maximum three)
1. Van Melle E, Frank JR, Holmboe ES, Dagnone D, Stockley D, Sherbino J, et al. A core components framework for evaluating implementation of competency-based education programs. Acad Med. 2019;94(7):1002-9.
11:45 am
Ann Lee1
Anoushka Jere1, Lihani du Plessis1, Pascal Van Gerven2, Sylvia Heeneman2 and Shelley Ross1
1 University of Alberta
2 Maastricht University
Background
The transition to competency-based medical education (CBME) has included a focus on best practices in feedback and assessment of learner competence.1 Continuity of supervision (CoS) has been suggested as one of these best practices, but the evidence comes primarily from undergraduate (UG) longitudinal integrated clerkships.2 At the postgraduate (PG) level, there are differences in supervisory relationships that may affect assessment.3 We explored faculty and resident perceptions of the impact of CoS on feedback and assessment. Researching CoS at the PG level will help improve assessment practices essential to medical education.
Summary of Work
We conducted 22 semi-structured interviews involving faculty and residents in a family medicine postgraduate program. Transcripts were analyzed iteratively using constructivist grounded theory methodology.
Results
Interviewees indicated CoS allows faculty to observe development over time and provide iterative feedback and assessment. Both faculty and residents described more confidence in the usefulness of assessments because they are: a) based on more evidence; b) more tailored to each resident; and c) given in the context of a relationship where trust has developed. Interviewees, however, also warned of risks of CoS, including a lack of assessment variety and a risk of assessment bias, particularly when a trusting faculty-resident relationship has not developed.
Discussion
These results suggest that CoS enables faculty to assess residents through a developmental lens, making feedback and assessment more useful for both residents and faculty.
Conclusions
Findings from this study highlight the importance of understanding the impact of different CoS relationships on feedback and assessment during residency, which, in turn, influence learning and decision-making.
Take Home Messages
Where CoS exists, feedback and assessment are perceived to be more useful and to occur through a developmental lens. However, care should be taken to design a workplace-based assessment approach that increases the variety of assessments and reduces the potential risks of CoS.
References (maximum three)
1. Holmboe ES, Osman NY, Murphy CM, Kogan JR. The urgency of now: rethinking and improving assessment practices in medical education programs. Acad Med. 2023;98(8S):S37-S49. doi: 10.1097/ACM.0000000000005251.
2. Mazotti L, O'Brien B, Tong L, Hauer KE. Perceptions of evaluation in longitudinal versus traditional clerkships. Med Educ. 2011 May;45(5):464-70. doi: 10.1111/j.1365-2923.2010.03904.x. PMID: 21486322.
3. Lee AS, Ross S. Continuity of supervision: Does it mean what we think it means? Med Educ. 2021 Apr;55(4):448-454. doi: 10.1111/medu.14378. Epub 2020 Sep 25. PMID: 32929800.
12:00 pm
Debbie Paltridge1
Graeme Campbell1
1 Royal Australasian College of Surgeons
Assessment of Specialist International Medical Graduates (SIMGs) is a critical process to ensure that appropriately qualified and trained doctors are able to work within the Australian healthcare system. Currently, surgeons assessed as partially comparable must undertake a period of supervised practice followed by successful completion of the Fellowship examination. High-stakes exit examinations are designed for trainees completing their training, not for doctors who may have been in practice for many years. In addition, these examinations do not assess the breadth of competencies required of a practicing surgeon.
The Royal Australasian College of Surgeons (RACS) has developed a workplace-based assessment program entitled External Validation of Professional Performance (EVOPP). The EVOPP program of assessment occurs in the SIMG's workplace and requires two trained assessors from the SIMG's specialty to assess their performance over two days. The assessment includes observations (in theatre, in outpatients, and on a ward round), interviews with relevant colleagues, and a case-based discussion using script concordance theory. All evidence is triangulated to determine whether the SIMG is ready for independent practice.
To date, 10 pilots have been undertaken across a range of specialties. Evaluation data indicate that this assessment program is acceptable to both the SIMGs and the assessors, is able to assess across all 10 RACS competencies, and is a valid process. For this to be implemented as a potential replacement for the Fellowship exam, there needs to be clarity as to which SIMGs would be suitable for an EVOPP and what the consequences are if an EVOPP is unsuccessful. This presentation will share the validity data along with the future work required to establish EVOPP as a formal RACS assessment process.
References (maximum three)
1. Nair BKR, Moonen-van Loon JM, Parvathy M, Jolly BC, van der Vleuten CP. Composite reliability of workplace-based assessment of international medical graduates. Med J Aust. 2017 Nov 20;207(10):453. doi: 10.5694/mja17.00130. PMID: 29129176.
2. Lubarsky S, Dory V, Duggan P, Gagnon R, Charlin B. Script concordance testing: from theory to practice: AMEE guide no. 75. Med Teach. 2013;35(3):184-93. doi: 10.3109/0142159X.2013.760036. Epub 2013 Jan 29. PMID: 23360487.