Ottawa 2024

Theoretical and regulatory matters

Oral Presentation

2:00 pm

27 February 2024

M207


Anne Kawamura
Briseida Mema1 and Dominique Piquette
1 Critical Care Medicine Department, Hospital for Sick Children


Background
Entrustment is a key activity in competency-based medical education (CBME).
Supervisors must decide whether the learner is able to work without supervision and whether they are willing to take the risks inherent in the learner working independently (1). Therefore, educators must make entrustment decisions about a learner’s capability to adapt to novel contexts and to generate new knowledge. Yet many of the assessments used in CBME assess current ability, based on retrospective data from observed performances, not future practice. If we are to make judgements about future practice, supervisors must assess learners’ ability not only to perform efficiently and effectively in routine situations, but also to innovate when faced with new, complex, or unexpected challenges (1). Thus, entrustment entails decisions about a learner’s preparation for future learning (2). Preparation for future learning (PFL) is “the ability to learn new information, make effective use of resources, and invent new procedures in order to support learning and problem solving in practice” (2).

We know that we can assess learners’ PFL through controlled ‘double transfer’ designs under experimental conditions, where previous knowledge that is ‘transferred in’ can be used with new learning resources to ‘transfer out’ to solve novel problems (3). What remains unclear is how we can assess PFL in workplace learning environments, where learners’ performances are steeped in context and contingent on the constraints, affordances, and effectivities of both human and non-human elements. This workshop will explore the ways in which educators can assess PFL in the messy, unstructured workplace context, enabling them to entrust learners with future practice and to assess generative transfer in novel ways.


Why is the topic important for practice?
Approaching assessment with a PFL lens prompts us to focus on how the learner effectively uses knowledge and skills for future learning. However, because PFL, like other ‘skills’ and ‘traits’, is a contextually bound state, context matters when considering how to assess PFL in the workplace. Focusing on the interaction between PFL and context will be important for assessing future practice.


Workshop format
This interactive workshop will engage participants through a combination of short presentations, small group activities with large group debriefs, and case-based discussion. The proposed structure of the workshop is as follows:

  1. Short presentation with large group discussion on assessing capability vs. competence

  2. Presentation of a case study with small group discussion of the pros and cons of PFL assessment

  3. Mini lecture and small group discussion exploring how context interacts with PFL assessment strategies


Who should participate?
Educators, clinical teachers, researchers with an interest in assessment of preparation for future learning 


Level of workshop: Intermediate


Take-home messages

  1. Capabilities can be assessed through observing learners as they face novel and complex problems of practice 

  2. Assessment of learners’ networks of conceptual understanding may provide important measures of PFL 

  3. Entrustment decisions rely on judgements of the learner’s capability to generate new knowledge as well as the supervisor’s attention to context 



Maximum number of participants - 25 


References (maximum three) 

1. Ten Cate O, Carraccio C, Damodaran A, et al. Entrustment decision making: extending Miller's pyramid. Academic Medicine. 2021;96(2):199-204. doi:10.1097/ACM.0000000000003800

2. Mylopoulos M, Brydges R, Woods NN, Manzone J, Schwartz DL. Preparation for future learning: a missing competency in health professions education? Medical Education. 2016;50(1):115-23. doi:10.1111/medu.12893

3. Mylopoulos M, Woods N. Preparing medical students for future learning using basic science instruction. Medical Education. 2014;48(7):667-73. doi:10.1111/medu.12426

Clarisse Chu1,2
Rehena Ganguly2, Neville Teo1,2 and Abhilash Balakrishnan1,2
1 Singapore General Hospital
2 Duke-NUS Medical School




Background:
The ACGME-I Singapore Otorhinolaryngology residency programme started in 2011, and our first Exit MCQ Examinations were held at the end of the fourth year of residency, in 2015. Residents in both Singapore and the US sit the same Otolaryngology Training Examination (OTE) annually; residents in Singapore take the OTE in their first to fourth years of the five-year residency programme. In contrast, the Exit MCQ Examination, co-administered by the American Board of Medical Specialties, is a local examination distinct from the US Board Examinations.

Multiple specialties have described a positive association between in-training examination scores and final board MCQ examination pass rates.1,2 Our study aims to demonstrate that OTE scores can be used to predict performance in our local Exit MCQ Examination.


Summary of Work:
A retrospective review was performed of all 24 otorhinolaryngology residents who entered a single institution's residency programme and took the Exit MCQ Examination between 2016 and 2023.


Results:
Eighteen of 24 residents (75%) passed the Exit MCQ Examination at their first sitting. Univariate logistic regression analyses showed that lower OTE stanines in the fourth year of residency were significantly associated with failing the Exit MCQ Examination.

Youden’s index showed that attaining an OTE stanine <4 in the fourth year of residency training was the cutoff most strongly associated with failing the Exit MCQ Examination.
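
To make the analytic steps above concrete, the sketch below fits a univariate logistic regression and then reads Youden's index off an ROC curve to locate an optimal stanine cutoff. The stanines (a normalised 1-9 scale) and pass/fail outcomes are invented for illustration, and Python with statsmodels and scikit-learn stands in for whatever software the authors used; this is a minimal sketch of the method, not the study's analysis.

```python
# Minimal sketch with invented data: univariate logistic regression of
# Exit MCQ failure on fourth-year OTE stanine, then Youden's index to
# pick the optimal cutoff.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_curve

stanines = np.array([2, 2, 3, 3, 4, 5, 5, 6, 7, 8])  # fourth-year OTE stanines (hypothetical)
failed   = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])  # 1 = failed Exit MCQ (hypothetical)

# Univariate logistic regression: is stanine associated with failure?
logit = sm.Logit(failed, sm.add_constant(stanines)).fit(disp=0)
print(logit.pvalues)  # p-value for the stanine coefficient

# Youden's J: lower stanines should predict failure, so negate the score.
fpr, tpr, thresholds = roc_curve(failed, -stanines)
j = tpr - fpr                        # J = sensitivity + specificity - 1
cutoff = -thresholds[np.argmax(j)]   # undo the negation
print(f"Maximum J at stanine <= {cutoff:g}")  # here: stanine <= 3, i.e. < 4
```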


Discussion:
OTE scores in the fourth year of residency may be a better predictor of Exit MCQ performance than scores from earlier years. Optimal OTE score targets for each year of residency were established.


Conclusion:
Our findings will facilitate the identification of residents at risk of performing poorly in the final Exit MCQ Examination, such that remediation measures can be instituted early. 


Take Home Message: The OTE serves as an important goalpost for resident learning as they work towards specialist accreditation. 




References (maximum three) 

  1. Rayamajhi S, Dhakal P, Wang L, Rai MP, Shrotriya S. Do USMLE steps, and ITE score predict the American Board of Internal Medicine Certifying Exam results? BMC Med Educ. 2020;20(1):79. 

  2. Puscas L. Junior otolaryngology resident in-service exams predict written board exam passage. Laryngoscope. 2019;129(1):124-128. 

Haydeé Parra Acosta1
Annaliz Tena Hagelsieb1 and José López Loya1
1 Autonomous University of Chihuahua



Introduction:
In the face of the emergence of SARS-CoV-2, medical faculties adapted their evaluative methodologies to assess students' academic performance. Competency assessment shifted to the virtual realm, adopting approaches such as summative, formative, and socioformative. The latter not only verifies students' learning but also measures achievement levels in specific competencies and highlights areas for improvement through metacognition, continuous feedback, self-assessment, and peer assessment, facilitating decisions tailored to learning needs (Tobón, 2018; Hernández et al., 2018). Formative and socioformative approaches promote comprehensive, continuous, systematic, participatory, and flexible assessment (Chavira et al., 2022). However, there is limited information on the predominant evaluative strategies during the pandemic and their impact on medical education. 


Objective: 
To determine which evaluative strategy implemented during the pandemic favors the development of specific medical competencies. 


Methodology:
A cross-sectional quantitative study was conducted. A simple random sample of 240 students (5th to 9th semester) from the Medical Surgeon Degree program at three Mexican universities was selected. A validated instrument (Cronbach's α = 0.909) comprising 51 items was used. The data were analyzed using SPSS v.25, employing descriptive and comparative statistics with significance set at p < .05, and correlation analysis based on Pearson's criteria.
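
As a minimal illustration of the correlation step described above, the sketch below computes a Pearson coefficient in Python, with scipy standing in for SPSS; the ratings are invented for illustration only and are not the study's data.

```python
# Invented ratings: Pearson correlation between reported use of an
# assessment strategy and perceived competency development.
from scipy.stats import pearsonr

osce_virtual_sim = [3.2, 4.1, 3.8, 4.5, 2.9, 4.8, 3.5, 4.2]  # strategy-use ratings
competency_dev   = [3.0, 4.3, 3.6, 4.4, 3.1, 4.6, 3.4, 4.0]  # competency ratings

r, p = pearsonr(osce_virtual_sim, competency_dev)
print(f"r = {r:.2f}, p = {p:.3f}")  # r > 0.50 reads as a moderate positive correlation
```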


Results: 
56% of respondents indicated that written evaluations on virtual platforms were the most common strategy during the pandemic. 41% mentioned frequent use of rubrics, and 42% highlighted sporadic use of the Objective Structured Clinical Examination (OSCE). Only 29% reported occasional self-assessments and peer assessments. 8th-semester students valued socioformative rubrics more highly for competency development. A moderate positive correlation (r > 0.50) was found between competency development and the OSCE with virtual simulation, socioformative rubrics, and feedback, despite these being less utilized strategies.


Take home message:
Socioformative evaluation enhances competency development by identifying achievements and areas for improvement. It is crucial to enhance online OSCE assessment and integrate socioformative rubrics. 



References (maximum three) 

Chavira, J. A. P., Acosta, H. P., López, A. G., Loya, J. L., Sánchez, C. R. C., & Acosta, A. K. N. (2022). The Socioformative Rubrics in the OSCE to Assess the Level of Achievement of the Competencies Comprising the Profile of the Physician Graduate. Educación Médica, 23(3), 100740. https://www.sciencedirect.com/science/article/pii/S1575181322000328 

Hernández Mosqueda, J. S., Tobón Tobón, S., Ortega Carbajal, M. F., & Ramírez Cuevas, A. M. (2018). Socioformative Evaluation in Online Training Processes Through Formative Projects. Educar, 54(1), 147-163. https://ddd.uab.cat/record/184664 

Tobón, S. (2018). 1.7 Socioformative Evaluation Methodology. Socioformative Evaluation, 32. https://cife.edu.mx/recursos/wp-content/uploads/2018/08/Libro-evaluaci%C3%B3n-socioformativa-4.0.pdf#page=32

David Rojas1
1 University of Toronto 



Background
The evaluation of the Competence By Design (CBD) training model in Canada has so far focused on the process of implementation. Evidence shows strengths around the creation of Competency Committees, but also challenges associated with the completion of entrustable professional activity (EPA) assessments, cultural change, and the time and resources required to implement and sustain the model. Currently, there is urgent interest in capturing and understanding the value that the CBD model has offered to postgraduate training (1).


Summary of work
Building on principles of developmental, value-based, and utilization-focused evaluation (2), we designed a 5-year plan to evaluate CBD in Canada, addressing the current evidence gaps. This work will guide the Royal College of Physicians and Surgeons of Canada (RCPSC) evaluation of CBD moving forward.


Results 
The new evaluation approach focuses on capturing the expected value, added value, and created value (3) generated by the CBD model, moving away from process metrics like readiness and fidelity of implementation. The “value” construct is context-dependent (i.e., value for whom) and is composed of the effort, process, cost, and results experienced by each training institution (3). This evaluation plan is underpinned by a value-based, complexity-driven framework (2).


Discussion
The new evaluation plan will help guide the CBD evaluation efforts toward better understanding the CBD contributions to the medical education field. This could, in turn, help inform future evolutions of CBD. 


Conclusions
We propose moving from a linear, process-focused evaluation to an iterative, value-based, complexity-driven model that will help identify and clarify the contributions of CBD. Understanding the value of CBD will be the priority of evaluation efforts in Canada over the next 5 years.


Take-home messages 
A complex educational training model like CBD requires a similarly complex evaluation approach, one able to identify the added value of the model as well as its unintended processes and effects.



References (maximum three) 

1. Van Melle, E., Hall, A. K., Schumacher, D. J., Kinnear, B., Gruppen, L., Thoma, B., Caretta-Weyer, H., Cooke, L. J., & Frank, J. R. (2021). Capturing outcomes of competency-based medical education: The call and the challenge. Medical Teacher, 43(7), 794–800. https://doi.org/10.1080/0142159X.2021.1925640

2. Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. Guilford Press.

3. Brandenburger, A. M., & Stuart Jr, H. W. (1996). Value-based business strategy. Journal of Economics & Management Strategy, 5(1), 5-24. https://doi.org/10.1111/j.1430-9134.1996.00005.x

Richard Filby-Aziz1
Suzanne Chamberlain1
1 General Medical Council



Background 
The Medical Licensing Assessment (MLA) is being launched in the UK in 2024 for all students at UK medical schools and for international medical graduates seeking a licence to practise medicine in the UK via the examination route. The MLA comprises two components: an applied knowledge test and a clinical and professional skills assessment (CPSA). The MLA model, and its delivery, differ between the two components and between UK students and international medical graduates. This presentation will describe the delivery model and outline some of the learning from the MLA development work.


Summary of work
As part of the development of the CPSA component of the MLA, the General Medical Council (GMC) has engaged with all UK medical schools to understand the design features of the clinical and professional skills assessments delivered in the final, or penultimate, year of each medical degree. During this engagement we captured information about technical design elements, including the number of stations, station testing time, standard setting methods, and use of conjunctive standards, and some delivery features such as the number of testing sites. 
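
One of the captured design elements, the use of conjunctive standards, may be unfamiliar. The sketch below contrasts a conjunctive rule with a purely compensatory one using invented station scores, cut scores, and a two-hurdle rule; the numbers and rules are hypothetical illustrations, not the CPSA requirements.

```python
# Invented example: compensatory vs conjunctive pass rules for an
# OSCE-style exam. A conjunctive standard requires several hurdles
# to be met at once, so a strong average cannot offset a failed hurdle.
station_scores = [62, 71, 48, 80, 65]   # candidate's station scores (hypothetical)
station_cuts   = [55, 60, 50, 60, 60]   # per-station cut scores (hypothetical)
overall_cut    = 60                     # overall cut score (hypothetical)
min_stations   = 5                      # stations that must be passed (hypothetical)

mean_score   = sum(station_scores) / len(station_scores)
stations_met = sum(s >= c for s, c in zip(station_scores, station_cuts))

compensatory = mean_score >= overall_cut                       # single hurdle
conjunctive  = compensatory and stations_met >= min_stations   # both hurdles

print(f"mean={mean_score:.1f}, stations passed={stations_met}")
print(f"compensatory pass: {compensatory}, conjunctive pass: {conjunctive}")
# Here the candidate passes on the compensatory rule (mean 65.2 >= 60)
# but fails the conjunctive rule, having passed only 4 of 5 stations.
```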


Discussion
Our engagement with medical schools provides evidence of the diversity of approaches used to assess clinical and professional skills in UK undergraduate medical education. Our understanding of this diversity has been critical to informing the GMC's development of the MLA model, and a set of 20 requirements for the CPSA component, against which medical schools must provide evidence of compliance. The GMC plans to share, with the UK medical schools in the first instance, the learning and examples of good practice that have emerged from this work. 


Conclusions
There is significant diversity across UK medical schools in how clinical and professional skills assessments are designed and delivered at the end of the medical degree. This diversity appears to be shaped by, for example, philosophical approaches to learning, teaching, curricula, and assessment, and practical concerns relating to feasibility, size of cohort, and resources available. As part of the quality assurance process, it is important that these contextual factors are considered. 


Take home messages
The MLA is being launched in the UK in 2024. The MLA model incorporates flexibility and diversity in the design of the clinical and professional skills component.