Ottawa 2024

Quality assurance, integration and alignment

Oral Presentation

4:00 pm

26 February 2024

M214

Session Program

Varna Taranikanti1
Anamika Sengupta2 and Bei Zhang3
1 Oakland University William Beaumont School of Medicine
2 University of Illinois College of Medicine
3 University of Vermont Larner College of Medicine




Background
Educational experiences that are active, contextual, integrated, and student-owned lead to meaningful and profound learning. Curricula and their implementation therefore require restructuring in line with modern learning theories (constructive, collaborative, contextual, and self-directed) [1]. Integration, one of the primary themes and one that encompasses content, assessment, and pedagogy, requires creativity from teachers and collaborative effort from learners to convey effectively. In this symposium, we will present a comprehensive review of the integration accomplished thus far in content, assessment, and pedagogy, the methodologies employed to achieve it, and the untapped possibilities for further integration to elevate teaching and learning.


Why is the topic important for research and/or practice?
Integration is the artful harmonization of pre-existing components into a cohesive and meaningful composite. A well-integrated curriculum follows the constructivist principle of learning and exhibits a logical, coherent structure that is much easier for students to navigate. In addition, integrating basic science content with clinical sciences early in the curriculum helps contextualize learning and develops critical thinking and analytical skills.

An effective integrated curriculum necessitates integrated assessments that combine the learning outcomes from multiple courses into a single, cohesive evaluation [2]. An integrated assessment starts with integrated course material or sessions co-developed and co-facilitated in a case-based manner by basic science faculty and clinicians, with learning objectives from earlier curricular units routinely revisited in advanced units with an emphasis on their clinical applications. Assessment items are created from vignettes based on real patient scenarios through close collaboration between basic science and clinical faculty. An integrated assessment is an important tool for guiding students through the preclinical curriculum with clinical context while honing their critical-thinking and problem-solving skills. Integrated assessments assess not only what students learn but also how they learn.


Symposium format, including participant engagement methods
The symposium will start with three mini-presentations, each focused on a specific aspect of integration: content, assessment, and pedagogy. Participants will then join small-group discussions based on their interests to delve deeper into the topics. Finally, insights and findings will be shared with the whole group.
Part I: Mini-presentations to invite insights and ideas (30 minutes)
- Longitudinal Integration of Medical Biochemistry to Improve Student Interest (10 minutes)
- Integrated Assessment for Integrated Content (10 minutes)
- Integrate Teaching and Learning Using Reflective and Collaborative Writing (10 minutes)
Part II: Speakers will facilitate small-group discussions with participants to identify challenges, explore opportunities, and develop methodologies with regard to content, assessments, and pedagogy (30 minutes).
Part III: Insights and findings from small groups will be shared with the whole group, followed by Q&A (30 minutes).


Take-home messages/symposium outcomes/implications for further research and/or practice
Integrating preclinical content, assessments, and pedagogy is essential to ensure medical students are adequately prepared for the clinical years. Collaboration among educators and students is crucial in designing and implementing integrated curricula, assessments, and pedagogy. This partnership ensures that preclinical teaching and learning are realistic, relevant, and reflective of the challenges of real-world clinical practice.

References (maximum three) 

1. Dolmans, D.H., et al., Problem-based learning: future challenges for educational practice and research. Med Educ, 2005. 39(7): p. 732-41. 

2. Hwang, J.E., et al., Individual class evaluation and effective teaching characteristics in integrated curricula. BMC Med Educ, 2017. 17(1): p. 252. 

Judi Walker1
Anthea Dallas1
1 University of Tasmania



1. Background
The Tasmanian School of Medicine is building a flexible five-year Evaluation and Quality Improvement Framework as a roadmap to help us monitor the implementation, impact, and outcomes of the new Tasmanian Medicine Program and to prepare for the next accreditation cycle in 2027. 


2. Summary of Work
“Evaluation” is defined as “the set of policies and processes by which a medical education provider determines the extent to which its training and education functions are achieving their intended outcomes” [1]. The Framework's key outcome drivers are Quality Improvement, Medical Education Research, and Accreditation.

Complexity theory [2] provides a different and useful perspective for developing the Framework and choosing evaluation models that serve program needs more effectively. It allows us to avoid an overly narrow or simplistic approach to our work.


3. Results
The Framework is mapped to the Australian Medical Council's (AMC's) revised Accreditation Standards using their identified thematic areas of change. These areas are used in preference to the six Accreditation Standards.


4. Discussion
We are building a Framework that can simplify data collection and analysis by aligning medical education research questions with those asked by the AMC, the university, unit coordinators, students, and external stakeholders. It is informing and supporting committees and professional staff in achieving evaluation outcomes.


5. Conclusion
This flexible five-year Framework provides a roadmap to help us monitor the implementation, impact, and outcomes of the new Program and to prepare for the next accreditation cycle in 2027. 

6. Take-home messages / implications for research and practice 
- The Framework provides annual evaluation plans and guidelines that tell staff which evaluations are occurring and when, and it allows for new or one-off evaluation projects.
- The Framework is a living document: it is under construction and will be developed progressively over a five-year cycle.



References (maximum three) 

[1] Australian Medical Council. Standards for Assessment and Accreditation of Primary Medical Programs. July 2023. p 3. 

[2] Jorm C, Roberts C. Using Complexity Theory to Guide Medical School Evaluations. Academic Medicine. 2018;93(3).

Kent Hecker1
Courtney Vengrin2, Janine Hawley2 and Heather Case2
1 International Council for Veterinary Assessment/University of Calgary
2 International Council for Veterinary Assessment




Background:
As a requirement for licensure in both the United States and Canada, the North American Veterinary Licensing Examination (NAVLE) is of critical interest to veterinary medical education. While many factors can relate to candidate performance on the NAVLE, identifying those that can be addressed through educational interventions provides both candidates and educators with direction and actionable information (1).

One such source of information is the Veterinary Educational Assessment (VEA). The VEA is a 240-item, web-based, multiple-choice examination covering the basic veterinary medical sciences across five main content areas: anatomy, physiology, pharmacology, microbiology, and pathology. Linkages between the VEA and the NAVLE have been demonstrated previously (2,3). By further examining the content areas of the VEA as well as other factors, we aim to identify potential areas of focus for veterinary medical education.


Summary:
Data were collected across 14 VEA administrations spanning January 2019 to May 2023, representing students at 20 institutions. These data were matched to candidates taking the NAVLE between Fall 2020 and Spring 2023 (n = 4436). Factors investigated from the VEA included the overall VEA score and the scores in the five content areas. NAVLE scores from first-time test takers were utilized. A multiple regression analysis was performed to determine how well the VEA content-area scores predict NAVLE scores.
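As an illustration of the analysis described above, here is a minimal sketch in Python. The file name and column names (navle_score plus the five content areas) are hypothetical placeholders; the abstract does not describe the actual data layout, so this is a sketch of the general approach rather than the authors' code.

```python
# Minimal sketch of the reported analysis: ordinary least squares
# regression of NAVLE scores on the five VEA content-area scores.
# File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# One row per first-time NAVLE candidate matched to a VEA record.
df = pd.read_csv("vea_navle_matched.csv")

# NAVLE score regressed on the five VEA content-area scores.
model = smf.ols(
    "navle_score ~ anatomy + physiology + pharmacology"
    " + microbiology + pathology",
    data=df,
).fit()

# The summary reports the overall F statistic, R-squared, and the
# coefficient and p-value for each content area.
print(model.summary())
```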


Results:
The VEA content-area scores predicted NAVLE score, F(5, 4430) = 889.339, p < .0005, R² = .501. All content-area scores added significantly to the prediction (p < .001), with the strongest predictors being anatomy, physiology, and pathology.
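As a consistency check (an editorial addition, not part of the abstract's reported results), the F statistic and R² above satisfy the standard identity for the overall F test of a regression with k = 5 predictors and n = 4436 candidates:

$$F = \frac{R^2/k}{(1 - R^2)/(n - k - 1)} = \frac{0.501/5}{0.499/4430} \approx 889.6,$$

which agrees with the reported F(5, 4430) = 889.339 up to rounding of R².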


Discussion:
Identifying content areas within the basic veterinary medical sciences that are predictive of NAVLE performance allows educational interventions and decisions to be made with greater clarity.

Conclusion:
The VEA remains a predictor of NAVLE scores, and separate investigation of the subscores provided greater insight. 



References (maximum three) 

1. Roush JK, Rush BR, White BJ, Wilkerson MJ. Correlation of pre-veterinary admissions criteria, intra-professional curriculum measures, AVMA-COE professional competency scores, and the NAVLE. J Vet Med Educ. 2014;41(1):19-26.

2. Danielson JA, Wu TF, Molgaard LK, Preast VA. Relationships among common measures of student performance and scores on the North American veterinary licensing examination. J Am Vet Med Assoc. 2011;238:454-61. doi: 10.2460/javma.238.4.454.

3. Danielson JA, Burzette RG. GRE and Undergraduate GPA as Predictors of Veterinary Medical School Grade Point Average, VEA Scores and NAVLE Scores While Accounting for Range Restriction. Front Vet Sci. 2020;7:576354. doi: 10.3389/fvets.2020.576354.