Presentation Description
Tim Wilkinson1,2
Mike Tweed1,2, Rola Ajjawi3, Walter Tavares4, Jacob Pearce5, Imogene Rothnie6, A Curtis Lee1, Libby Newton1 and Inam Haq1
1 Royal Australasian College of Physicians
2 University of Otago
3 Deakin University
4 University of Toronto
5 Australian Council for Educational Research
6 ANZAHPE, AES
Clinical competence assessment is undergoing a paradigmatic shift (1). Programmatic assessment approaches, based on constructivist views of competence and its assessment, are gathering momentum, encouraging reduced reliance on post-positivist, point-in-time examinations to certify competence. However, for some postgraduate training programs keen to engage with the promise of programmatic assessment, highly structured point-in-time examinations retain the public and institutional trust needed to certify individuals’ competence to practise.
Consequently, curriculum custodians in these programs are evaluating assessment frameworks from these apparently divergent perspectives, applying such lenses as cost-benefit, feasibility, validity evidence, and participant well-being. In this context, many also find themselves balancing different stakeholder and theoretical perspectives on assessment, shaped by often implicitly held conceptual assumptions about competence.
In this symposium, experts in clinical competence assessment and those working to implement postgraduate CBME programs will debate several dimensions of this challenge. Drawing on scholarly work, applied practice and stakeholder perspectives, the panel will debate whether and how these divergent assessment paradigms could co-exist in curricula and what we gain or lose by merging them or selecting one over the other.
Importance:
This topic is important as many health education programs find themselves combining traditional and CBME-informed assessment approaches, as both exhibit strengths and challenges for institutions, certifying bodies and participants. Despite the theoretical promise of programmatic assessment, in practice, the resource burden associated with implementation and quality of assessor feedback challenge the ability to produce a sufficient foundation of assessment data. Conversely, concerns about the robustness of barrier/certifying examinations, stressors experienced by examinees and the burden associated with large scale exam delivery impact the future of clinical competence examinations.(2)
Format:
A discussion/debate to explore paradigmatic dimensions of assessment programs through the following perspectives to derive insights for practical application:
- Stakeholders in the postgraduate certification of clinical competence:
  - Specialist medical colleges
  - Trainees
  - Public expectations
- Current and emerging perspectives in assessment research:
  - The concept of ‘assessment science’, which acknowledges the array of philosophical perspectives on assessment and their impact on assessment implementation (3).
  - Adopting inclusivity as a principle in assessment design and research.
The audience will submit questions and perspectives in person and online to prioritise discussion topics such as:
- What does this paradigm plurality in medical education research mean for assessment features?
- Do we need a new framework to research and evaluate the quality of our clinical competence assessment programs?
- Can we have a programmatic assessment approach for learning alongside point-in-time assessment for certification? Is this a philosophical issue influenced by different interpretations of competence?
Panelists will be identified subject to acceptance of the abstract and will include local and international scholars, educators and program participants.
Outcomes:
- This discussion will unpack how these emerging and predominant assessment approaches, embedded in different paradigmatic frameworks, might (or might not) operate within curriculum frameworks. In doing so, this symposium will create an opportunity for conference participants to reflect on tensions and opportunities within their own institutional assessment programs. The symposium will close by distilling practical insights for application.
References
1) Van Melle E, Frank JR, Holmboe ES, Dagnone D, Stockley D, Sherbino J. (2019) A Core Components Framework for Evaluating Implementation of Competency-Based Medical Education Programs. Academic Medicine 94(7):1002-1009. DOI: 10.1097/ACM.0000000000002743
2) Australian Medical Council (2021). Effecting reforms to Australia's Specialist Medical Training and Accreditation System post COVID-19. Report 4: Changes in assessment programs – Opportunities for system improvement. Retrieved from Covid-19 Reforms (amc.org.au), 10.08.23
3) Tavares W, Pearce J. (2023) Attending to Variable Interpretations of Assessment Science and Practice. Teaching and Learning in Medicine. DOI: 10.1080/10401334.2023.2231923