Programmatic Assessment approaches
Oral Presentation
1:30 pm
26 February 2024
M213
Session Program
1:30 pm
Bryan Ashman1
Adrian Cosenza1, Jason Frank2, Ian Incoll3, Chris Kondogiannis1, Markku Nousiainen4, Linda Snell5, James Tomlinson6 and Sindy Vrancic1
1 Australian Orthopaedic Association
2 University of Ottawa
3 University of Newcastle
4 University of Toronto
5 McGill University
6 Sheffield Teaching Hospitals
Background:
There are few published studies describing validity evidence for programmatic assessment in a national-level surgical training program. We describe an evaluation of assessment of competence in the Australian Orthopaedic Association national training scheme, which transitioned from time-in-training to a competency-based model in 2017 (1).
Summary of work:
The evaluation team used a mixed-methods research approach. The workplace-based assessment tools (WBAs), designed to give a longitudinal view of the acquisition of competence in medical, surgical, and professional skills, were analysed for validity and reliability.
Results:
The review found that the use of a mobile device-based application for WBAs and capture of the outcomes in a web-based learning management system provided the necessary data for assessment of competence and feedback on performance. Using WBA results and logbook entries, trainee competence was evaluated by quarterly progress meetings between trainees and their supervisors and end-of-term reviews with Directors of Training at each hospital.
Discussion:
Cumulative data demonstrated gradual progression of autonomy of trainees during the training program and facilitated early intervention for under-performing trainees, resulting in fewer formal remediation events than in the previous time-based program. Challenges identified included resistance to regular documentation of WBAs and variability in the application of the assessment rubrics between individual trainers.
Conclusion:
A core principle of assessment of competence is that it should involve multiple methods and multiple assessors to build a picture of the progression of ability and independence (2). This review confirmed that the new program’s assessment system, which documents progression of competence, provides validity evidence for programmatic assessment.
Implications for practice:
Embracing the use of WBAs to document competence requires buy-in from trainees and trainers. Commitment to engage with the process of collecting and collating outcomes of assessment of performance determines the usefulness of this method of competence assessment in surgical training.
References (maximum three)
1. Incoll IW, Atkin J, Owen J, Khorshid O, Cosenza A, Frank JR. Australian orthopaedic surgery training: Australian Orthopaedic Association’s strategic education review. ANZ J Surg. 2020; 90: 997-1003.
2. Lockyer J, Carraccio C, Chan M-K, Hart D, Smee S, Touchie C, Holmboe ES, Frank JR. Core principles of assessment in competency-based medical education. Med Teach. 2017; 39(6): 609-616.
1:45 pm
Delyse Leadbeatter
Jinlong Gao
Background:
Professionalisation and professionalism feature in all health professional programs and have been the subject of much research. However, it is not easy to see how much progress has been made with our collective understanding of effective ways to teach and assess professionalism since Osborn wrote Punishment: A Story for Medical Education in 2000.
Summary of work:
Professionalism teaching tends to focus on the clinical context; however, a curriculum can be designed in which professionalisation is intentionally integrated throughout the program. We used Bélisle et al.’s (2021) conceptual framework of student professionalisation and Kuh et al.’s (2017) high-impact educational practices together to build a professionalising curriculum for a Doctor of Dental Medicine program.
Results:
Case examples of two spiral-designed series of learning activities and assessments taken by students during the Doctor of Dental Medicine program will be described and student outputs analysed. The first case example invites students to connect with their prior knowledge and develop skills in teaching and feedback conversations. The second case example utilises narrative techniques to connect students with the complex concepts of person-centred care.
Discussion:
Our emphasis is on creating meaningful learning environments for students to participate in and help guide their professional formation. In health professions education, we have previously overemphasised assessment of professionalism by demonstrations (or absence) of behaviours according to rubrics, but this programmatic approach conveys that professionalisation is a unique social process for each individual student.
Conclusions:
Intentionally designed learning activities can facilitate student professionalisation.
Take-home messages / implications for further research or practice:
Professionalism is not an isolated domain of knowledge but is infused in every aspect of the curriculum
A programmatic assessment design enabled the creation of a curriculum to promote professionalisation
References (maximum three)
Bélisle M, Lavoie P, Pepin J, Fernandez N, Boyer L, Lechasseur K, Larue C. A conceptual framework of student professionalization for health professional education and research. Int J Nurs Educ Scholarsh. 2021; 18(1).
Kuh G, O'Donnell K, Schneider CG. HIPs at ten. Change: The Magazine of Higher Learning. 2017; 49(5): 8-16.
Osborn E. Punishment: a story for medical educators. Acad Med. 2000; 75(3): 241-244.
2:00 pm
Claire Palermo1
Andrea Bramley, Andrea Begley, Janica Jamieson, Janeane Dart and Olivia Wright
1 Monash University
Background:
Entrustable Professional Activities (EPAs) are essential tasks executed by a health professional, used to describe activities that learners will be able to perform competently and independently (1). Dietetics in Australia was an early adopter of this assessment innovation and developed an original set of EPAs in 2016; however, national uptake has been poor. This study aimed to revise the EPAs to align with current professional competency standards and contemporary definitions of EPAs.
Summary of work:
The research team reviewed the existing EPAs in relation to evolving evidence on EPAs for entry-level learners. A new set of seven Core Professional Activities (CPAs) was developed through a series of four consensus development workshops, drawing on a convenience sample of dietetics educators across Australia and New Zealand (n=62). The CPAs were mapped to the current national competency standards and included reference to a previously published dietetic entrustment scale.
Results:
The word ‘entrustable’ was replaced with ‘core’ to reflect the shifting positioning and more central role of learners in assessment. Detailed descriptions of the activities required of the learner to execute each CPA were drafted alongside milestone descriptors to support judgement decisions. Existing tools to assess independence and generate performance data to contribute to programmatic assessment were identified, highlighting the multiple processes that universities use to inform assessment decisions.
Discussion:
As programmatic assessment becomes common practice in dietetics, the usefulness of CPAs in making decisions around independence and drawing on multiple assessment data points needs evaluation. Fundamental principles of assessment, including training of assessors, credible and dependable assessment tools, and student agency, remain critical to support quality assessment.
Implications:
For CPAs to have utility, the processes for users (learners, supervisors/educators and faculty) need to be simple, requiring further work in dietetics.
Evolution of core professional activities to support programmatic assessment in dietetics
References (maximum three)
1. Ten Cate O. Nuts and bolts of entrustable professional activities. J Grad Med Educ. 2013 Mar;5(1):157-8.
2:15 pm
Kristy Osborne1
Jennifer Guille2 and Jacob Pearce1
1 Australian Council for Educational Research
2 Australasian College Of Physical Scientists & Engineers In Medicine
The Australasian College of Physical Scientists and Engineers in Medicine (ACPSEM) is responsible for the certification of physicists, scientists and engineers working in medicine in Australia and New Zealand. In 2019, the ACPSEM Board requested reform of their 3-year Training, Education and Assessment Program (TEAP). The Radiopharmaceutical Science (RPS) reform began in 2022, with the aims of ensuring a flexible curriculum that can cope with changing technology and that training can feasibly be completed within 3 years.
Although a progressive assessment approach had been implemented in the original TEAP, many areas were not functioning well. The original TEAP lacked checkpoints to ensure progression, had issues with ‘over-assessment’, and did not focus enough on the learning functions of programmatic assessment, with too many assessments feeling ‘high-stakes’. Some skills and content were assessed on multiple occasions unnecessarily, while some skills required more observation and evaluation to ensure progressive judgements about the registrar’s proficiency.
We revised the RPS TEAP by adapting some of the latest findings from programmatic assessment implementations (1-3). The Australian Council for Educational Research worked closely with the lead of the TEAP and an expert group of radiopharmaceutical scientists to adapt programmatic assessment approaches to their context. The result was a revised 3-year TEAP program with a new evidentiary framework, including clearly delineated low-stakes and high-stakes assessment moments, checkpoints, and a suite of assessment types best suited to the learning outcomes. These assessment types included entrustment scale ratings for workplace-based assessment, presentations, formal and reflective reports, short-answer questions, and annotated records. Structured learning activities were also developed.
This work highlights the importance of adapting programmatic assessment to specific training contexts, embracing blurred boundaries between training and assessment, engaging supervisors, management and registrars in the importance of the changes, and importantly, the cultural change needed in thinking about assessment in allied medical education.
References (maximum three)
1. Roberts C, Khanna P, Bleasel J, et al. Student perspectives on programmatic assessment in a large medical programme: a critical realist analysis. Med Educ. 2022. doi:10.1111/medu.14807
2. Kinnear B, Warm EJ, Caretta-Weyer H, et al. Entrustment unpacked: aligning purposes, stakes, and processes to enhance learner assessment. Acad Med. 2021;96(7S):S56-S63. doi:10.1097/ACM.0000000000004108
3. Schut S, Heeneman S, Bierer B, Driessen E, van Tartwijk J, van der Vleuten C. Between trust and control: teachers’ assessment conceptualisations and relationships within programmatic assessment. Med Educ. 2020;54:528-537. doi:10.1111/medu.14075
Shelley Ross1
Kent G. Hecker2 and Todd Milford3
1 University of Alberta
2 University of Calgary
3 University of Victoria
Background:
In 2005, van der Vleuten and Schuwirth first introduced the concept of programmatic assessment in health professions education (HPE). Since then, programmatic assessment has been a productive HPE research area, particularly within the competency-based medical education (CBME) community. Programmatic assessment rests on several basic assumptions: a longitudinal perspective on the development of competence; regular meaningful feedback to learners; inclusion of workplace-based assessment; decision-making based on cumulative assessment data; and multiple points and methods of assessment data collection. However, our team noted an emerging trend in recent CBME literature suggesting diverging definitions of programmatic assessment. We systematically explored this phenomenon through a narrative review focused on how programmatic assessment is being described in CBME.
Summary of work:
We conducted a literature search of English-language peer-reviewed publications from 2005-2023. Initial search terms were “competency-based” and “assessment”; further searches added “programmatic assessment”. Abstracts were scanned to identify patterns in the data; data interpretation was carried out through a modified nominal group consensus process.
Results:
The initial search returned 1584 records; adding “programmatic assessment” reduced the results to 259 records. We found relative stability over time in the definition of “programmatic assessment”, with the notable exception of a decrease in the variety of methods and assessment tools being used.
Discussion:
Programmatic assessment continues to be a goal in CBME. Authors describe programs of assessment that generally reflect van der Vleuten and Schuwirth’s core assumptions. However, a trend was identified of decreased variety in methods of assessment, with an increasing number of articles describing programmatic assessment designed around a single assessment tool.
Conclusions:
There is both promise and peril in the current state of programmatic assessment in CBME.
Implications for future research:
A full scoping review of this topic is needed to more thoroughly explore how authors are defining and describing programmatic assessment in CBME.
References (maximum three)
1. van der Vleuten CP, Schuwirth LW. Assessing professional competence: from methods to programmes. Med Educ. 2005; 39: 309–17.
2. Bok HGJ, de Jong LH, O’Neill T, Hecker KG. Validity evidence for programmatic assessment in competency-based education. Perspect Med Educ. 2018; 7: 362-372. https://doi.org/10.1007/s40037-018-0481-2
3. Ross S, Hauer K, Wycliffe-Jones K, Hall AK, Molgaard L, Richardson D, Oswald A, Bhanji F. Key considerations in planning and designing programmatic assessment in competency-based medical education. Med Teach. 2021; 43(7): 758-764. DOI: 10.1080/0142159X.2021.1925099
3:00 pm
bree jones1,2
Claire Mustchin3 and Clare McNally2
1 Murdoch Children's Research Institute
2 Melbourne Dental School, The University of Melbourne
3 University of Melbourne
Background:
The Bachelor of Oral Health (BOH) is a clinically focused, preparation-for-practice degree. Accreditation by the Australian Dental Council (ADC) ensures graduates have met the professional competencies required to practise dentistry as Oral Health Therapists (OHTs) in Australia and New Zealand (1). In 2021 a review of the professional competencies coincided with the introduction of full-scope oral health therapy into the BOH at the Melbourne Dental School (MDS). This led to a curriculum restructure aligned to the competencies and the piloting of a programmatic approach to assessment.
Summary of Work:
The assessments were developed using the Ottawa Consensus Statement on the Principles of Programmatic Assessment (2), optimising and triangulating data for meaningful feedback and not making high-stakes decisions on a single data point. The assessment types were streamlined across the program in accordance with best practice dental assessments (3). We focused on reflection, digital identity development, task integration, feedback literacy, authenticity, and collaborative learning, with an ePortfolio mapped to the ADC professional competencies as the main assessment piece.
Results:
This assessment approach ensures work-ready practitioners with clearly defined scopes of practice and strong skills in reflexive practice, and has facilitated the continual improvement and development of skills over time, rather than in isolated, high-stakes assessments.
Conclusions:
Despite the small scale of this pilot program, the positive response from students and staff has led to a whole-of-School adoption of a programmatic approach as part of the MDS-25 curriculum redesign.
Take-home Messages:
1. Adoption of a programmatic approach to assessment requires significant buy-in from staff
2. Professional identity formation and the ability to demonstrate scope of practice are key advantages of the new MDS assessment program
3. Collaboration within and from outside the institution is critical to the success of a large-scale assessment overhaul.
References (maximum three)
(1) Australian Dental Council (2022) Professional competencies of the newly qualified dental practitioner, www.adc.org.au
(2) Heeneman, S., De Jong, L. H., Dawson, L. J., Wilkinson, T. J., Ryan, A., Tait, G. R., Rice, N., Torre, D., Freeman, A., & Van Der Vleuten, C. P. M. (2021). Ottawa 2020 consensus statement for programmatic assessment – 1. Agreement on the principles. Medical Teacher, 43(10), 1139–1148. https://doi.org/10.1080/0142159x.2021.1957088
(3) Williams, J.C., Baillie, S., Rhind, S.M., Warman, S. (2015) A guide to assessment in dental education. The University of Bristol. Creative Commons Attribution 4.0 International Licence.
3:15 pm
David Rojas1
Glendon Tait1 and Mahan Kulasegaram1
1 University of Toronto
Background
Programmatic assessment is an increasingly adopted paradigm that utilizes frequent low-stakes assessments to better track growth in medical students' knowledge and competence across time and to inform coaching and high-stakes decisions (1). However, the ability of these assessment tools to identify students in difficulty early has not been fully explored.
Summary of work
At the MD Program, University of Toronto, we studied the ability of longitudinal (all 4 years) Progress Test (PT) data and non-mandatory pre-clerkship assessment data to determine whether a student would fail the licensing exam. We conducted a sensitivity analysis (2) using the PT data from the cohorts that graduated in 2020 and 2021. We also developed a Machine Learning (ML) model (3) using quantitative and qualitative data from non-mandatory pre-clerkship assessments from the 2021 cohort.
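The abstract does not describe the ML model's internals, but reference (3) points to k-fold cross-validation as the evaluation approach. As a hedged sketch under that assumption, the code below shows the fold logic with a deliberately trivial threshold classifier on invented toy data; the real study's model and data are not reproduced here.

```python
import random

def kfold_indices(n, k, seed=0):
    """Split n sample indices into k disjoint (train, test) folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

def cv_accuracy(scores, labels, k=5):
    """Mean k-fold accuracy of a toy threshold classifier.

    The 'model' here is a stand-in, not the study's ML model: it learns a
    cut-off as the midpoint between the class means on the training fold,
    then flags a student as 'in difficulty' (label 1) below that cut-off.
    """
    accs = []
    for train, test in kfold_indices(len(scores), k):
        at_risk = [scores[i] for i in train if labels[i] == 1]
        ok = [scores[i] for i in train if labels[i] == 0]
        cut = (sum(at_risk) / len(at_risk) + sum(ok) / len(ok)) / 2
        correct = sum((scores[i] < cut) == (labels[i] == 1) for i in test)
        accs.append(correct / len(test))
    return sum(accs) / len(accs)

# Invented toy data: lower scores loosely associated with difficulty.
random.seed(1)
scores = [random.gauss(0.4, 0.1) for _ in range(30)] + \
         [random.gauss(0.7, 0.1) for _ in range(70)]
labels = [1] * 30 + [0] * 70
print(f"5-fold CV accuracy: {cv_accuracy(scores, labels):.2f}")
```

Each sample appears in exactly one test fold, so the averaged accuracy estimates out-of-sample performance without holding out a separate validation cohort.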
Results
The sensitivity analysis (2) of longitudinal PT data showed specificity (the proportion of students who would pass being correctly classified as such) above 92.5%, while the Negative Predictive Value (the percentage of students classified as likely to pass who actually passed the licensing exam) was above 99%. PT performance at the end of year 2 or the beginning of year 3 could help identify students at risk of failing the licensing exam.
The ML model (3) developed using pre-clerkship assessment data showed an accuracy of 82.33% and an ability to detect students in difficulty of 86.39%.
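The specificity and NPV figures above follow directly from confusion-matrix counts. As a minimal sketch (the counts below are invented for illustration and are not the study's cohort data):

```python
def confusion_metrics(tp, fp, tn, fn):
    """Classification metrics from confusion-matrix counts.

    The 'positive' class is a student at risk of failing the licensing
    exam, so a negative prediction means 'would pass'.
    """
    return {
        "sensitivity": tp / (tp + fn),   # at-risk students correctly flagged
        "specificity": tn / (tn + fp),   # passing students correctly classified
        "npv": tn / (tn + fn),           # 'would pass' calls that were right
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Invented illustrative counts only:
m = confusion_metrics(tp=9, fp=14, tn=186, fn=1)
print(f"specificity={m['specificity']:.3f}, NPV={m['npv']:.3f}")
# → specificity=0.930, NPV=0.995
```

Note that with many more passers than failers, NPV stays high almost automatically, which is why the abstract reports specificity alongside it.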
Discussion
Our work supports the use of PTs to inform students' learning process, while also showing that technologically supported methods (ML) can offer similar levels of accuracy by combining quantitative and qualitative data.
Conclusions
We have shown that two different methodologies, using different assessment variables, could offer reliable analysis of student performance for early identification of students in difficulty.
Take-home messages
Organically generated data points and technologically supported solutions (ML) are reliable resources for the early identification of students in difficulty.
References (maximum three)
1. Heeneman S, de Jong LH, Dawson L, Wilkinson TJ, Ryan A, Tait GR, Rice N, Torre D, Freeman A, van der Vleuten CPM. Ottawa 2020 consensus statement for programmatic assessment 1: agreement on principles. Med Teach. 2021. doi:10.1080/0142159X.2021.1957088
2. Monaghan TF, Rahman SN, Agudelo CW, Wein AJ, Lazar JM, Everaert K, Dmochowski RR. Foundational statistical principles in medical research: sensitivity, specificity, positive predictive value, and negative predictive value. Medicina. 2021;57(5):503. doi:10.3390/medicina57050503
3. Anguita D, Ghelardoni L, Ghio A, Oneto L, Ridella S. The 'K' in K-fold cross validation. In: ESANN; 2012. p. 441-446.