Presentation Description
Janica Jamieson1,2
Claire Palermo2, Margaret Hay2, Rachel Bacon3, Janna Lutze4 and Simone Gibson2
1 Edith Cowan University
2 Monash University
3 University of Canberra
4 Discipline of Nutrition & Dietetics, University of Wollongong
Background:
Programmatic assessment is an increasingly popular yet complex education initiative whose implementation is challenged by contextual parameters (Torre et al., 2021), necessitating robust evaluation to support the transfer of theory into practice (Haji et al., 2013). Contribution analysis (CA), a theory-based evaluation framework, determines the contribution an intervention makes to outcomes. CA enables evaluation of complex interventions in dynamic, authentic settings (Mayne, 2012), making it well suited to programmatic assessment. We applied the six steps of CA to evaluate programmatic assessment.
Summary of work:
(1) Cause-effect questions and (2) a theory of change (ToC) were developed. (3) A qualitative study with programmatic assessment stakeholders (faculty n=19, graduates n=15, supervisors n=32) from four Australian dietetic programs provided evaluation data. These data were (4) assembled into contribution claims and a contribution story. (5) Additional data were gathered from the same stakeholders and the literature to (6) finalise the ToC and contribution story.
Results:
Leaders initiated and drove the development of programmatic assessment through a design team who applied the principles as guides and used compromise to navigate challenges, leading to a contextually responsive programmatic assessment. All users needed training, with fit-for-purpose tools implemented within an ideologically aligned assessment system. Students became leaders, supervisors became teachers, and faculty became facilitators, working collaboratively as a learning team with a growth mindset. An assessment team used the congruency of collated low-stakes data to inform high-stakes decisions. Causal pathways coalesced to create collaborative, individualised learning environments and psychologically safe remediation, enable credible high-stakes decisions, and prepare graduates for practice. Ultimately, people experienced less stress and care recipients benefited.
Discussion:
Successful programmatic assessment requires leaders to bring together capable people who enact their role responsibilities as intended.
Conclusions:
CA revealed important causal links underpinning, and leading to, programmatic assessment outcomes.
Implications:
Leverage and risk points are illuminated for implementers to facilitate individualised and successful manifestations of programmatic assessment across diverse settings.
References
Haji, F., Morin, M-P., & Parker, K. (2013). Rethinking programme evaluation in health professions education: beyond ‘did it work?’ Medical Education, 47(4), 342-351.
Mayne, J. (2012). Contribution analysis: coming of age? Evaluation, 18(3), 270-280.
Torre, D., Rice, N. E., Ryan, A., Bok, H., Dawson, L. J., Bierer, B., Wilkinson, W. J., Tait, G. R., Laughlin, T., Veerapen, K., Heeneman, S., Freeman, A., & van der Vleuten, C. (2021). Ottawa 2020 consensus statements for programmatic assessment 2: implementation and practice. Medical Teacher, 43(10), 1149-1160.