A growing number of higher education programmes in the Netherlands have implemented programmatic assessment. Programmatic assessment is an assessment concept in which the formative and summative functions of assessment are intertwined. Although there is consensus about the theoretical principles of programmatic assessment, programmes make various specific design choices that fit their own context. This factsheet gives insight into the design choices Dutch higher education programmes make when implementing programmatic assessment.
In programmatic assessment (PA), an arrangement of different assessment methods is deliberately designed across the entire curriculum, combined and planned to support both robust decision-making and student learning. In health sciences education, evidence about the merits and pitfalls of PA is emerging. Although there is consensus about the theoretical principles of PA, programs make diverse design choices based on these principles to implement PA in practice, fitting their own contexts. We therefore need a better understanding of how the PA principles are implemented across contexts, within and beyond health sciences education. In this study, interviews were conducted with teachers/curriculum designers representing nine different programs in diverse professional domains. Research questions focused on: (1) design choices made, (2) whether these design choices adhere to PA principles, (3) student and teacher experiences in practice, and (4) context-specific differences between the programs. A wide range of design choices was reported, largely adhering to PA principles but differing across cases due to contextual alignment. Design choices reported by almost all programs include a backbone of learning outcomes, data-points connected to this backbone in a longitudinal design allowing uptake of feedback, intermediate reflective meetings, and decision-making by a committee based on a multitude of data-points and involving multi-stage procedures. Contextual design choices were made to align the design with the professional domain and with practical feasibility. Further research is needed, in particular with regard to intermediate-stakes decisions.
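For illustration only (not taken from the study), the sketch below shows one way the reported "backbone" could be represented in code: learning outcomes with longitudinally collected data-points attached to them, so that the evidence per outcome can be reviewed in intermediate reflective meetings. All class names and fields are hypothetical.

```python
# Hypothetical sketch of a "backbone" of learning outcomes with longitudinally
# collected data-points attached to it. Names and fields are illustrative only.
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List, Optional


@dataclass
class DataPoint:
    """One low-stakes assessment moment (e.g. an assignment, observation or test)."""
    outcome_id: str                 # learning outcome this data-point informs
    collected_on: date
    feedback: str                   # narrative feedback intended for uptake by the student
    score: Optional[float] = None   # optional quantitative component


@dataclass
class LearningOutcome:
    outcome_id: str
    description: str
    data_points: List[DataPoint] = field(default_factory=list)


@dataclass
class Portfolio:
    """A student's longitudinal collection of data-points, organised along the backbone."""
    student: str
    outcomes: Dict[str, LearningOutcome]

    def add(self, dp: DataPoint) -> None:
        self.outcomes[dp.outcome_id].data_points.append(dp)

    def evidence_for(self, outcome_id: str) -> List[DataPoint]:
        """All data-points for one outcome, ordered in time (supports reflective meetings)."""
        return sorted(self.outcomes[outcome_id].data_points, key=lambda d: d.collected_on)
```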
In response to dissatisfaction with testing cultures in higher education, programmatic assessment has been introduced as an alternative approach. Programmatic assessment involves the longitudinal collection of data points about student learning, aimed at continuous monitoring and feedback. High-stakes decisions are based on a multitude of data points, involving aggregation, saturation and group decision-making. Evidence about the value of programmatic assessment is emerging in health sciences education. However, research also shows that students find it difficult to take an active role in the assessment process and seek feedback. Lower-performing students are underrepresented in research on programmatic assessment, which until now has mainly focused on health sciences education. This study therefore explored low- and high-performing students' experiences with learning and decision-making in programmatic assessment in relation to their feedback-seeking behaviour in a Communication Sciences program. In total, 55 students filled out a questionnaire about their perceptions of programmatic assessment, their feedback-seeking behaviour and learning performance. Low-performing and high-performing students were selected and interviewed. Several designable elements of programmatic assessment were distinguished that promote or hinder students' feedback-seeking behaviour, learning and uptake of feedback.
Assessment in higher education (HE) is often focused on concluding modules with one or more tests that students need to pass. As a result, both students and teachers are primarily concerned with the summative function of assessment: information from tests is used to make pass/fail decisions about students. In recent years, increasing attention has been paid to the formative function of assessment and the focus has shifted towards how assessment can stimulate learning. However, this also leads to a search for balance between both functions of assessment. Programmatic assessment (PA) is an assessment concept in which the intertwining of both functions is embraced to strike a new balance. A growing number of higher education programmes have implemented PA. Although there is consensus about the theoretical principles that form the basis for the design of PA, programmes make various specific design choices based on these principles, fitting with their own context. This paper provides insight into the design choices that programmes make when implementing PA and into the considerations that play a role in making these design choices. Such an overview is important for research purposes because it creates a framework for investigating the effects of different design choices within PA.
At the conference of the European Association for Research on Learning and Instruction (EARLI), Niels Bohnen and Suzan van Ierland presented their research on enhancing student self-regulation through programmatic assessment. The aim of the current study is to discover to what degree studying in a course programme based on programmatic assessment enhances students' self-regulation compared with students in a traditional course programme. The results of the study could provide guidelines for the implementation of self-directed learning within course programmes at HAS green academy, in particular aimed at programmatic assessment.
A model for programmatic assessment in action is proposed that optimizes assessment for learning as well as decision-making on learner progress. It is based on a set of assessment principles that are interpreted from empirical research. The model specifies cycles of training, assessment and learner support activities that are completed by intermediate and final moments of evaluation on aggregated data-points. Essential is that individual data-points are maximized for their learning and feedback value, whereas high-stakes decisions are based on the aggregation of many data-points. Expert judgment plays an important role in the program. Fundamental is the notion of sampling and bias reduction for dealing with subjectivity. Bias reduction is sought in procedural assessment strategies that are derived from qualitative research criteria. A number of challenges and opportunities are discussed around the proposed model. One of the virtues would be to move beyond the dominating psychometric discourse around individual instruments towards a systems approach of assessment design based on empirically grounded theory.
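A minimal sketch of the aggregation idea described above, under the assumption that data-points carry a simple numeric component. The threshold, pass level and recommendation labels are hypothetical, and the actual high-stakes decision remains with an expert committee; the function below only illustrates aggregation over many data-points rather than any single instrument.

```python
# Minimal, hypothetical sketch of aggregating many low-stakes data-points into
# a recommendation for a decision committee. Not the authors' model.
from statistics import mean

SATURATION_THRESHOLD = 10   # hypothetical minimum number of data-points
PASS_LEVEL = 5.5            # hypothetical aggregate pass level


def aggregate_recommendation(scores: list) -> str:
    """Turn many low-stakes scores into a recommendation for the committee.

    The committee, not this function, makes the actual high-stakes decision.
    """
    if len(scores) < SATURATION_THRESHOLD:
        return "insufficient data-points: defer decision, continue collecting evidence"
    aggregate = mean(scores)
    return "recommend pass" if aggregate >= PASS_LEVEL else "recommend committee review"


# Example: ten data-points averaging above the pass level yield a pass recommendation.
print(aggregate_recommendation([6.1, 7.0, 5.8, 6.5, 7.2, 6.0, 5.9, 6.8, 7.1, 6.4]))
```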
Purpose: The primary aim of this study was to investigate the effect of including the Dutch National Pharmacotherapy Assessment (DNPA) in the medical curriculum on the level and development of prescribing knowledge and skills of junior doctors. The secondary aim was to evaluate the relationship between the curriculum type and the prescribing competence of junior doctors.
Methods: We re-analysed the data of a longitudinal study conducted in 2016 involving recently graduated junior doctors from 11 medical schools across the Netherlands and Belgium. Participants completed three assessments during the first year after graduation (around graduation (±4 weeks), 6 months after graduation, and 1 year after graduation), each of which contained 35 multiple-choice questions (MCQs) assessing knowledge and three clinical case scenarios assessing skills. Only one medical school used the DNPA in its medical curriculum; the other medical schools used conventional means to assess prescribing knowledge and skills. Five medical schools were classified as providing solely theoretical clinical pharmacology and therapeutics (CPT) education; the others provided both theoretical and practical CPT education (mixed curriculum).
Results: Of the 1584 invited junior doctors, 556 (35.1%) participated, 326 (58.6%) completed the MCQs and 325 (58.5%) the clinical case scenarios in all three assessments. Junior doctors whose medical curriculum included the DNPA had higher knowledge scores than other junior doctors (76.7% [SD 12.5] vs. 67.8% [SD 12.6], 81.8% [SD 11.1] vs. 76.1% [SD 11.1], and 77.0% [SD 12.1] vs. 70.6% [SD 14.0]; p<0.05 for all three assessments, respectively). There was no difference in skills scores at the moment of graduation (p=0.110), but after 6 and 12 months junior doctors whose medical curriculum included the DNPA had higher skills scores (both p<0.001). Junior doctors educated with a mixed curriculum had significantly higher scores for both knowledge and skills than did junior doctors educated with a theoretical curriculum (p<0.05 in all assessments).
Conclusion: Our findings suggest that the inclusion of the knowledge-focused DNPA in the medical curriculum improves the prescribing knowledge, but not the skills, of junior doctors at the moment of graduation. However, after 6 and 12 months, both the knowledge and skills were higher in the junior doctors whose medical curriculum included the DNPA. A curriculum that provides both theoretical and practical education seems to improve both prescribing knowledge and skills relative to a solely theoretical curriculum.
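For illustration only: one way such a between-group comparison of knowledge scores could be computed. The study's actual statistical analysis is not described in this abstract and may well differ; the scores in the sketch are invented placeholders, not study data.

```python
# Illustrative Welch's t-test comparing two groups' knowledge scores.
# The values below are invented placeholders, not data from the DNPA study.
from scipy import stats

dnpa_scores = [78.0, 81.5, 74.2, 69.8, 85.1, 77.3]          # hypothetical scores (%)
conventional_scores = [65.4, 70.1, 72.8, 61.9, 68.5, 66.0]  # hypothetical scores (%)

t_stat, p_value = stats.ttest_ind(dnpa_scores, conventional_scores, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.3f}")
```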
Since 1999, Fontys, in particular its teacher education programmes in Tilburg and Eindhoven, has offered a dual (work-study) programme for educational staff in the Bve sector (vocational and adult education). Ten Regional Training Centres (ROCs) from Brabant and Limburg were involved in developing this programme. Students generally do not start the dual tracks with a blank slate: entering at a somewhat older age, they often already have a diverse background of schooling and work experience. This increases the need to establish in advance, through an intake assessment, which relevant previously acquired competences the candidates possess. Good intake assessments ultimately lead to a more flexible delivery of the programme, as the results are taken into account when shaping it. Several Fontys staff members have worked on constructing a reliable and valid set of instruments for intake assessment in the dual programme. This report brings their experiences together and places them in a broader context.
Software testing is a key focus of our Software Engineering programme. In the first (propaedeutic) year, students practise test-driven software development and learn to deliver software together with its tests. As part of the assessment, a performance assessment was developed that makes it possible to assess modelling, programming and testing in an integrated way. Students turn out to appreciate this new form of assessment. Within the context of competence-based education, this performance assessment is a valuable addition.
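A minimal sketch of the test-driven style referred to above; the course's actual language, domain and tooling are not specified in the abstract, so the function and tests below are purely illustrative: the tests are written first and the implementation is delivered together with them.

```python
# Illustrative test-driven example: tests and implementation delivered together.
# The domain logic (a pass/fail label for a grade) is hypothetical.
import unittest


def grade_label(score: float) -> str:
    """Map a numeric score (0-10) to a pass/fail label."""
    if not 0 <= score <= 10:
        raise ValueError("score must be between 0 and 10")
    return "pass" if score >= 5.5 else "fail"


class GradeLabelTest(unittest.TestCase):
    def test_pass_boundary(self):
        self.assertEqual(grade_label(5.5), "pass")

    def test_fail_below_boundary(self):
        self.assertEqual(grade_label(5.4), "fail")

    def test_rejects_out_of_range(self):
        with self.assertRaises(ValueError):
            grade_label(11)


if __name__ == "__main__":
    unittest.main()
```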