A growing number of higher education programmes in the Netherlands have implemented programmatic assessment. Programmatic assessment is an assessment concept in which the formative and summative functions of assessment are intertwined. Although there is consensus about the theoretical principles of programmatic assessment, programmes make various specific design choices that fit their own context. In this factsheet we give insight into the design choices Dutch higher education programmes make when implementing programmatic assessment.
DOCUMENT
Assessment in higher education (HE) is often focused on concluding modules with one or more tests that students need to pass. As a result, both students and teachers are primarily concerned with the summative function of assessment: information from tests is used to make pass/fail decisions about students. In recent years, increasing attention has been paid to the formative function of assessment, and the focus has shifted towards how assessment can stimulate learning. However, this also leads to a search for balance between the two functions of assessment. Programmatic assessment (PA) is an assessment concept that embraces the intertwining of these functions to strike a new balance. A growing number of higher education programmes have implemented PA. Although there is consensus about the theoretical principles that form the basis for the design of PA, programmes make various specific design choices based on these principles, fitting their own context. This paper provides insight into the design choices that programmes make when implementing PA and into the considerations that play a role in making these choices. Such an overview is important for research purposes because it creates a framework for investigating the effects of different design choices within PA.
DOCUMENT
In programmatic assessment (PA), an arrangement of different assessment methods is deliberately designed across the entire curriculum, combined and planned to support both robust decision-making and student learning. In health sciences education, evidence about the merits and pitfalls of PA is emerging. Although there is consensus about the theoretical principles of PA, programs make diverse design choices based on these principles to implement PA in practice, fitting their own contexts. We therefore need a better understanding of how the PA principles are implemented across contexts, within and beyond health sciences education. In this study, interviews were conducted with teachers and curriculum designers representing nine different programs in diverse professional domains. Research questions focused on: (1) the design choices made, (2) whether these design choices adhere to PA principles, (3) student and teacher experiences in practice, and (4) context-specific differences between the programs. A wide range of design choices was reported, largely adhering to PA principles but differing across cases due to contextual alignment. Design choices reported by almost all programs include a backbone of learning outcomes, data-points connected to this backbone in a longitudinal design that allows uptake of feedback, intermediate reflective meetings, and decision-making by a committee, based on a multitude of data-points and involving multi-stage procedures. Contextual design choices were made to align the design with the professional domain and with practical feasibility. Further research is needed, in particular with regard to intermediate-stakes decisions.
LINK
Formative assessment (FA) is an effective educational approach for optimising student learning and is considered a promising avenue for assessment within physical education (PE). Nevertheless, implementing FA is a complex and demanding task for in-service PE teachers, who often lack formal training on this topic. To better support PE teachers in implementing FA in their practice, we need better insight into teachers’ experiences while designing and implementing formative strategies. However, knowledge on this topic is limited, especially within PE. Therefore, this study examined the experiences of 15 PE teachers who participated in an 18-month professional development programme. Teachers designed and implemented various formative activities within their PE lessons, while experiences were investigated through logbook entries and focus groups. Findings indicated various positive experiences, such as increased transparency in learning outcomes and success criteria for students as well as increased student involvement, but also revealed complexities, such as shifting teacher roles and insufficient feedback literacy among students. Overall, the findings of this study underscore the importance of a sustained, collaborative, and supported approach to implementing FA.
DOCUMENT
Abstract Purpose The primary aim of this study was to investigate the effect of including the Dutch National Pharmacotherapy Assessment (DNPA) in the medical curriculum on the level and development of prescribing knowledge and skills of junior doctors. The secondary aim was to evaluate the relationship between the curriculum type and the prescribing competence of junior doctors. Methods We re-analysed the data of a longitudinal study conducted in 2016 involving recently graduated junior doctors from 11 medical schools across the Netherlands and Belgium. Participants completed three assessments during the first year after graduation (around the time of graduation (±4 weeks), 6 months after graduation, and 1 year after graduation), each of which contained 35 multiple choice questions (MCQs) assessing knowledge and three clinical case scenarios assessing skills. Only one medical school used the DNPA in its medical curriculum; the other medical schools used conventional means to assess prescribing knowledge and skills. Five medical schools were classified as providing solely theoretical clinical pharmacology and therapeutics (CPT) education; the others provided both theoretical and practical CPT education (mixed curriculum). Results Of the 1584 invited junior doctors, 556 (35.1%) participated, 326 (58.6%) completed the MCQs and 325 (58.5%) the clinical case scenarios in all three assessments. Junior doctors whose medical curriculum included the DNPA had higher knowledge scores than other junior doctors (76.7% [SD 12.5] vs. 67.8% [SD 12.6], 81.8% [SD 11.1] vs. 76.1% [SD 11.1], and 77.0% [SD 12.1] vs. 70.6% [SD 14.0]; p<0.05 for all three assessments). There was no difference in skills scores at the moment of graduation (p=0.110), but after 6 and 12 months junior doctors whose medical curriculum included the DNPA had higher skills scores (both p<0.001).
Junior doctors educated with a mixed curriculum had significantly higher scores for both knowledge and skills than did junior doctors educated with a theoretical curriculum (p<0.05 in all assessments). Conclusion Our findings suggest that the inclusion of the knowledge-focused DNPA in the medical curriculum improves the prescribing knowledge, but not the skills, of junior doctors at the moment of graduation. However, after 6 and 12 months, both knowledge and skills were higher in the junior doctors whose medical curriculum included the DNPA. A curriculum that provides both theoretical and practical education seems to improve both prescribing knowledge and skills relative to a solely theoretical curriculum.
MULTIFILE
The main purpose of the research was the development and testing of an assessment tool for the grading of Dutch students' performance in information problem solving during their study tasks. Scholarly literature suggests that an analytical scoring rubric would be a good tool for this. Described in this article are the construction process of such a scoring rubric and the evaluation of the prototype, based on an assessment of its usefulness in educational practice, its efficiency in use, and its reliability. To test this last point, the rubric was used by two professors when they graded the same set of student products. Inter-rater reliability of the professors' grading was estimated by calculating absolute agreement of the scores, adjacent agreement, and decision consistency. An English version of the scoring rubric has been added to this journal article as an appendix. This rubric can be used in various discipline-based courses in Higher Education in which information problem solving is one of the learning activities. After evaluating the prototype, it was concluded that the rubric is particularly useful to graders as it keeps them focussed on relevant aspects during the grading process. If the rubric is used for summative evaluation of credit-bearing student work, it is strongly recommended to use the scoring scheme as a whole and to let the grading work be done by at least two different markers. [Jos van Helvoort & the Chartered Institute of Library and Information Professionals-Information Literacy Group]
DOCUMENT
Background: Assessment can have various functions, and is an important impetus for student learning. For assessment to be effective, it should be aligned with curriculum goals and of sufficient quality. Although it has been suggested that assessment quality in physical education (PE) is suboptimal, research into actual assessment practices has been relatively scarce. Purpose: The goals of the present study were to determine the quality of assessment, teachers’ views on the functions of assessment, the alignment of assessment with learning goals, and the actual assessment practices in secondary PE in the Netherlands. Participants and setting: A total of 260 PE teachers from different schools in the Netherlands filled out an online Physical Education Assessment Questionnaire (PEAQ) on behalf of their school. Data collection: The online questionnaire (PEAQ) contained the following sections: quality of assessment, intended functions of assessment, assessment practices, and intended goals of PE. Data analysis: Percentages of agreement were calculated for all items. In addition, assessment quality items were recoded into a numerical value between 1 and 5, reported as mean ± SD. Cronbach’s alpha was calculated for each predefined quality aspect of the PEAQ, and for assessment quality as a whole. Findings: Mean assessment quality (±SD) was 3.6 ± 0.6. With regard to the function of assessment, most PE teachers indicated that they intended to use assessment as a means of supporting the students’ learning process (formative function). At the same time, the majority of schools take PE grades into account when determining whether a student may enter the next year (summative function). With regard to assessment practices, a large variety of factors are included when grading, and observation is by far the most widely applied assessment technique. A minority of PE teachers grade students without predetermined assessment criteria, and usually criteria are identical for all students.
There is an apparent discrepancy between reported PE goals and assessment practices; although increasing students’ fitness levels is the least important goal of PE lessons according to the PE teachers, 81% report that fitness is one of the factors being judged. Conversely, while 94% consider gaining knowledge about physical activity and sports to be one of the goals of PE, only 34% actually assess knowledge. Conclusions: Assessment in Dutch PE is of moderate quality. The findings further suggest that PE teachers consider assessment for learning important but that their assessment practices are not generally in line with this view. Furthermore, there seems to be a lack of alignment between intended learning outcomes and what is actually being valued and assessed. We believe that these results call for a concerted effort from PE departments, school boards, and the education inspectorate to scrutinise existing assessment practices, and to work together to optimise PE assessment.
LINK
In today’s foreign language (FL) education, teachers universally recognise the importance of fostering students’ ability to communicate in the target language. However, current assessments often do not (sufficiently) evaluate this ability. In her dissertation, Charline Rouffet aims to gather insight into the potential of assessments to steer FL teaching practices. Communicative learning objectives FL teachers fully support the communicative learning objectives formulated at national level and embrace the principles of communicative language teaching. Yet, assessments instead primarily focus on formal language knowledge in isolation (e.g., grammar rules), disconnected from real-world communicative contexts. This misalignment between assessment practices and communicative objectives hampers effective FL teaching. CBA toolbox In this design-based PhD research project, tools for developing communicative classroom-based assessment (CBA) programmes were designed and implemented in practice, in close collaboration with FL teachers. Rouffet's dissertation consists of multiple studies, in which the current challenges of FL education are addressed and the usage of the CBA toolbox is investigated. Findings reveal that assessing FL competencies in a more communicative way can transform teaching practices, placing communicative abilities at the heart of FL education.
DOCUMENT
Why a position statement on Assessment in Physical Education? The purpose of this AIESEP Position Statement on Assessment in Physical Education (PE) is fourfold: • To advocate internationally for the importance of assessment practices as central to providing meaningful, relevant and worthwhile physical education; • To advise the field of PE about assessment-related concepts informed by research and contemporary practice; • To identify pressing research questions and avenues for new research in the area of PE assessment; • To provide a supporting rationale for colleagues who wish to apply for research funds to address questions about PE assessment or who have opportunities to work with or influence policy makers. The main target groups for this position statement are PE teachers, PE pre-service teachers, PE curriculum officers, PE teacher educators, PE researchers, PE administrators and PE policy makers. How was this position statement created? The AIESEP specialist seminar ‘Future Directions in PE Assessment’ was held on October 18–20, 2018, at Fontys University of Applied Sciences in Eindhoven, the Netherlands. The seminar aimed to bring together leading scholars in the field to present and discuss ‘evidence-informed’ views on various topics around PE assessment. It brought together 71 experts from 20 countries (see appendix 2) to share research on PE assessment via keynote lectures and research presentations and to discuss assessment-related issues in interactive sessions. Input from this meeting informed a first draft of the statement. This first draft was sent to all participants of the specialist seminar for feedback, from which a second draft was created. This draft was presented at the AIESEP International Conference 2019 in Garden City, New York, after which further feedback was collected from participants both on site and through an online survey. The main contributors to the writing of the position statement are mentioned in appendix 1.
Approval was granted by the AIESEP Board on May 7th, 2020. Largely in keeping with the main themes of the AIESEP specialist seminar ‘Future Directions in PE Assessment’, this Position Statement is divided into the following sections: Assessment Literacy; Accountability & Policy; Instructional Alignment; Assessment for Learning; Physical Education Teacher Education (PETE) and Continuing Professional Development; Digital Technology in PE Assessment. These sections are preceded by a brief overview of research data on PE. The statement concludes with directions for future research.
DOCUMENT