Formative assessment (FA) is an effective educational approach for optimising student learning and is considered a promising avenue for assessment within physical education (PE). Nevertheless, implementing FA is a complex and demanding task for in-service PE teachers, who often lack formal training on this topic. To better support PE teachers in embedding FA in their practice, we need better insight into teachers’ experiences while designing and implementing formative strategies. However, knowledge on this topic is limited, especially within PE. Therefore, this study examined the experiences of 15 PE teachers who participated in an 18-month professional development programme. Teachers designed and implemented various formative activities within their PE lessons, and their experiences were investigated through logbook entries and focus groups. Findings indicated various positive experiences, such as increased transparency in learning outcomes and success criteria for students as well as increased student involvement, but also revealed complexities, such as shifting teacher roles and insufficient feedback literacy among students. Overall, the findings of this study underscore the importance of a sustained, collaborative, and supported approach to implementing FA.
DOCUMENT
A growing number of higher education programmes in the Netherlands have implemented programmatic assessment. Programmatic assessment is an assessment concept in which the formative and summative functions of assessment are intertwined. Although there is consensus about the theoretical principles of programmatic assessment, programmes make various specific design choices that fit their own context. In this factsheet we give insight into the design choices Dutch higher education programmes make when implementing programmatic assessment.
DOCUMENT
Assessment in higher education (HE) is often focused on concluding modules with one or more tests that students need to pass. As a result, both students and teachers are primarily concerned with the summative function of assessment: information from tests is used to make pass/fail decisions about students. In recent years, increasing attention has been paid to the formative function of assessment and the focus has shifted towards how assessment can stimulate learning. However, this also leads to a search for balance between both functions of assessment. Programmatic assessment (PA) is an assessment concept in which the intertwining of both functions is embraced to strike a new balance. A growing number of higher education programmes have implemented PA. Although there is consensus about the theoretical principles that form the basis for the design of PA, programmes make various specific design choices based on these principles, fitting their own context. This paper provides insight into the design choices that programmes make when implementing PA and into the considerations that play a role in making these design choices. Such an overview is important for research purposes because it creates a framework for investigating the effects of different design choices within PA.
DOCUMENT
Earlier research argues that educational programmes based on social cognitive theory are successful in improving students’ self-efficacy. Focusing on several formative assessment characteristics, this qualitative research studies in depth how student teachers’ assessment experiences contribute to their self-efficacy. We interviewed 15 second-year student teachers enrolled in a competence-based teacher education programme. Thematic content analysis results reveal that the assessment characteristics ‘authenticity’ and ‘feedback’ exert a positive influence on student teachers’ self-efficacy during all phases of the portfolio competence assessment. The results provide a fine-grained view of several types of self-efficacy information connected with these assessment phases.
DOCUMENT
In two projects I have experimented with students designing their own assessment. One project was part of a minor with only a few participants, which made it suitable for the experiment. The other was a regular course with approximately 50 students in which the assessment form was partially free. I have run this project for more than 10 years now. In this project, every project group of students gets the assignment to let the other students experience what they have learned in their project. We would like to discuss how we can give students the opportunity to design their own assessment while still measuring the intended learning outcomes, and how we can learn from different cultures (between programmes, faculties, universities and countries) in facilitating students to design their own assessment. Moreover, we think that by giving students more control over their own learning, we will challenge them to focus on thriving and not just surviving.
DOCUMENT
Objective: In the past decade, several authors have advocated that formative assessment programmes have an impact on teachers’ knowledge. Consequently, various requirements have been proposed in the literature for the design of these programmes. Only a few studies, however, have focused on a direct comparison between programmes with respect to differences observed in their effect on teachers’ knowledge. Therefore, in this study we explored the impact of three formative assessment programmes on teachers’ knowledge about supporting students’ reflection. Methods: Our study was carried out in the domain of vocational nursing education. Teachers were assigned to an expertise-based assessment programme, a self-assessment combined with collegial feedback programme, or a negotiated assessment programme. We scored the verbal transcriptions of teachers’ responses to video vignette interviews in order to measure their knowledge in a pre- and post-test. Multilevel regression analyses were performed to investigate differences in teachers’ knowledge between the three programmes on the post-test; potential moderating effects of pre-test scores and of contextual and individual factors were controlled for. Findings: The knowledge of teachers participating in the expertise-based assessment programme was significantly higher than that of teachers participating in the self-assessment combined with collegial feedback programme. Furthermore, the findings indicate that for professional learning, not only the approach to formative assessment is an important variable, but also the extent to which (a) teachers are intrinsically motivated and (b) they experience a high degree of collegiality at their school.
MULTIFILE
In programmatic assessment (PA), an arrangement of different assessment methods is deliberately designed across the entire curriculum, combined and planned to support both robust decision-making and student learning. In health sciences education, evidence about the merits and pitfalls of PA is emerging. Although there is consensus about the theoretical principles of PA, programmes make diverse design choices based on these principles to implement PA in practice, fitting their own contexts. We therefore need a better understanding of how the PA principles are implemented across contexts—within and beyond health sciences education. In this study, interviews were conducted with teachers/curriculum designers representing nine different programmes in diverse professional domains. Research questions focused on: (1) design choices made, (2) whether these design choices adhere to PA principles, (3) student and teacher experiences in practice, and (4) context-specific differences between the programmes. A wide range of design choices were reported, largely adhering to PA principles but differing across cases due to contextual alignment. Design choices reported by almost all programmes include a backbone of learning outcomes, data points connected to this backbone in a longitudinal design allowing uptake of feedback, intermediate reflective meetings, and decision-making by a committee based on a multitude of data points and involving multi-stage procedures. Contextual design choices were made to align the design with the professional domain and practical feasibility. Further research is needed, in particular with regard to intermediate-stakes decisions.
LINK
From the article: "The purpose of this paper is to design a rubric instrument for assessing oral presentation performance in higher education and to test its validity with an expert group. Design/methodology/approach: This study, using mixed methods, focusses on: designing a rubric by identifying assessment instruments in previous presentation research and implementing essential design characteristics in a preliminarily developed rubric; and testing the validity of the constructed instrument with an expert group of higher educational professionals (n=38). Findings: The result of this study is a validated rubric instrument consisting of 11 presentation criteria, their related levels in performance, and a five-point scoring scale. These adopted criteria correspond to the widely accepted main criteria for presentations, in both literature and educational practice, regarding aspects such as content of the presentation, structure of the presentation, interaction with the audience and presentation delivery. Practical implications: Implications for the use of the rubric instrument in educational practice refer to the extent to which the identified criteria should be adapted to the requirements of presenting in a certain domain, and whether the amount and complexity of the information in the rubric, as criteria, levels and scales, can be used in an adequate manner within formative assessment processes. Originality/value: This instrument offers the opportunity to formatively assess students’ oral presentation performance, since rubrics explicate criteria and expectations. Furthermore, such an instrument also facilitates feedback and self-assessment processes. Finally, the rubric resulting from this study could be used in future quasi-experimental studies to measure students’ development in presentation performance in a pre- and post-test situation."
LINK
In response to dissatisfaction with testing cultures in higher education, programmatic assessment has been introduced as an alternative approach. Programmatic assessment involves the longitudinal collection of data points about student learning, aimed at continuous monitoring and feedback. High-stakes decisions are based on a multitude of data points, involving aggregation, saturation and group decision-making. Evidence about the value of programmatic assessment is emerging in health sciences education. However, research also shows that students find it difficult to take an active role in the assessment process and to seek feedback. Lower-performing students are underrepresented in research on programmatic assessment, which until now has mainly focused on health sciences education. This study therefore explored low- and high-performing students’ experiences with learning and decision-making in programmatic assessment in relation to their feedback-seeking behaviour in a Communication Sciences programme. In total, 55 students filled out a questionnaire about their perceptions of programmatic assessment, their feedback-seeking behaviour and their learning performance. Low-performing and high-performing students were then selected and interviewed. Several designable elements of programmatic assessment were distinguished that promote or hinder students’ feedback-seeking behaviour, learning and uptake of feedback.
LINK