Teachers’ assessment literacy affects the quality of assessments and is, therefore, an essential part of teachers’ competence. Recent studies define assessment literacy as a dynamic, contextual and social construct, situated in practice and mediated by teachers’ identity and conceptions of assessment. This study provides a further elaboration of assessment literacy by exploring teachers’ conceptions of assessment literacy from a sociocultural perspective. Eleven online focus group interviews were conducted within the context of Dutch higher professional education between June and December 2020. A template analysis method was used to analyse the data. Seven interrelated aspects of assessment literacy were identified, namely ‘continuously developing assessment literacy’, ‘conscientious decision making’, ‘aligning’, ‘collaborating’, ‘discussing’, ‘improving and innovating’, and ‘coping with tensions’. This representation of assessment literacy, based on teachers’ conceptions, may guide teachers’ development of assessment literacy in practice.
Formative assessment (FA) is an effective educational approach for optimising student learning and is considered a promising avenue for assessment within physical education (PE). Nevertheless, implementing FA is a complex and demanding task for in-service PE teachers, who often lack formal training on this topic. To better support PE teachers in implementing FA in their practice, we need better insight into teachers’ experiences while designing and implementing formative strategies. However, knowledge on this topic is limited, especially within PE. Therefore, this study examined the experiences of 15 PE teachers who participated in an 18-month professional development programme. Teachers designed and implemented various formative activities within their PE lessons, while their experiences were investigated through logbook entries and focus groups. Findings indicated various positive experiences, such as increased transparency in learning outcomes and success criteria for students as well as increased student involvement, but also revealed complexities, such as shifting teacher roles and insufficient feedback literacy among students. Overall, the findings of this study underscore the importance of a sustained, collaborative, and supported approach to implementing FA.
This study aims to map VE teachers’ perceived importance of assessment competence. The study was conducted in the Netherlands among teachers of professional studies in Universities of Applied Sciences. A large-scale study was conducted to represent a broad population of teachers, including various vocational fields, roles, and situations, allowing for the exploration of differences across these contextual variables.
The main purpose of the research was the development and testing of an assessment tool for grading Dutch students' performance in information problem solving during their study tasks. Scholarly literature suggests that an analytical scoring rubric would be a good tool for this. This article describes the construction process of such a scoring rubric and the evaluation of the prototype, based on an assessment of its usefulness in educational practice, its efficiency in use and its reliability. To test this last point, the rubric was used by two professors who graded the same set of student products. Interrater reliability for the professors' gradings was estimated by calculating absolute agreement of the scores, adjacent agreement and decision consistency. An English version of the scoring rubric has been added to this journal article as an appendix. This rubric can be used in various discipline-based courses in higher education in which information problem solving is one of the learning activities. After evaluating the prototype, it was concluded that the rubric is particularly useful to graders as it keeps them focussed on relevant aspects during the grading process. If the rubric is used for summative evaluation of credit-bearing student work, it is strongly recommended to use the scoring scheme as a whole and to have the grading done by at least two different markers. [Jos van Helvoort & the Chartered Institute of Library and Information Professionals-Information Literacy Group]
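The two agreement measures named above can be illustrated with a minimal sketch. The scores below are invented for illustration only (a hypothetical 4-point rubric scale for ten student products), not data from the study, and the usual convention of counting a difference of at most one scale point as "adjacent" is assumed.

```python
def absolute_agreement(scores_a, scores_b):
    """Share of products on which both graders gave exactly the same score."""
    matches = sum(a == b for a, b in zip(scores_a, scores_b))
    return matches / len(scores_a)

def adjacent_agreement(scores_a, scores_b, tolerance=1):
    """Share of products on which the graders differed by at most `tolerance` scale points."""
    near = sum(abs(a - b) <= tolerance for a, b in zip(scores_a, scores_b))
    return near / len(scores_a)

# Hypothetical scores on a 4-point rubric scale for ten student products.
grader_1 = [3, 2, 4, 1, 3, 2, 4, 3, 2, 1]
grader_2 = [3, 3, 4, 1, 2, 2, 4, 3, 1, 1]

print(absolute_agreement(grader_1, grader_2))  # 0.7
print(adjacent_agreement(grader_1, grader_2))  # 1.0
```

In this invented example the graders agree exactly on 7 of 10 products but never differ by more than one point, which is why adjacent agreement is typically higher than absolute agreement.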
The aim of part 3 is the development of basic instruments to measure respondents’ resilience to disinformation. Cases and examples of disinformation used in the instruments will be taken from a COVID-19 context where applicable. People who are resilient to COVID-19 disinformation are assumed to be ‘media or information literate’. The construct to be measured with the instruments is therefore Media and Information Literacy, abbreviated as MIL. The instruments to be developed must be adaptable for different target groups (pupils, library staff and teachers). The basic instruments will therefore contain, for instance, scales that can be modified to measure the effectiveness of the train-the-trainer workshops as well as that of fake news workshops in secondary education. The final instruments will be used in the IO3 phase to make recommendations for improvement. Analyses of the results of those final assessments will be performed for each country separately. Because the basic instruments developed in output 1 are intended to be used as pre- and post-tests in output 2, the focus will be on the impact of the interventions. For evaluating the processes during the interventions and the participants’ experiences, extra instruments should be developed.
The aim of this research was to gain evidence-based arguments for the use of the scoring rubric for performance assessment of information literacy [1] in Dutch Universities of Applied Sciences. Faculty members from four different departments of The Hague University were interviewed about the ways in which they use the scoring rubric and their reasons for doing so. A fifth lecturer answered the main question by email. The topic list, which was used as a guide for the interviews, was based on a subject analysis of the scholarly literature on rubric use. Four of the five respondents used (parts of) the rubric to measure students’ performance in information use, but none of them used the rubric as it is. The faculty staff reported that the rubric helped them to improve the grading criteria for existing assignments. Only one respondent used the rubric itself, and this lecturer extended it with some new criteria on writing skills. It was also discovered that the rubric is used not only for grading but also for the development of new learning content on research skills. [The version published here is the ‘accepted paper’ of the original published at www.springerlink.com. The official publication can be downloaded at http://link.springer.com/chapter/10.1007/978-3-319-03919-0_58]
Why a position statement on Assessment in Physical Education? The purpose of this AIESEP Position Statement on Assessment in Physical Education (PE) is fourfold:
• To advocate internationally for the importance of assessment practices as central to providing meaningful, relevant and worthwhile physical education;
• To advise the field of PE about assessment-related concepts informed by research and contemporary practice;
• To identify pressing research questions and avenues for new research in the area of PE assessment;
• To provide a supporting rationale for colleagues who wish to apply for research funds to address questions about PE assessment or who have opportunities to work with or influence policy makers.
The main target groups for this position statement are PE teachers, PE pre-service teachers, PE curriculum officers, PE teacher educators, PE researchers, PE administrators and PE policy makers. How was this position statement created? The AIESEP specialist seminar ‘Future Directions in PE Assessment’ was held from 18-20 October 2018 at Fontys University of Applied Sciences in Eindhoven, the Netherlands. The seminar aimed to bring together leading scholars in the field to present and discuss ‘evidence-informed’ views on various topics around PE assessment. It brought together 71 experts from 20 countries (see appendix 2) to share research on PE assessment via keynote lectures and research presentations and to discuss assessment-related issues in interactive sessions. Input from this meeting informed a first draft of the statement. This first draft was sent to all participants of the specialist seminar for feedback, from which a second draft was created. This draft was presented at the AIESEP International Conference 2019 in Garden City, New York, after which further feedback was collected from participants both on site and through an online survey. The main contributors to the writing of the position statement are mentioned in appendix 1.
Approval was granted by the AIESEP Board on May 7th, 2020. Largely in keeping with the main themes of the AIESEP specialist seminar ‘Future Directions in PE Assessment’, this Position Statement is divided into the following sections: Assessment Literacy; Accountability & Policy; Instructional Alignment; Assessment for Learning; Physical Education Teacher Education (PETE) and Continuing Professional Development; Digital Technology in PE Assessment. These sections are preceded by a brief overview of research data on PE. The statement concludes with directions for future research.
This chapter describes the use of a scoring rubric to encourage students to improve their information literacy skills. It explains how students apply the rubric to supply feedback on their peers’ performance in information problem solving (IPS) tasks. Supplying feedback appears to be a promising learning approach for acquiring knowledge about information literacy, not only for the assessed but also for the assessor. The peer assessment approach helps the feedback supplier to actively construct sustainable knowledge about the IPS process. This knowledge surpasses the construction of basic factual knowledge – level 1 of the ‘Revised taxonomy of learning objectives’ (Krathwohl, 2002) – and stimulates the understanding and application of the learning content as well as the more complex cognitive processes of analysis, evaluation and creation. This is the author version of a chapter published by Elsevier.
Purpose: The main purpose of the research was to measure the reliability and validity of the Scoring Rubric for Information Literacy (Van Helvoort, 2010). Design/methodology/approach: Percentages of agreement and Intraclass Correlation were used to describe interrater reliability. For the determination of construct validity, factor analysis and reliability analysis were used. Criterion validity was calculated with Pearson correlations. Findings: In the described case, the Scoring Rubric for Information Literacy appears to be a reliable and valid instrument for the assessment of information literate performance. Originality/value: Reliability and validity are prerequisites for recommending a rubric for application. The results confirm that this Scoring Rubric for Information Literacy can be used in courses in higher education, not only for assessment purposes but also to foster learning. [The original article is available from Emerald at http://dx.doi.org/10.1108/JD-05-2016-0066]
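The Pearson correlation used for criterion validity above can be sketched in a few lines. The scores below are invented for illustration (hypothetical rubric totals paired with an external criterion grade), not data from the study.

```python
import math

def pearson(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: rubric totals for six students and an external criterion grade.
rubric_totals = [12, 15, 9, 18, 14, 11]
criterion_grade = [6.0, 7.5, 5.0, 8.5, 7.0, 5.5]

print(round(pearson(rubric_totals, criterion_grade), 2))  # 0.99
```

A correlation near 1 between rubric totals and the criterion measure would support criterion validity; in practice a dedicated statistics library would also report a significance level.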
The purpose of this literature study was to obtain an overview of previous civic literacy projects and their characteristics as primarily described in the educational science literature. Eighteen academic articles on civic literacy projects in higher education were studied in detail and coded using the qualitative data analysis instrument Atlas.ti. The codes and quotations compiled were then divided into various categories and represented in a two-axis model. The definitions of ‘civic literacy’ found in the literature varied from an interest in social issues and a critical attitude to a more activist attitude (axis 1). The analysis of the literature showed that, especially in more recent years, more students than citizens have benefited from civic literacy projects in higher education (axis 2). The visualisation of the findings in the two-axis model helps to place civic literacy projects in a broader frame. [The final authenticated version is available online at https://doi.org/10.1007/978-3-030-13472-3_9]