Teachers’ assessment literacy affects the quality of assessments and is, therefore, an essential part of teachers’ competence. Recent studies define assessment literacy as a dynamic, contextual and social construct, situated in practice and mediated by teachers’ identity and conceptions of assessment. This study provides a further elaboration of assessment literacy by exploring teachers’ conceptions of assessment literacy from a sociocultural perspective. Eleven online focus group interviews were conducted within the context of Dutch higher professional education between June and December 2020. A template analysis method was used to analyse the data. Seven interrelated aspects of assessment literacy were identified, namely ‘continuously developing assessment literacy’, ‘conscientious decision making’, ‘aligning’, ‘collaborating’, ‘discussing’, ‘improving and innovating’, and ‘coping with tensions’. This representation of assessment literacy, based on teachers’ conceptions, may guide teachers’ development of assessment literacy in practice.
Formative assessment (FA) is an effective educational approach for optimising student learning and is considered a promising avenue for assessment within physical education (PE). Nevertheless, implementing FA is a complex and demanding task for in-service PE teachers, who often lack formal training on this topic. To better support PE teachers in implementing FA in their practice, we need better insight into teachers’ experiences while designing and implementing formative strategies. However, knowledge on this topic is limited, especially within PE. Therefore, this study examined the experiences of 15 PE teachers who participated in an 18-month professional development programme. Teachers designed and implemented various formative activities within their PE lessons, while their experiences were investigated through logbook entries and focus groups. Findings indicated various positive experiences, such as increased transparency in learning outcomes and success criteria for students as well as increased student involvement, but also revealed complexities, such as shifting teacher roles and insufficient feedback literacy among students. Overall, the findings of this study underscore the importance of a sustained, collaborative, and supported approach to implementing FA.
Background: Accurate measurement of health literacy is essential to improve the accessibility and effectiveness of health care and prevention. One measure frequently applied in international research is the Short Assessment of Health Literacy (SAHL). While the Dutch SAHL (SAHL-D) has proven to be valid and reliable, its administration is time-consuming and burdensome for participants. Our aim was to further validate, strengthen and shorten the SAHL-D using Rasch analysis. Methods: Available cross-sectional SAHL-D data from adult samples (N = 1231) were used to assess unidimensionality, local independence, item fit, person fit, item hierarchy, scale targeting, precision (person reliability and person separation), and the presence of differential item functioning (DIF) by age, gender, education and study sample. Results: Thirteen items were selected for a short form based on item fit and DIF, and scale properties were compared between the two forms. The long form had several items with DIF for age, gender, educational level and study sample. Both forms showed lower measurement precision at higher health literacy levels. Conclusions: The findings support the validity and reliability of both the long form and the short form of the SAHL-D; the short form can be used for a rapid assessment of health literacy in research and clinical practice.
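As an illustration of what a Rasch calibration of dichotomous items involves, the sketch below fits the model by joint maximum likelihood on simulated responses and reports an outfit mean-square per item. This is a minimal Python sketch, not the authors' analysis: real studies typically use dedicated Rasch software, and the data, sample size and difficulty values here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated dichotomous responses (rows = persons, columns = items);
# these stand in for the real SAHL-D data, which is not reproduced here.
n_persons, n_items = 500, 13
theta_true = rng.normal(0.0, 1.0, n_persons)     # person abilities
b_true = np.linspace(-2.0, 2.0, n_items)         # item difficulties
prob = 1.0 / (1.0 + np.exp(-(theta_true[:, None] - b_true[None, :])))
X = (rng.random((n_persons, n_items)) < prob).astype(float)

# Joint maximum likelihood for the Rasch model:
#   P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))
theta = np.zeros(n_persons)
b = np.zeros(n_items)
for _ in range(100):                             # alternating Newton steps
    e = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    theta += (X - e).sum(axis=1) / (e * (1.0 - e)).sum(axis=1)
    theta = np.clip(theta, -5.0, 5.0)            # bound perfect/zero scores
    e = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    b -= (X - e).sum(axis=0) / (e * (1.0 - e)).sum(axis=0)
    b -= b.mean()                                # identification: mean difficulty 0

# Outfit mean-square per item (mean squared standardised residual);
# values far from 1 flag items that misfit the model.
e = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
outfit = ((X - e) ** 2 / (e * (1.0 - e))).mean(axis=0)
print("estimated difficulties:", np.round(b, 2))
print("outfit mean-squares:   ", np.round(outfit, 2))
```

Item selection for a short form would then proceed by inspecting such fit values (together with DIF statistics), rather than the simulated output shown here.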
The main purpose of the research was the development and testing of an assessment tool for grading Dutch students' performance in information problem solving during their study tasks. Scholarly literature suggests that an analytical scoring rubric would be a good tool for this. Described in this article are the construction process of such a scoring rubric and the evaluation of the prototype, based on an assessment of its usefulness in educational practice, its efficiency in use and the reliability of the rubric. To test the last point, the rubric was used by two professors when they graded the same set of student products. Interrater reliability for the professors' grading was estimated by calculating the absolute agreement of the scores, the adjacent agreement and the decision consistency. An English version of the scoring rubric has been added to this journal article as an appendix. This rubric can be used in various discipline-based courses in higher education in which information problem solving is one of the learning activities. After evaluating the prototype, it was concluded that the rubric is particularly useful to graders as it keeps them focused on relevant aspects during the grading process. If the rubric is used for summative evaluation of credit-bearing student work, it is strongly recommended to use the scoring scheme as a whole and to have the grading done by at least two different markers. [Jos van Helvoort & the Chartered Institute of Library and Information Professionals – Information Literacy Group]
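The three agreement measures named above are simple to compute once two graders have scored the same products. The sketch below shows one way to do so in Python; the scores and the pass/fail cut-off are invented for illustration and do not come from the article.

```python
import numpy as np

# Hypothetical scores of two graders for the same 20 student products,
# on a four-point rubric scale (1-4).
rater_a = np.array([3, 2, 4, 1, 3, 3, 2, 4, 2, 3, 1, 4, 3, 2, 3, 4, 2, 1, 3, 2])
rater_b = np.array([3, 2, 3, 1, 3, 4, 2, 4, 2, 2, 1, 4, 3, 3, 3, 4, 2, 2, 3, 2])

diff = np.abs(rater_a - rater_b)
absolute_agreement = np.mean(diff == 0)   # identical scores
adjacent_agreement = np.mean(diff <= 1)   # at most one scale point apart

# Decision consistency: do both graders reach the same pass/fail decision?
# The cut-off (pass = score of 3 or higher) is an assumption for this example.
cutoff = 3
decision_consistency = np.mean((rater_a >= cutoff) == (rater_b >= cutoff))

print(f"absolute agreement:   {absolute_agreement:.2f}")
print(f"adjacent agreement:   {adjacent_agreement:.2f}")
print(f"decision consistency: {decision_consistency:.2f}")
```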
The aim of this research was to gather evidence-based arguments for the use of the scoring rubric for performance assessment of information literacy [1] in Dutch Universities of Applied Sciences. Faculty members from four different departments of The Hague University were interviewed about the ways in which they used the scoring rubric and their reasons for doing so. A fifth lecturer answered the main question by email. The topic list that guided the interviews was based on a subject analysis of the scholarly literature on rubric use. Four of the five respondents used (parts of) the rubric for the measurement of students' performance in information use, but none of them used the rubric as is. The faculty members reported that the rubric helped them to improve the grading criteria for existing assignments. Only one respondent used the rubric itself, and this lecturer extended it with some new criteria on writing skills. It was also discovered that the rubric is used not only for grading but also for the development of new learning content on research skills. [The version published here is the accepted paper of the original published at www.springerlink.com. The official publication can be downloaded from http://link.springer.com/chapter/10.1007/978-3-319-03919-0_58]
The aim of part 3 is the development of basic instruments to measure respondents' resilience to disinformation. Cases and examples of disinformation used in the instruments will be taken from a COVID-19 context where applicable. People who are resilient to COVID-19 disinformation are assumed to be 'media and information literate'. Therefore, the construct that the instruments aim to measure is Media and Information Literacy, abbreviated as MIL. The instruments to be developed must be adaptable to different target groups (pupils, library staff and teachers). The basic instruments will therefore contain, for instance, scales that can be modified to measure the effectiveness of the train-the-trainer workshops as well as that of fake news workshops in secondary education. The final instruments will be used in the IO3 phase to make recommendations for improvement. Analyses of the results of those final assessments will be performed for each country separately. Because the basic instruments developed in output 1 are intended to be used as pre- and post-tests in output 2, the focus will be on the impact of the interventions. For evaluating the processes during the interventions and the participant experiences, extra instruments should be developed.
Purpose: The main purpose of the research was to measure the reliability and validity of the Scoring Rubric for Information Literacy (Van Helvoort, 2010). Design/methodology/approach: Percentages of agreement and Intraclass Correlation were used to describe interrater reliability. For the determination of construct validity, factor analysis and reliability analysis were used. Criterion validity was calculated with Pearson correlations. Findings: In the described case, the Scoring Rubric for Information Literacy appears to be a reliable and valid instrument for the assessment of information literate performance. Originality/value: Reliability and validity are prerequisites for recommending a rubric for application. The results confirm that this Scoring Rubric for Information Literacy can be used in courses in higher education, not only for assessment purposes but also to foster learning. [The original article can be found at Emerald: http://dx.doi.org/10.1108/JD-05-2016-0066]
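To make these reliability and validity statistics concrete, the sketch below computes an intraclass correlation for two raters and a Pearson correlation against an external criterion in Python. The scores and grades are invented, and the ICC(2,1) variant (two-way random effects, absolute agreement, after Shrout and Fleiss) is an assumption, since the abstract does not state which ICC specification was used.

```python
import numpy as np
from scipy import stats

# Hypothetical rubric totals from two raters for the same ten students.
scores = np.array([[14, 15], [10, 11], [18, 17], [12, 12], [16, 14],
                   [9, 10], [13, 13], [17, 18], [11, 12], [15, 15]], float)
n, k = scores.shape

# ICC(2,1): (MSR - MSE) / (MSR + (k-1)*MSE + k*(MSC - MSE)/n)
row_means, col_means, grand = scores.mean(1), scores.mean(0), scores.mean()
ms_rows = k * np.var(row_means, ddof=1)          # between-subjects mean square
ms_cols = n * np.var(col_means, ddof=1)          # between-raters mean square
ss_err = ((scores - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
ms_err = ss_err / ((n - 1) * (k - 1))            # residual mean square
icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Criterion validity: Pearson correlation of mean rubric scores with an
# external criterion (hypothetical course grades).
grades = np.array([7.0, 5.5, 8.5, 6.0, 7.5, 5.0, 6.5, 8.0, 6.0, 7.0])
r, p = stats.pearsonr(scores.mean(1), grades)

print(f"ICC(2,1) = {icc:.2f}")
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```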
The purpose of this literature study was to obtain an overview of previous civic literacy projects and their characteristics as primarily described in the educational science literature. Eighteen academic articles on civic literacy projects in higher education were studied in detail and coded using the qualitative data analysis instrument Atlas.ti. The codes and quotations compiled were then divided into various categories and represented in a two-axis model. The definitions of ‘civic literacy’ found in the literature varied from an interest in social issues and a critical attitude to a more activist attitude (axis 1). The analysis of the literature showed that, especially in more recent years, more students than citizens have benefited from civic literacy projects in higher education (axis 2). The visualization of the findings in the two-axis model helps to place civic literacy projects in a broader frame. The final authenticated version is available online at https://doi.org/10.1007/978-3-030-13472-3_9
From the publisher's website: Large groups in society, in particular people with low literacy, lack the proactivity and problem-solving skills needed to be self-reliant. One omnipresent problem area where these skills are relevant is the completion of forms and questionnaires. These problems could potentially be alleviated by taking advantage of the possibilities of information and communication technology (ICT), for example by offering alternatives to text, interactive self-explaining scales and easily accessible background information on a questionnaire’s rationale. The goal of this paper was to present explorative design guidelines for developing interactive questionnaires for low-literate persons. The guidelines were derived during a user-centered design process of the Dutch Talking Touch Screen Questionnaire (DTTSQ), an interactive health assessment questionnaire used in physical therapy. The DTTSQ was developed to support patients with low health literacy, meaning they have problems with seeking, understanding and using health information. A substantial number of guidelines were derived and are presented according to an existing, comprehensive model. Lessons learned were also drawn from including low-literate persons in the user-centered design process. The guidelines should be made available to ICT developers and, when applied properly, will contribute to the advancement of (health) literacy and empower citizens to fully participate in society.
This chapter describes the use of a scoring rubric to encourage students to improve their information literacy skills. It explains how students apply the rubric to supply feedback on their peers’ performance in information problem solving (IPS) tasks. Supplying feedback appears to be a promising learning approach for acquiring knowledge about information literacy, not only for the assessed but also for the assessor. The peer assessment approach helps the feedback supplier to actively construct sustainable knowledge about the IPS process. This knowledge surpasses the construction of basic factual knowledge – level 1 of the ‘Revised taxonomy of learning objectives’ (Krathwohl, 2002) – and stimulates the understanding and application of the learning content as well as the more complex cognitive processes of analysis, evaluation and creation. This is the author version of a chapter published by Elsevier.