The main purpose of the research was the development and testing of an assessment tool for grading Dutch students' performance in information problem solving during their study tasks. Scholarly literature suggests that an analytical scoring rubric is a good tool for this purpose. This article describes the construction of such a scoring rubric and the evaluation of the prototype, which was assessed on its usefulness in educational practice, its efficiency in use and its reliability. To test this last point, the rubric was used by two professors who graded the same set of student products. Interrater reliability for the professors' gradings was estimated by calculating the absolute agreement of the scores, the adjacent agreement and the decision consistency. An English version of the scoring rubric is added to this journal article as an appendix. The rubric can be used in various discipline-based courses in Higher Education in which information problem solving is one of the learning activities. The evaluation of the prototype led to the conclusion that the rubric is particularly useful to graders because it keeps them focussed on relevant aspects during the grading process. If the rubric is used for summative evaluation of credit-bearing student work, it is strongly recommended to use the scoring scheme as a whole and to have the grading done by at least two different markers. [Jos van Helvoort & the Chartered Institute of Library and Information Professionals-Information Literacy Group]
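The three agreement measures named in this abstract are, in essence, proportions computed over paired rater scores. The Python sketch below shows one common way to compute them; the rater scores, the ten-point scale and the pass mark of 6 are hypothetical illustrations, not data from the study.

```python
# Minimal sketch of three interrater agreement measures, assuming two
# raters scored the same student products on an integer scale 1-10.
# All numbers below are hypothetical, not the study's actual data.

def agreement_measures(rater_a, rater_b, pass_mark=6):
    pairs = list(zip(rater_a, rater_b))
    n = len(pairs)
    # Absolute agreement: proportion of products with identical scores.
    absolute = sum(a == b for a, b in pairs) / n
    # Adjacent agreement: scores differ by at most one scale point.
    adjacent = sum(abs(a - b) <= 1 for a, b in pairs) / n
    # Decision consistency: both raters reach the same pass/fail decision.
    decision = sum((a >= pass_mark) == (b >= pass_mark) for a, b in pairs) / n
    return absolute, adjacent, decision

# Hypothetical scores for eight student products.
rater_a = [7, 5, 8, 6, 4, 9, 6, 7]
rater_b = [7, 6, 8, 5, 4, 8, 7, 7]
print(agreement_measures(rater_a, rater_b))
```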
This chapter describes the use of a scoring rubric to encourage students to improve their information literacy skills. It explains how students apply the rubric to supply feedback on their peers' performance in information problem solving (IPS) tasks. Supplying feedback appears to be a promising learning approach for acquiring knowledge about information literacy, not only for the assessed but also for the assessor. The peer assessment approach helps the feedback supplier to actively construct sustainable knowledge about the IPS process. This knowledge surpasses the construction of basic factual knowledge – level 1 of the 'Revised taxonomy of learning objectives' (Krathwohl, 2002) – and stimulates the understanding and application of the learning content as well as the more complex cognitive processes of analysis, evaluation and creation. This is the author version of a chapter published by Elsevier.
Purpose: The main purpose of the research was to measure the reliability and validity of the Scoring Rubric for Information Literacy (Van Helvoort, 2010). Design/methodology/approach: Percentages of agreement and Intraclass Correlation were used to describe interrater reliability. Construct validity was determined with factor analysis and reliability analysis. Criterion validity was calculated with Pearson correlations. Findings: In the described case, the Scoring Rubric for Information Literacy appears to be a reliable and valid instrument for the assessment of information literate performance. Originality/value: Reliability and validity are prerequisites for recommending a rubric for application. The results confirm that this Scoring Rubric for Information Literacy can be used in courses in higher education, not only for assessment purposes but also to foster learning. The original article is available from Emerald at http://dx.doi.org/10.1108/JD-05-2016-0066
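As a rough illustration of the statistics listed under Design/methodology/approach, the sketch below computes an Intraclass Correlation with the pingouin package and a Pearson correlation with SciPy. All scores, the product/rater layout and the external criterion are hypothetical assumptions for demonstration; the article's actual analysis may differ.

```python
# Hedged illustration of interrater reliability (ICC) and criterion
# validity (Pearson r) on hypothetical rubric scores.
import pandas as pd
import pingouin as pg
from scipy.stats import pearsonr

# Hypothetical rubric scores from two raters for six student products.
df = pd.DataFrame({
    'product': [1, 2, 3, 4, 5, 6] * 2,
    'rater':   ['A'] * 6 + ['B'] * 6,
    'score':   [7, 5, 8, 6, 4, 9, 7, 6, 8, 5, 4, 8],
})

# Intraclass Correlation as a measure of interrater reliability.
icc = pg.intraclass_corr(data=df, targets='product', raters='rater',
                         ratings='score')
print(icc[['Type', 'ICC']])

# Pearson correlation between mean rubric scores and a hypothetical
# external criterion (e.g. course grades), analogous to criterion validity.
criterion = [6.5, 5.0, 8.0, 5.5, 4.5, 8.5]
rubric_mean = df.groupby('product')['score'].mean().tolist()
r, p = pearsonr(rubric_mean, criterion)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```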