The main purpose of the research was the development and testing of an assessment tool for grading Dutch students' performance in information problem solving during their study tasks. Scholarly literature suggests that an analytical scoring rubric would be a good tool for this. Described in this article are the construction process of such a scoring rubric and the evaluation of the prototype, based on an assessment of its usefulness in educational practice, its efficiency in use and the reliability of the rubric. To test this last point, the rubric was used by two professors when they graded the same set of student products. Interrater reliability for the professors' gradings was estimated by calculating the absolute agreement of the scores, the adjacent agreement and the decision consistency. An English version of the scoring rubric has been added to this journal article as an appendix. This rubric can be used in various discipline-based courses in higher education in which information problem solving is one of the learning activities. After evaluating the prototype, it was concluded that the rubric is particularly useful to graders because it keeps them focused on relevant aspects during the grading process. If the rubric is used for summative evaluation of credit-bearing student work, it is strongly recommended to use the scoring scheme as a whole and to have the grading done by at least two different markers. [Jos van Helvoort & the Chartered Institute of Library and Information Professionals-Information Literacy Group]
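The two agreement measures named above can be sketched in a few lines. The scores below are hypothetical rubric levels (1-4) for ten student products, invented purely for illustration; they are not data from the study.

```python
# Hypothetical scores from two graders for the same ten student products.
rater_a = [3, 2, 4, 1, 3, 2, 4, 3, 2, 1]
rater_b = [3, 3, 4, 1, 2, 2, 4, 3, 3, 1]

pairs = list(zip(rater_a, rater_b))
# Absolute agreement: both raters assign exactly the same score.
absolute = sum(a == b for a, b in pairs) / len(pairs)
# Adjacent agreement: the two scores differ by at most one level.
adjacent = sum(abs(a - b) <= 1 for a, b in pairs) / len(pairs)

print(f"absolute agreement: {absolute:.0%}")  # 70%
print(f"adjacent agreement: {adjacent:.0%}")  # 100%
```

Adjacent agreement is, by construction, always at least as high as absolute agreement, which is why rubric studies usually report both.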
DOCUMENT
The purpose of this article is to expand on a previous study on the development of a scoring rubric for information literacy [1]. The present paper examines how students at the Department of Information Services and Information Management, The Hague University, use the scoring rubric for their school work and/or in their regular jobs and social life. The research presented here focuses on a group of adult students who follow a part-time evening variant of the Bachelor curriculum. The methods employed in this study consisted of an online survey to select students who had used the scoring rubric at least once after the workshop in which it was introduced. Following on from this, a focus group with respondents who had answered positively to the invitation at the end of the survey was organised and chaired by a neutral moderator. The samples that could be used in this research were very small, so the findings cannot be generalised to other groups of students. Nevertheless, the results appear to be of relevance to the IL community. The students who participated in the focus group reported that they used the rubric for self-assessment throughout the course, in subsequent courses, and to become more critical of their own writings and those of other people. The research also makes clear that adult students appreciate the feedback generated by completing the scoring rubric form, but that this is not a substitute for the face-to-face feedback they receive from their teachers. [This is the author version for which Elsevier has given permission.]
DOCUMENT
Purpose: The main purpose of the research was to measure the reliability and validity of the Scoring Rubric for Information Literacy (Van Helvoort, 2010). Design/methodology/approach: Percentages of agreement and Intraclass Correlation were used to describe interrater reliability. For the determination of construct validity, factor analysis and reliability analysis were used. Criterion validity was calculated with Pearson correlations. Findings: In the described case, the Scoring Rubric for Information Literacy appears to be a reliable and valid instrument for the assessment of information literate performance. Originality/value: Reliability and validity are prerequisites for recommending a rubric for application. The results confirm that this Scoring Rubric for Information Literacy can be used in courses in higher education, not only for assessment purposes but also to foster learning. [The original article is available from Emerald at http://dx.doi.org/10.1108/JD-05-2016-0066]
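The criterion-validity statistic named above, the Pearson correlation, can be sketched as follows. The rubric totals and course grades below are invented for illustration only and do not come from the study.

```python
import math

# Hypothetical rubric total scores and final course grades for eight students.
rubric = [28, 31, 22, 35, 27, 30, 25, 33]
grade  = [6.5, 7.0, 5.5, 8.0, 6.0, 7.5, 6.0, 7.5]

def pearson(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(round(pearson(rubric, grade), 2))  # → 0.96
```

A value this close to 1 would indicate strong criterion validity; in practice the correlation would be computed over a full cohort, and the Intraclass Correlation for interrater reliability requires a dedicated statistics package rather than this hand-rolled sketch.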
MULTIFILE
The aim of this research was to gather evidence-based arguments for the use of the scoring rubric for performance assessment of information literacy [1] in Dutch Universities of Applied Sciences. Faculty members from four different departments of The Hague University were interviewed about the ways in which they use the scoring rubric and their arguments for doing so. A fifth lecturer answered the main question by email. The topic list, which was used as a guide for the interviews, was based on a subject analysis of the scholarly literature on rubric use. Four of the five respondents used (parts of) the rubric for the measurement of students' performance in information use, but none of them used the rubric as it is. The faculty staff told the researcher that the rubric helped them to improve the grading criteria for existing assignments. Only one respondent used the rubric itself, but this lecturer extended it with some new criteria on writing skills. It was also discovered that the rubric is used not only for grading but also for the development of new learning content on research skills. [The version published here is the accepted paper of the original published on www.springerlink.com. The official publication can be downloaded at http://link.springer.com/chapter/10.1007/978-3-319-03919-0_58]
DOCUMENT
This chapter describes the use of a scoring rubric to encourage students to improve their information literacy skills. It explains how the students apply the rubric to supply feedback on their peers' performance in information problem solving (IPS) tasks. Supplying feedback appears to be a promising learning approach for acquiring knowledge about information literacy, not only for the assessed but also for the assessor. The peer assessment approach helps the feedback supplier to actively construct sustainable knowledge about the IPS process. This knowledge surpasses the construction of basic factual knowledge – level 1 of the 'Revised taxonomy of learning objectives' (Krathwohl, 2002) – and stimulates the understanding and application of the learning content as well as the more complex cognitive processes of analysis, evaluation and creation. [This is the author version of a chapter published by Elsevier.]
DOCUMENT
URL-conference: http://ecil2017.ilconf.org/
DOCUMENT
from the article: "Purpose: The purpose of this paper is to design a rubric instrument for assessing oral presentation performance in higher education and to test its validity with an expert group. Design/methodology/approach: This study, using mixed methods, focusses on: designing a rubric by identifying assessment instruments in previous presentation research and implementing essential design characteristics in a preliminarily developed rubric; and testing the validity of the constructed instrument with an expert group of higher educational professionals (n=38). Findings: The result of this study is a validated rubric instrument consisting of 11 presentation criteria, their related levels in performance, and a five-point scoring scale. These adopted criteria correspond to the widely accepted main criteria for presentations, in both the literature and educational practice, regarding aspects such as content of the presentation, structure of the presentation, interaction with the audience and presentation delivery. Practical implications: Implications for the use of the rubric instrument in educational practice concern the extent to which the identified criteria should be adapted to the requirements of presenting in a certain domain, and whether the amount and complexity of the information in the rubric, as criteria, levels and scales, can be used in an adequate manner within formative assessment processes. Originality/value: This instrument offers the opportunity to formatively assess students' oral presentation performance, since rubrics explicate criteria and expectations. Furthermore, such an instrument also facilitates feedback and self-assessment processes. Finally, the rubric resulting from this study could be used in future quasi-experimental studies to measure students' development in presentation performance in a pre- and post-test situation."
LINK
The main research question in this chapter was: which information problem solving skills are, according to the lecturers in the Bachelor of ICT, important for their students? Selecting items from a results list and judging information on currency, relevance and reliability were regarded as extremely important by most of the interviewed lecturers. All these sub-skills refer to the third criterion of the scoring rubric, the quality of the primary sources. As mentioned before, one of the NSE lecturers holds the opinion that students should improve their behaviour on exactly this point. Another sub-skill that is seen as very important by the interviewees is the analysis of information to be applied in the student's own knowledge product. This refers to the fifth criterion of the rubric, the creation of new knowledge. The quality of primary sources and the creation of new knowledge criteria both carry extra weight in the grading process with the scoring rubric. A third criterion that also carries extra weight ('orientation on the topic') was mentioned as an important sub-skill by some interviewees, but not as explicitly as the other two criteria. One of the facets of information problem solving that needs improvement, according to one of the lecturers, is the reflection on the whole process, to stimulate the anchoring of this mode of working. Within the concept of information problem solving, higher order skills (orientation and question formulation, judging information and creation of new knowledge) are distinguished from lower order skills (reference list, in-text citations, the selection of keywords and databases). Considering all results of this research, one can conclude that the importance of the higher order IPS skills – which refer to 'learning to think' (Elshout, 1990) – is recognised by most of the interviewed lecturers. The lower order skills are considered less important by most of them.
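The extra weighting of the three criteria named above can be sketched as a weighted sum. The criterion names follow the chapter, but the scores, the weight values and the assumed 1-4 scale are illustrative assumptions, not taken from the rubric itself.

```python
# Hypothetical scores (assumed 1-4 scale) for one student product.
scores = {
    "orientation on the topic": 3,
    "question formulation": 4,
    "quality of primary sources": 3,
    "reference list": 4,
    "creation of new knowledge": 2,
}
# The chapter states that three criteria carry extra weight;
# the weight values themselves are assumptions for this sketch.
weights = {
    "orientation on the topic": 2,     # extra weight (assumed value)
    "question formulation": 1,
    "quality of primary sources": 2,   # extra weight (assumed value)
    "reference list": 1,
    "creation of new knowledge": 2,    # extra weight (assumed value)
}

total = sum(scores[c] * weights[c] for c in scores)
maximum = sum(4 * w for w in weights.values())  # 4 = assumed top level
print(f"weighted score: {total}/{maximum}")  # weighted score: 24/32
```

The effect of the weighting is that a weak performance on a higher order criterion, such as the creation of new knowledge here, pulls the total down more than the same score on a lower order criterion would.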
DOCUMENT
In recent years, much attention has been paid in Dutch higher professional education to the legitimacy of degrees. The report "Vreemde ogen dwingen" gave an impulse to improving the graduation processes of virtually all degree programmes. Part of this improvement effort is the refinement of the models with which final theses are assessed. These models must contribute to an assessment that is valid, reliable and transparent to the student. An important question is how such a model can be designed, and whether an assessment model is specific to one programme or can be shared by several programmes. This article describes a project at Hogeschool Utrecht in which a joint assessment model for graduation in the economics domain was developed. It proved possible to design a joint assessment form that leaves room for testing programme-specific final qualifications. The article describes the way in which the assessment model was developed, presents the result of this development, and identifies a number of success factors.
LINK