A growing number of higher education programmes in the Netherlands have implemented programmatic assessment, an assessment concept in which the formative and summative functions of assessment are intertwined. Although there is consensus about the theoretical principles of programmatic assessment, programmes make various specific design choices that fit their own context. In this factsheet we give insight into the design choices Dutch higher education programmes make when implementing programmatic assessment.
DOCUMENT
Assessment in higher education (HE) is often focused on concluding modules with one or more tests that students need to pass. As a result, both students and teachers are primarily concerned with the summative function of assessment: information from tests is used to make pass/fail decisions about students. In recent years, increasing attention has been paid to the formative function of assessment, and the focus has shifted towards how assessment can stimulate learning. However, this also leads to a search for balance between the two functions of assessment. Programmatic assessment (PA) is an assessment concept in which the intertwining of both functions is embraced to strike a new balance. A growing number of higher education programmes have implemented PA. Although there is consensus about the theoretical principles that form the basis for the design of PA, programmes make various specific design choices based on these principles, fitting their own context. This paper provides insight into the design choices that programmes make when implementing PA and into the considerations that play a role in making these design choices. Such an overview is important for research purposes because it creates a framework for investigating the effects of different design choices within PA.
DOCUMENT
Abstract. Purpose: The primary aim of this study was to investigate the effect of including the Dutch National Pharmacotherapy Assessment (DNPA) in the medical curriculum on the level and development of prescribing knowledge and skills of junior doctors. The secondary aim was to evaluate the relationship between the curriculum type and the prescribing competence of junior doctors. Methods: We re-analysed the data of a longitudinal study conducted in 2016 involving recently graduated junior doctors from 11 medical schools across the Netherlands and Belgium. Participants completed three assessments during the first year after graduation (around graduation (±4 weeks), 6 months after graduation, and 1 year after graduation), each of which contained 35 multiple-choice questions (MCQs) assessing knowledge and three clinical case scenarios assessing skills. Only one medical school used the DNPA in its medical curriculum; the other medical schools used conventional means to assess prescribing knowledge and skills. Five medical schools were classified as providing solely theoretical clinical pharmacology and therapeutics (CPT) education; the others provided both theoretical and practical CPT education (mixed curriculum). Results: Of the 1584 invited junior doctors, 556 (35.1%) participated; 326 (58.6%) completed the MCQs and 325 (58.5%) the clinical case scenarios in all three assessments. Junior doctors whose medical curriculum included the DNPA had higher knowledge scores than the other junior doctors (76.7% [SD 12.5] vs. 67.8% [SD 12.6]; 81.8% [SD 11.1] vs. 76.1% [SD 11.1]; 77.0% [SD 12.1] vs. 70.6% [SD 14.0]; p<0.05 for all three assessments). There was no difference in skills scores at the moment of graduation (p=0.110), but after 6 and 12 months junior doctors whose medical curriculum included the DNPA had higher skills scores (both p<0.001). Junior doctors educated with a mixed curriculum had significantly higher scores for both knowledge and skills than junior doctors educated with a theoretical curriculum (p<0.05 in all assessments). Conclusion: Our findings suggest that the inclusion of the knowledge-focused DNPA in the medical curriculum improves the prescribing knowledge, but not the skills, of junior doctors at the moment of graduation. However, after 6 and 12 months, both knowledge and skills scores were higher in the junior doctors whose medical curriculum included the DNPA. A curriculum that provides both theoretical and practical education seems to improve both prescribing knowledge and skills relative to a solely theoretical curriculum.
MULTIFILE
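The abstract above reports group differences as means with standard deviations and p-values but does not name the statistical test used. Purely as an illustration of such a between-group comparison, here is a minimal Python sketch assuming an independent-samples Welch t-test; the score arrays are placeholders, since the study's raw data are not included in the abstract.

```python
# Placeholder knowledge scores (%) for two illustrative groups; these numbers
# are invented for demonstration and do not reproduce the study data.
from scipy import stats

dnpa_group = [76.7, 81.2, 74.3, 79.8, 72.5]    # curriculum including the DNPA
other_group = [67.8, 70.1, 66.2, 69.4, 64.9]   # conventional curricula

# Welch's t-test (equal_var=False) does not assume equal group variances.
t_stat, p_value = stats.ttest_ind(dnpa_group, other_group, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```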
In programmatic assessment (PA), an arrangement of different assessment methods is deliberately designed across the entire curriculum, combined and planned to support both robust decision-making and student learning. In health sciences education, evidence about the merits and pitfalls of PA is emerging. Although there is consensus about the theoretical principles of PA, programs make diverse design choices based on these principles to implement PA in practice, fitting their own contexts. We therefore need a better understanding of how the PA principles are implemented across contexts—within and beyond health sciences education. In this study, interviews were conducted with teachers/curriculum designers representing nine different programs in diverse professional domains. Research questions focused on: (1) design choices made, (2) whether these design choices adhere to PA principles, (3) student and teacher experiences in practice, and (4) context-specific differences between the programs. A wide range of design choices were reported, largely adhering to PA principles but differing across cases due to contextual alignment. Design choices reported by almost all programs include a backbone of learning outcomes; data-points connected to this backbone in a longitudinal design allowing uptake of feedback; intermediate reflective meetings; and decision-making by a committee, based on a multitude of data-points and involving multi-stage procedures. Contextual design choices were made to align the design with the professional domain and with practical feasibility. Further research is needed, in particular with regard to intermediate-stakes decisions.
LINK
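The recurring design choices reported above (a backbone of learning outcomes, longitudinal data-points connected to that backbone, and committee decisions over many data-points) can be pictured as a simple data model. A minimal sketch follows; every name and threshold is hypothetical rather than taken from any of the nine programs studied.

```python
from dataclasses import dataclass, field

@dataclass
class DataPoint:
    """One low-stakes assessment moment, linked to outcomes in the backbone."""
    outcome_ids: list[str]   # which learning outcomes this data-point informs
    feedback: str            # narrative feedback for uptake by the student
    score: float             # illustrative numeric evidence (0-100)

@dataclass
class Portfolio:
    data_points: list[DataPoint] = field(default_factory=list)

    def evidence_for(self, outcome_id: str) -> list[DataPoint]:
        """Collect all data-points that speak to one learning outcome."""
        return [dp for dp in self.data_points if outcome_id in dp.outcome_ids]

def committee_ready(portfolio: Portfolio, backbone: list[str], minimum: int = 3) -> bool:
    """Hypothetical rule: the decision committee convenes only once every
    outcome in the backbone is covered by at least `minimum` data-points."""
    return all(len(portfolio.evidence_for(o)) >= minimum for o in backbone)
```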
The main purpose of the research was the development and testing of an assessment tool for the grading of Dutch students' performance in information problem solving during their study tasks. Scholarly literature suggests that an analytical scoring rubric would be a good tool for this. Described in this article are the construction process of such a scoring rubric and the evaluation of the prototype, based on an assessment of its usefulness in educational practice, its efficiency in use, and the reliability of the rubric. To test this last point, the rubric was used by two professors to grade the same set of student products. Interrater reliability for the professors' gradings was estimated by calculating the absolute agreement of the scores, the adjacent agreement, and the decision consistency. An English version of the scoring rubric has been added to this journal article as an appendix. This rubric can be used in various discipline-based courses in Higher Education in which information problem solving is one of the learning activities. After evaluating the prototype, it was concluded that the rubric is particularly useful to graders because it keeps them focused on relevant aspects during the grading process. If the rubric is used for summative evaluation of credit-bearing student work, it is strongly recommended to use the scoring scheme as a whole and to have the grading done by at least two different markers. [Jos van Helvoort & the Chartered Institute of Library and Information Professionals-Information Literacy Group]
DOCUMENT
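Two of the agreement measures mentioned in the abstract above, absolute agreement and adjacent agreement, are simple proportions over paired gradings. A minimal sketch, assuming two raters scored the same products on an integer rubric scale; the gradings shown are placeholders, not the article's data.

```python
def absolute_agreement(rater_a: list[int], rater_b: list[int]) -> float:
    """Proportion of products on which both raters gave the identical score."""
    pairs = list(zip(rater_a, rater_b))
    return sum(a == b for a, b in pairs) / len(pairs)

def adjacent_agreement(rater_a: list[int], rater_b: list[int], tolerance: int = 1) -> float:
    """Proportion of products on which the scores differ by at most `tolerance` scale points."""
    pairs = list(zip(rater_a, rater_b))
    return sum(abs(a - b) <= tolerance for a, b in pairs) / len(pairs)

# Placeholder gradings for illustration only (rubric scale 1-4):
prof_1 = [3, 2, 4, 3, 1, 2]
prof_2 = [3, 3, 4, 2, 1, 2]
print(round(absolute_agreement(prof_1, prof_2), 2))  # 0.67
print(round(adjacent_agreement(prof_1, prof_2), 2))  # 1.0
```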
Cybersecurity threat and incident managers in large organizations, especially in the financial sector, are confronted more and more with an increase in volume and complexity of threats and incidents. At the same time, these managers have to deal with many internal processes and criteria, in addition to requirements from external parties, such as regulators, that pose an additional challenge to handling threats and incidents. Little research has been carried out to understand to what extent decision support can aid these professionals in managing threats and incidents. The purpose of this research was to develop decision support for cybersecurity threat and incident managers in the financial sector. To this end, we carried out a cognitive task analysis and the first two phases of a cognitive work analysis, based on two rounds of in-depth interviews with ten professionals from three financial institutions. Our results show that decision support should address the problem of balancing the bigger picture with details: that is, it should help these professionals keep the broader operational context in mind while adequately investigating, containing and remediating a cyberattack. In close consultation with the three financial institutions involved, we developed a critical-thinking memory aid that follows typical incident response process steps, but adds big-picture elements and critical thinking steps. This should make cybersecurity threat and incident managers more aware of the broader operational implications of threats and incidents while keeping a critical mindset. Although a summative evaluation was beyond the scope of the present research, we conducted iterative formative evaluations of the memory aid that show its potential.
DOCUMENT
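The memory aid itself is not reproduced in the abstract above. As a rough illustration of the described structure (typical incident response steps augmented with big-picture and critical-thinking prompts), consider the following sketch, in which every step name and prompt is hypothetical.

```python
# Hypothetical structure for a critical-thinking memory aid; the actual aid
# developed with the three financial institutions is not published here.
MEMORY_AID = {
    "detection & analysis": {
        "big_picture": "Which business services and regulators could this incident touch?",
        "critical_thinking": "What alternative explanations for these indicators have we ruled out?",
    },
    "containment": {
        "big_picture": "What operational impact does isolating this system have?",
        "critical_thinking": "Could containment tip off the attacker or destroy evidence?",
    },
    "eradication & recovery": {
        "big_picture": "Who must sign off before services are restored?",
        "critical_thinking": "What evidence shows the root cause is actually removed?",
    },
}

# Walk through the aid step by step, printing each prompt.
for step, prompts in MEMORY_AID.items():
    print(step.upper())
    for kind, question in prompts.items():
        print(f"  [{kind}] {question}")
```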
Lecture in PhD Programme Life Science Education Research UMCU. Course: Methods of Life Science Education Research. Utrecht, The Netherlands. Abstract: Audit trail procedures are applied as a way to check the validity of qualitative research designs, qualitative analyses, and the claims that are made. Audit trail procedures can be conducted based on the three criteria of visibility, comprehensibility, and acceptability (Akkerman et al., 2008). During an audit trail procedure, all documents and materials resulting from the data gathering and the data analysis are assessed by an auditor. In this presentation, we presented a summative audit trail procedure (Agricola, Prins, Van der Schaaf & Van Tartwijk, 2021), whereas in a second study we used a formative one (Agricola, Van der Schaaf, Prins & Van Tartwijk, 2022). For the two studies, two different auditors were chosen. For the study presented in Agricola et al. (2021), the auditor was one of the PhD supervisors, while for the study presented in Agricola et al. (2022) the auditor was a junior researcher who was not involved in the project. The first auditor had a high level of expertise in the study's topic and methodology and was therefore able to provide a professional and critical assessment report. Although the second auditor might be considered more objective than the first, as she was not involved in the project, more meetings were needed to explain the aim of the study and the aim of the audit trail procedure. There are many ideas about the criteria that qualitative studies should meet (De Kleijn & Van Leeuwen, 2018). I argue that procedures for checking interrater agreement and understanding, triangulation, and audit trail procedures can all increase the internal validity of qualitative studies.
References:
Agricola, B. T., Prins, F. J., van der Schaaf, M. F., & van Tartwijk, J. (2021). Supervisor and student perspectives on undergraduate thesis supervision in higher education. Scandinavian Journal of Educational Research, 65(5), 877-897. https://doi.org/10.1080/00313831.2020.1775115
Agricola, B. T., van der Schaaf, M. F., Prins, F. J., & van Tartwijk, J. (2022). The development of research supervisors' pedagogical content knowledge in a lesson study project. Educational Action Research. https://doi.org/10.1080/09650792.2020.1832551
Akkerman, S., Admiraal, W., Brekelmans, M., & Oost, H. (2008). Auditing quality of research in social sciences. Quality & Quantity, 42(2), 257-274. https://doi.org/10.1007/s11135-006-9044-4
de Kleijn, R. A. M., & van Leeuwen, A. (2018). Reflections and review on the audit procedure: Guidelines for more transparency. International Journal of Qualitative Methods, 17(1), 1-8. https://doi.org/10.1177/1609406918763214
DOCUMENT
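Interrater agreement checks like those mentioned in the lecture abstract above are often quantified with a chance-corrected statistic such as Cohen's kappa; the abstract does not specify which statistic was used, so this is a generic sketch with invented codings.

```python
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Chance-corrected agreement between two raters over the same items:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Placeholder codings for illustration only:
coder_1 = ["theme_a", "theme_b", "theme_a", "theme_c", "theme_b"]
coder_2 = ["theme_a", "theme_b", "theme_b", "theme_c", "theme_b"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.69
```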
Adversarial thinking is essential when dealing with cyber incidents and for finding security vulnerabilities. Capture the Flag (CTF) competitions are used all around the world to stimulate adversarial thinking. Jeopardy-style CTFs, given their challenge-and-answer based nature, are used more and more in cybersecurity education as a fun and engaging way to inspire students. Just like traditional written exams, Jeopardy-style CTFs can be used as summative assessment: did the student provide the correct answer, yes or no; did the participant in the CTF competition solve the challenge, yes or no. This research project provides a framework for measuring the learning outcomes of a Jeopardy-style CTF and applies this framework to two CTF events as case studies. During these case studies, participants were tested on their knowledge and skills in the field of cybersecurity and queried on their attitude towards CTF education. Results show that the main difference between a traditional written exam and a Jeopardy-style CTF is the way in which questions are formulated. CTF education is stated to be challenging and fun because questions are formulated as puzzles that need to be solved in a gamified and competitive environment. Just like with traditional written exams, no additional insight has been observed into why the participant thinks the correct answer is the correct answer, or into whether the participant really learned anything new by participating. Given that the main difference between a traditional written exam and a Jeopardy-style CTF is the way in which questions are formulated, learning outcomes can be measured in the same way: we can ask how many participants solved which challenge and to which measurable statements about knowledge, skills and attitude in the field of cybersecurity each challenge is related. However, when mapping the descriptions of the quiz questions and challenges from the two case-study CTF events to the NICE framework on Knowledge, Skills and Abilities in cybersecurity, the NICE framework did not provide us with detailed measurable statements that could be used in education: where the descriptions of the quiz questions and challenges were specific, the learning outcomes of the NICE framework are formulated only in quite general terms. Finally, some evidence for Csíkszentmihályi's theory of Flow has been observed. Following the theory of Flow, a person can become fully immersed in performing a task, also known as "being in the zone", if the "challenge level" of the task is in line with the person's "skill level". The person's mental state towards a task will differ depending on the challenge level of the task and the skill level required for completing it. Results show that participants found some challenges difficult and fun, while other challenges were easy and boring. As a result of this project, a guide/checklist is provided for those intending to use CTFs in education.
DOCUMENT
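The measurement approach proposed above (count who solved which challenge, and relate each challenge to measurable statements about knowledge, skills and attitude) can be expressed as a small computation. A sketch with made-up challenge names, placeholder statement IDs (deliberately not real NICE framework entries), and invented solve records:

```python
# Hypothetical challenge-to-statement mapping and invented solve data; nothing
# here reproduces the two case-study CTF events.
from collections import defaultdict

challenge_to_statements = {
    "web_sqli_basics": ["K-placeholder-01", "S-placeholder-07"],
    "crypto_caesar":   ["K-placeholder-02"],
    "forensics_pcap":  ["S-placeholder-07", "S-placeholder-12"],
}
solved_by = {  # which participants (by anonymous id) solved each challenge
    "web_sqli_basics": {1, 2, 5, 8},
    "crypto_caesar":   {1, 3, 4, 5, 6, 8},
    "forensics_pcap":  {2, 5},
}
participants = 8

# For each measurable statement, the share of participants who demonstrated it
# by solving at least one challenge mapped to that statement.
demonstrated = defaultdict(set)
for challenge, statements in challenge_to_statements.items():
    for statement in statements:
        demonstrated[statement] |= solved_by[challenge]

for statement, solvers in sorted(demonstrated.items()):
    print(f"{statement}: {len(solvers) / participants:.0%} of participants")
```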
This paper describes the participatory development process of a web-based communication system focusing on disease management, particularly infection control of Methicillin-resistant Staphylococcus aureus (MRSA). These infections are becoming a major public health issue; they can have serious consequences such as pneumonia, sepsis or death [1]. This makes it even more important for people to be provided with up-to-date and reliable information. Users of a bilingual (Dutch and German) communication system participated in the development process via a needs assessment, via co-creation of the content and the system in usability tests, and via a summative evaluation of the usage of the system. The system enabled users to search efficiently and effectively for practical and relevant information. Moreover, we found that the participation of the intended users is a prerequisite for creating a fit between the needs and expectations of the end-users, the technology, and the social context in which the technology is used. The summative evaluation showed that the system was frequently used (approximately 11,000 unique visitors per month). The most popular categories included 'MRSA in general' (20%, both languages) and 'Acquiring MRSA' (17% NL, 13% GER). Most users enter the site via internet search engines (Google) rather than the on-site search engine. Once on the site, they prefer convenient searching via FAQs or related questions. Furthermore, the results showed that the participation of stakeholders is a prerequisite for a successful implementation of the system. To guide the participation of stakeholders, we developed a roadmap that integrates human-centered development with business modelling activities.
DOCUMENT