Introduction: In March 2014, the New South Wales (NSW) Government (Australia) announced the NSW Integrated Care Strategy. In response, a family-centred, population-based, integrated care initiative for vulnerable families and their children in Sydney, Australia, was developed. The initiative was called Healthy Homes and Neighbourhoods. A realist translational social epidemiology programme of research and collaborative design is at the foundation of its evaluation. Theory and Method: The UK Medical Research Council (MRC) Framework for evaluating complex health interventions was adapted. This has four components, namely 1) development, 2) feasibility/piloting, 3) evaluation and 4) implementation. We adapted the Framework to include critical realist, theory-driven, and continuous-improvement approaches. The modified Framework underpins this research and evaluation protocol for Healthy Homes and Neighbourhoods. Discussion: The NSW Health Monitoring and Evaluation Framework did not make provisions for assessing the programme's layers of context, or the effect of programme mechanisms at each level. We therefore developed a multilevel approach that uses mixed-methods research to examine not only outcomes, but also what is working, for whom, and why.
OBJECTIVES: To improve transmural palliative care for acutely admitted older patients, the PalliSupport transmural care pathway was developed. Implementation of this care pathway proved challenging. The aim of this study was to improve understanding of why the implementation partly failed. DESIGN: A qualitative process evaluation study. SETTING/PARTICIPANTS: 17 professionals who were involved in the PalliSupport program were interviewed. METHODS: Online semi-structured interviews. Thematic analysis was used to create themes according to the implementation framework of Grol & Wensing. RESULTS: Themes emerged within four levels of implementation: 1) the innovation: challenges in current palliative care, the setting of the pathway, and a boost for improvement; 2) the individual professional: feeling (un)involved and motivation; 3) the organizational level: project management; 4) the political and economic level: project plan and evaluation. CONCLUSION AND IMPLICATIONS: We learned that the challenges involved in implementing a transmural care pathway in palliative care should not be underestimated. For successful implementation, we emphasize the importance of creating a program that fits the complexity of transmural palliative care. We suggest starting on a small scale and investing in project management. This could help to involve all stakeholders and anticipate current challenges in palliative care. To increase acceptance, create one care pathway that can start and be used in all care settings. Make sure that there is sufficient flexibility in time and room to adjust the project plan, so that a second pilot study can be performed if needed, and choose a scientific evaluation that combines rigor with practical usefulness to evaluate effectiveness.
Like most universities around the world, the Amsterdam University of Applied Sciences conducts surveys (the student evaluation monitor, STEM) among its students to evaluate its courses and teachers. At the Department of Media, Information and Communication, student response tends to decline over the course of the year. In 2011-2012, with a limited enrolment of 900 first-year students, 70% responded to the first survey, conducted after the first exams in October, but response dropped to 26% in the last survey at the end of the first year (July 2012). In 2012-2013 (with the same number of students), the response rates were 75% and 30% respectively. This decline might be due to several factors, such as the length of the questionnaire, the way the survey is distributed (via e-mail to the students' university accounts), the timing of the surveys (after the courses and exams), or simply a lack of interest. Another problem with the surveys stems from the effort to limit the length of the questionnaires. As a result, some aspects relevant to understanding the success of students (or the return of the department) and the quality of the courses and teachers are not measured, such as coherence between the courses, students' opinions about the form of education and exams, the connection between the evaluations and exam results, and other factors that influence student success. Given these difficulties, and the fact that insight into all of the aspects mentioned above is crucial for both students and teachers, and not least for management, a new approach to evaluation is needed: an evaluation system that can uncover crucial information, for example to pinpoint the characteristics of dropouts or long-term students in order to limit their numbers and/or improve the education and courses. This paper describes a pilot study in which a first step towards a new way of evaluating is taken by separating the course and teacher evaluations from the rest of the surveys, using an app/QR code or website.
Furthermore, the literature on in-class and out-of-class surveys and student success serves as a theoretical basis for the discussion of this pilot, which is part of a broader PhD research project.
While traditional crime rates are decreasing, cybercrime is on the rise. As a result, the criminal justice system is increasingly dealing with criminals committing cyber-dependent crimes. However, to date there are no effective interventions to prevent recidivism in this type of offender. Dutch authorities have developed an intervention program called Hack_Right. Hack_Right is an alternative criminal justice program for young first-time offenders of cyber-dependent crimes. In order to prevent recidivism, this program places participants in organizations where they are taught about ethical hacking, complete (technical) assignments, and reflect on their offense. In this study, we evaluated the Hack_Right program and the pilot interventions carried out thus far. By examining the program theory (program evaluation) and the implementation of the intervention (process evaluation), the study adds to the scarce literature on cybercrime interventions. Two qualitative research methods were applied: 1) document analysis and 2) interviews with intervention developers, imposers, implementers and participants. In addition to the observation that the scientific basis for linking specific criminogenic factors to cybercriminals is still fragile, the article concludes that the theoretical basis and program integrity of Hack_Right need to be developed further in order to adhere to the principles of effective interventions.
Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there has been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how. This systematic literature review addresses this gap by providing a detailed overview of the current state of affairs in human-centered XAI evaluation. We reviewed 73 papers across various domains where XAI was evaluated with users. These studies assessed what makes an explanation "good" from a user's perspective, i.e., what makes an explanation meaningful to a user of an AI system. We identified 30 components of meaningful explanations that were evaluated in the reviewed papers and categorized them into a taxonomy of human-centered XAI evaluation, based on: (a) the contextualized quality of the explanation, (b) the contribution of the explanation to human-AI interaction, and (c) the contribution of the explanation to human-AI performance. Our analysis also revealed a lack of standardization in the methodologies applied in XAI user studies, with only 19 of the 73 papers applying an evaluation framework used by at least one other study in the sample. These inconsistencies hinder cross-study comparisons and broader insights. Our findings contribute to understanding what makes explanations meaningful to users and how to measure this, guiding the XAI community toward a more unified approach in human-centered explainability.
Learning activities in a makerspace are hands-on and characterized by design and inquiry. Evaluation is needed both for learners and their coaches, in order to guide the children's learning process effectively and to provide feedback on the effectiveness of the after-school maker activities. Due to its constructionist nature, learning in a makerspace requires specific forms of evaluation. In this paper we describe the development of an instrument that facilitates and captures reflection on the activities that children undertook in a library makerspace. Our aim is to capture learning in this context with multiple instruments: analysis of the artifacts that are made, observation of hands-on activities, and interviews, all of which are time-consuming methods. Hence, we developed an easy-to-use tool for children's self-evaluation of maker learning activities. We build on the design of a visual instrument used for learning by design and inquiry in primary education. The findings and results are transferable to (formative) assessment and evaluation of learning activities in other types of education, and specifically in maker education.
Poster presentation introducing the research question: "How can developmental evaluation contribute to making DBE sustainable while doing justice to the characteristic aspects of DBE as an educational concept?"
Background: The Nurses in the Lead (NitL) programme consists of a systematic approach and training to 1) empower community nurses in implementing evidence, targeted at encouraging functional activities of older adults, and 2) train community nurses in enabling team members to change their practice. This article aims to describe the process evaluation of NitL. Methods: A mixed-methods formative process evaluation with a predominantly qualitative approach was conducted. Qualitative data were collected through interviews with community nurses (n = 7), focus groups with team members (n = 31), and a review of seven implementation plans and 28 patient records. Quantitative data were collected among community nurses and team members (N = 90) using a questionnaire assessing barriers to encouraging functional activities, and attendance lists. Data analysis was carried out through descriptive statistics and content analysis. Results: NitL was largely executed according to plan. Points of attention were the use and value of the background theory within the training, the completion of implementation plans, and reporting in patient records by community nurses. Inhibiting factors for showing leadership and encouraging functional activities were a lack of time and a high complexity of care; facilitating factors were structure and clear communication within teams. Nurses considered the systematic approach useful and the training educational for their role. Most team members considered NitL practical and were satisfied with the coaching provided by community nurses. To optimise NitL, community nurses recommended providing the training first and extending the training. The team members recommended continuing the clinical lessons, which were an implementation strategy used by the community nurses. Conclusions: NitL was largely executed as planned, and appears worthy of further application in community care practice. However, adaptations are recommended to make NitL more effective in empowering community nurse leadership in implementing evidence.
Living labs are complex multi-stakeholder collaborations that often employ a user-centred and design-driven methodology to foster innovation. Conventional management tools fall short in evaluating them. However, some methods and tools dedicated to living labs' special characteristics and goals have already been developed, although most of them are still in their testing phase. These tools are not easily accessible and can only be found in extensive research reports, which are difficult to dissect. Therefore, this paper reviews seven evaluation methods and tools specially developed for living labs. Each section of this paper is structured in the following manner: an introduction to the tool (1), who uses the tool (2), and how it should be used (3). While the first set of tools, namely "ENoLL 20 Indicators", "SISCODE Self-assessment", and "SCIROCCO Exchange Tool", assess a living lab as an organisation, diving deeper into its organisational activities and complex context, the second set of methods and tools, "FormIT" and "Living Lab Markers", evaluates living labs' methodologies: the process they use to arrive at innovations. The paper's final section presents the "CheRRIes Monitoring and Evaluation Tool" and the "TALIA Indicator for Benchmarking Service for Regions", which assess the regional impact made by living labs. As every living lab differs in its maturity (as an organisation and in its methodology) and in the scope of impact it wants to make, the most crucial decision when evaluating is determining the focus of the assessment. This overview offers a first orientation on worked-out methods and on possible indicators to use. It also concludes that the existing tools are quite managerial in their method and aesthetics, and calls for designers and social scientists to develop more playful, engaging and (possibly) learning-oriented tools for evaluating living labs in the future.
Cooperatives are special because their members not only own the cooperative but also patronize it. A CEO's decisions have an impact on the members' overall interests. Understanding how CEOs differ from members in their evaluations of cooperative performance, and what causes these differences, is valuable for CEOs seeking to best serve the members. This paper evaluates the differences between CEO and member evaluations of their cooperatives, and further examines the role of governance in predicting the evaluations and the differences between them, based on a set of first-hand data on Chinese agricultural cooperatives (240 CEOs and 543 members). Cooperative performance is measured by three indicators: member profitability, social influence in the local community, and overall performance. The results show that members give higher scores than CEOs on member profitability and overall performance, while CEOs give a higher evaluation of social influence. This is an Accepted Manuscript of an article published by Taylor & Francis in 'The Social Science Journal' on 27 Jan. 2020, available online: https://www.tandfonline.com/doi/abs/10.1016/j.soscij.2019.01.006.