Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there has been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how. This systematic literature review addresses this gap by providing a detailed overview of the current state of affairs in human-centered XAI evaluation. We reviewed 73 papers across various domains where XAI was evaluated with users. These studies assessed what makes an explanation “good” from a user’s perspective, i.e., what makes an explanation meaningful to a user of an AI system. We identified 30 components of meaningful explanations that were evaluated in the reviewed papers and categorized them into a taxonomy of human-centered XAI evaluation, based on: (a) the contextualized quality of the explanation, (b) the contribution of the explanation to human-AI interaction, and (c) the contribution of the explanation to human-AI performance. Our analysis also revealed a lack of standardization in the methodologies applied in XAI user studies, with only 19 of the 73 papers applying an evaluation framework used by at least one other study in the sample. These inconsistencies hinder cross-study comparisons and broader insights. Our findings contribute to understanding what makes explanations meaningful to users and how to measure this, guiding the XAI community toward a more unified approach in human-centered explainability.
For people with early-stage dementia (PwD), it can be challenging to remember to eat and drink regularly and to maintain healthy, independent living. Existing intelligent home technologies primarily focus on activity recognition but lack adaptive support. This research addresses this gap by developing an AI system inspired by the Just-in-Time Adaptive Intervention (JITAI) concept. It adapts to individual behaviors and provides personalized interventions within the home environment, reminding and encouraging PwD to manage their eating and drinking routines. Considering the cognitive impairment of PwD, we design a human-centered AI system grounded in healthcare theories and caregivers’ insights. It employs reinforcement learning (RL) techniques to deliver personalized interventions. To avoid overwhelming PwD with interactions, we develop an RL-based simulation protocol. This allows us to evaluate different RL algorithms in various simulation scenarios, not only identifying the most effective and efficient approach but also validating the robustness of our system before implementation in real-world human experiments. The simulation results demonstrate the promising potential of adaptive RL for building a human-centered AI system with perceived expressions of empathy to improve dementia care. To further evaluate the system, we plan to conduct real-world user studies.
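The core idea of the abstract above, training an RL agent against a simulated user model before any real-world deployment, can be illustrated with a minimal sketch. This is not the authors' code: the user dynamics, rewards, and all names here are hypothetical assumptions chosen only to show how a simulation protocol lets an agent learn when a prompt is helpful rather than overwhelming.

```python
import random

class SimulatedUser:
    """Toy stand-in for a person with dementia: in each episode the user
    either needs a reminder or not; a well-timed prompt helps, an
    unnecessary one carries a small penalty (risk of overwhelming)."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def reset(self):
        self.needs_prompt = self.rng.random() < 0.5
        return int(self.needs_prompt)  # observable state: 0 or 1

    def step(self, action):  # action: 0 = wait, 1 = prompt
        if action == 1 and self.needs_prompt:
            return 1.0    # helpful, well-timed reminder
        if action == 1:
            return -0.5   # unnecessary prompt: intrusive
        return -0.2 if self.needs_prompt else 0.0  # missed need vs. quiet

def q_learning(episodes=2000, alpha=0.1, seed=0):
    """Bandit-style Q-learning with decaying epsilon-greedy exploration."""
    env = SimulatedUser(seed)
    rng = random.Random(seed + 1)
    q = [[0.0, 0.0], [0.0, 0.0]]  # q[state][action]
    for ep in range(episodes):
        s = env.reset()
        eps = max(0.05, 1.0 - ep / 1000)  # explore early, exploit later
        if rng.random() < eps:
            a = rng.randrange(2)
        else:
            a = max((0, 1), key=lambda x: q[s][x])
        r = env.step(a)
        q[s][a] += alpha * (r - q[s][a])  # move estimate toward reward
    return q

q = q_learning()
# The learned policy prompts only when the simulated user needs a
# reminder: prompting dominates in state 1, waiting dominates in state 0.
```

In the same spirit as the abstract's protocol, different algorithms or reward designs could be swapped into this loop and compared across simulated scenarios before involving human participants.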
Home care patients often use many medications and are prone to drug-related problems (DRPs). In managing problems related to drug use, home care can add to the multidisciplinary expertise of general practitioners (GPs) and pharmacists. The home care observation of medication-related problems by home care employees (HOME) instrument is paper-based and assists home care workers in reporting potential DRPs. To facilitate multiprofessional consultation, a digital report of DRPs from the HOME-instrument, together with digital monitoring and consultation of DRPs between home care, general practices, and pharmacies, is desired. The objective of this study was to develop an electronic HOME system (eHOME), a mobile version of the HOME-instrument that includes a monitoring and a consulting system for primary care.
Introduction: In March 2014, the New South Wales (NSW) Government (Australia) announced the NSW Integrated Care Strategy. In response, a family-centred, population-based, integrated care initiative for vulnerable families and their children in Sydney, Australia was developed. The initiative was called Healthy Homes and Neighbourhoods. A realist translational social epidemiology programme of research and collaborative design is at the foundation of its evaluation. Theory and Method: The UK Medical Research Council (MRC) Framework for evaluating complex health interventions was adapted. This has four components, namely 1) development, 2) feasibility/piloting, 3) evaluation and 4) implementation. We adapted the Framework to include critical realist, theory-driven, and continuous improvement approaches. The modified Framework underpins this research and evaluation protocol for Healthy Homes and Neighbourhoods. Discussion: The NSW Health Monitoring and Evaluation Framework did not make provision for assessing the programme's layers of context or the effect of programme mechanisms at each level. We therefore developed a multilevel approach that uses mixed-method research to examine not only outcomes, but also what is working, for whom, and why.
Technology in general, and assistive technology in particular, is considered a promising opportunity to address the challenges of an aging population. Nevertheless, in health care, technology is not as widely used as might be expected. In this chapter, an overview is given of theories and models that help to understand this phenomenon. First, the design of (assistive) technologies will be addressed and the importance of human-centered design in the development of new assistive devices will be discussed. Theories and models of technology acceptance in general will also be addressed. Specific attention will be given to technology acceptance among healthcare professionals and to the implementation of technology within healthcare organizations. The chapter is based on the state of the art of the scientific literature and is illustrated with examples from our research in daily practice, considering the different perspectives of the stakeholders involved.
One aspect of a responsible application of Artificial Intelligence (AI) is ensuring that the operation and outputs of an AI system are understandable for non-technical users, who need to consider its recommendations in their decision making. The importance of explainable AI (XAI) is widely acknowledged; however, its practical implementation is not straightforward. In particular, it is still unclear what non-technical users require from explanations, i.e. what makes an explanation meaningful. In this paper, we synthesize insights on meaningful explanations from a literature study and two use cases in the financial sector. We identified 30 components of meaningfulness in the XAI literature. In addition, we report three themes associated with explanation needs that were central to the users in our use cases but are not prominently described in the literature: actionability, coherent narratives and context. Our results highlight the importance of narrowing the gap between theoretical and applied responsible AI.
With artificial intelligence (AI) systems entering our working and leisure environments with increasing adaptation and learning capabilities, new opportunities arise for developing hybrid (human-AI) intelligence (HI) systems, comprising new ways of collaboration. However, there is not yet a structured way of specifying design solutions for collaboration in HI systems, and there is a lack of best practices shared across application domains. We address this gap by investigating the generalization of specific design solutions into design patterns that can be shared and applied in different contexts. We present a human-centered, bottom-up approach for the specification of design solutions and their abstraction into team design patterns. We apply the proposed approach to four concrete HI use cases and show the successful extraction of team design patterns that are generalizable, providing reusable design components across various domains. This work advances previous research on team design patterns and on designing applications of HI systems.
While traditional crime rates are decreasing, cybercrime is on the rise. As a result, the criminal justice system is increasingly dealing with criminals committing cyber-dependent crimes. However, to date there are no effective interventions to prevent recidivism in this type of offender. Dutch authorities have developed an intervention program called Hack_Right, an alternative criminal justice program for young first-time offenders of cyber-dependent crimes. In order to prevent recidivism, this program places participants in organizations where they are taught about ethical hacking, complete (technical) assignments and reflect on their offense. In this study, we evaluated the Hack_Right program and the pilot interventions carried out thus far. By examining the program theory (program evaluation) and the implementation of the intervention (process evaluation), the study adds to the scarce literature on cybercrime interventions. Two qualitative research methods were applied: 1) document analysis and 2) interviews with intervention developers, imposers, implementers and participants. In addition to the observation that the scientific basis for linking specific criminogenic factors to cybercriminals is still fragile, the article concludes that the theoretical base and program integrity of Hack_Right need to be further developed in order to adhere to principles of effective interventions.
Office well-being research aims to explore and support a healthy, balanced and active work style in office environments. Recent work on tangible user interfaces has started to explore the role of physical, tangible interfaces as active interventions to tackle problems such as inactive work styles and increasingly sedentary behaviour. We identify a fragmented research landscape on tangible office well-being interventions, one that misses the relationships between interventions, data, design strategies, outcomes, and behaviour change techniques. Based on the analysis of 40 papers, we identify seven classifications of tangible office well-being interventions and analyse the interventions based on their role in, and foundation in, behaviour change. From this analysis, we present design considerations for the development of future tangible office well-being interventions, together with an overview of the current field and of future research directions toward designing a healthier and more active office environment.
From the article: The ethics guidelines put forward by the AI High Level Expert Group (AI-HLEG) present a list of seven key requirements that Human-centered, trustworthy AI systems should meet. These guidelines are useful for the evaluation of AI systems, but can be complemented by applied methods and tools for the development of trustworthy AI systems in practice. In this position paper we propose a framework for translating the AI-HLEG ethics guidelines into the specific context within which an AI system operates. This approach aligns well with a set of Agile principles commonly employed in software engineering. http://ceur-ws.org/Vol-2659/