Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there’s been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how. This systematic literature review addresses this gap by providing a detailed overview of the current state of affairs in human-centered XAI evaluation. We reviewed 73 papers across various domains where XAI was evaluated with users. These studies assessed what makes an explanation “good” from a user’s perspective, i.e., what makes an explanation meaningful to a user of an AI system. We identified 30 components of meaningful explanations that were evaluated in the reviewed papers and categorized them into a taxonomy of human-centered XAI evaluation, based on: (a) the contextualized quality of the explanation, (b) the contribution of the explanation to human-AI interaction, and (c) the contribution of the explanation to human-AI performance. Our analysis also revealed a lack of standardization in the methodologies applied in XAI user studies, with only 19 of the 73 papers applying an evaluation framework used by at least one other study in the sample. These inconsistencies hinder cross-study comparisons and broader insights. Our findings contribute to understanding what makes explanations meaningful to users and how to measure this, guiding the XAI community toward a more unified approach in human-centered explainability.
It is now widely accepted that decisions made by AI systems must be explainable to their users. However, in practice, it often remains unclear how this explainability should be concretely implemented. This is especially important for non-technical users, such as claims assessors at insurance companies, who need to understand AI system decisions and be able to explain them to customers. Think, for example, of explaining a rejected insurance claim or loan application. Although the importance of explainable AI is broadly recognized, there is often a lack of practical tools to achieve it. That’s why, in this handbook, we have combined insights from two use cases in the financial sector with findings from an extensive literature review. This has led to the identification of 30 key aspects of meaningful AI explanations. Based on these aspects, we developed a checklist to help AI developers make their systems more explainable. The checklist not only provides insight into how understandable an AI application currently is for end users, but also highlights areas for improvement.
The transition from adolescence to adulthood has also been described as a window of opportunity or vulnerability, when developmental and contextual changes converge to support positive turnarounds and redirections (Masten, Long, Kuo, McCormick, & Desjardins, 2009; Masten, Obradović, & Burt, 2006). The transition years are also a criminological crossroads, as major changes in criminal careers often occur at these ages as well. For some who began their criminal careers during adolescence, offending continues and escalates; for others, involvement in crime wanes; and still others only begin serious involvement in crime at these ages. There are distinctive patterns of offending that emerge during the transition from adolescence to adulthood. One shows a rise of offending in adolescence and the persistence of high crime rates into adulthood; a second reflects the overall age-crime curve pattern of increasing offending in adolescence followed by decreases during the transition years; and the third group shows a late onset of offending relative to the age-crime curve. Developmental theories of offending ought to be able to explain these markedly different trajectories.
One aspect of a responsible application of Artificial Intelligence (AI) is ensuring that the operation and outputs of an AI system are understandable for non-technical users, who need to consider its recommendations in their decision making. The importance of explainable AI (XAI) is widely acknowledged; however, its practical implementation is not straightforward. In particular, it is still unclear what non-technical users require from explanations, i.e., what makes an explanation meaningful. In this paper, we synthesize insights on meaningful explanations from a literature study and two use cases in the financial sector. We identified 30 components of meaningfulness in XAI literature. In addition, we report three themes associated with explanation needs that were central to the users in our use cases, but are not prominently described in literature: actionability, coherent narratives, and context. Our results highlight the importance of narrowing the gap between theoretical and applied responsible AI.
This white paper is the result of a research project by Hogeschool Utrecht, Floryn, Researchable, and De Volksbank in the period November 2021-November 2022. The research project was a KIEM project granted by the Taskforce for Applied Research SIA. The goal of the research project was to identify the aspects that play a role in the implementation of the explainability of artificial intelligence (AI) systems in the Dutch financial sector. In this white paper, we present a checklist of the aspects that we derived from this research. The checklist contains checkpoints and related questions that need consideration to make explainability-related choices in different stages of the AI lifecycle. The goal of the checklist is to give designers and developers of AI systems a tool to ensure the AI system will give proper and meaningful explanations to each stakeholder.
There is mounting evidence that efforts to mitigate the adverse effects of human activity on climate and biodiversity have so far been unsuccessful. Explanations for this failure point to a number of factors discussed in this article. While acknowledging cognitive dissonance as a significant contributing factor to continuing unsustainable practices, this article seeks to explore the hegemonic rationality of industrial expansion and economic growth and the resulting politics of denial. These politics promote the economic rationale for exploitation of the environment, with the pursuit of material wealth seen as the most rational goal. Framed this way, this rationality is presented by political and corporate decision-makers as common sense, and continued environmentally destructive behavior is justified under the guise of consumer choices, hampering meaningful action for sustainable change. This article underlines forms of alternative rationality, namely a non-utilitarian and non-hierarchical worldview of environmental and human flourishing, that can advance sustainability.
This guide was developed for designers and developers of AI systems, with the goal of ensuring that these systems are sufficiently explainable. Sufficient here means that the system meets the legal requirements of the AI Act and the GDPR, and that users can use it properly. Explainability of decisions is an important requirement in many systems and even an important principle for AI systems [HLEG19]. In many AI systems, explainability is not self-evident. AI researchers expect that the challenge of making AI explainable will only increase. On the one hand, this is driven by the applications: AI will be used increasingly often, for larger and more sensitive decisions. On the other hand, organizations are building increasingly sophisticated models, for example, by using more different inputs. With more complex AI models, it is often less clear how a decision was made. Organizations that deploy AI must take users' need for explanations into account. Systems that use AI should be designed to provide the user with appropriate explanations. In this guide, we first explain the legal requirements for explainability of AI systems. These come from the GDPR and the AI Act. Next, we explain how AI is used in the financial sector and elaborate on one problem in detail. For this problem, we then show how the user interface can be modified to make the AI explainable. These designs serve as prototypical examples that can be adapted to new problems. This guide focuses on the explainability of AI systems in the financial sector; however, the advice can also be used in other sectors.
Algorithms that significantly impact individuals and society should be transparent, yet they can often function as complex black boxes. Such high-risk AI systems necessitate explainability of their inner workings and decision-making processes, which is also crucial for fostering trust, understanding, and adoption of AI. Explainability is a major topic, not only in literature (Maslej et al. 2024) but also in AI regulation. The EU AI Act imposes explainability requirements on providers and deployers of high-risk AI systems. Additionally, it grants the right to explanation for individuals affected by high-risk AI systems. However, legal literature illustrates a lack of clarity and consensus regarding the definition of explainability and the interpretation of the relevant obligations of the AI Act (see, e.g., Bibal et al. 2021; Nannini 2024; Sovrano et al. 2022). The practical implementation also presents further challenges, calling for an interdisciplinary approach (Gyevnar, Ferguson, and Schafer 2023; Nahar et al. 2024, 2110). Explainability can be examined from various perspectives. One such perspective concerns a functional approach, where explanations serve specific functions (Hacker and Passoth 2022). Looking at this functional perspective of explanations, my previous work elaborates on the central functions of explanations interwoven in the AI Act.
Through comparative research on the evolution of the explainability provisions in soft and hard law on AI from the High-Level Expert Group on AI, Council of Europe, and OECD, my previous research establishes that explanations in the AI Act primarily serve to provide understanding of the inner workings and output of an AI system, to enable contestation of a decision, to increase usability, and to achieve legal compliance (Van Beem, ongoing work, paper presented at Bileta 2025 conference; submission expected June 2025). Moreover, my previous work reveals that the AI lifecycle is an important concept in AI policy and legal documents. The AI lifecycle includes the phases that lead to the design, development, and deployment of an AI system (Silva and Alahakoon 2022). The AI Act requires various explanations in each phase. The provider and deployer shall observe an explainability-by-design-and-development approach throughout the entire AI lifecycle, adapting explanations as the AI system evolves. However, the practical side of balancing clear, meaningful, legally compliant explanations against technical explanations proves challenging. Assessing this practical side, my current research is a case study in the agricultural sector, where AI plays an increasing role and where explainability is a necessary ingredient for adoption (EPRS 2023). The case study aims to map which legal issues AI providers, deployers, and other AI experts in field crop farming encounter. Secondly, the study explores the role of explainability (and the field of eXplainable AI) in overcoming such legal challenges. The study is conducted through further doctrinal research, case law analysis, and empirical research using interviews, integrating the legal and technical perspectives. Aiming to enhance the trustworthiness and adoption of AI in agriculture, this research seeks to contribute to an interdisciplinary debate regarding the practical application of the AI Act's explainability obligations.
Background: People with severe mental illnesses (SMIs) have difficulty participating in society through work or other daily activities. Aims: To establish the effectiveness with which the Boston University Approach to Psychiatric Rehabilitation (BPR) improves the level of social participation in people with SMIs, in the Netherlands. Method: In a randomized controlled trial involving 188 people with SMIs, we compared BPR (n = 98) with an Active Control Condition (ACC, n = 90) (Trial registration ISRCTN88987322). Multilevel modeling was used to study intervention effects over two six-month periods. The primary outcome measure was level of social participation, expressed as having participated in paid or unpaid employment over the past six months, as the total hours spent in paid or unpaid employment, and as the current level of social participation. Secondary outcome measures were clients’ views on rehabilitation goal attainment, Quality of Life (QOL), personal recovery, self-efficacy, and psychosocial functioning. Results: During the study, social participation, QOL, and psychosocial functioning improved in patients in both groups. However, BPR was not more effective than ACC on any of the outcomes. Better social participation was predicted by previous work experience and a lower intensity of psychiatric symptoms. Conclusions: While ACC was as effective as BPR in improving the social participation of individuals with SMIs, much higher percentages of participants in our sample found (paid) work or other meaningful activities than in observational studies without specific support for social participation. This suggests that focused rehabilitation efforts are beneficial, irrespective of the specific methodology used.