Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there has been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how. This systematic literature review addresses this gap by providing a detailed overview of the current state of affairs in human-centered XAI evaluation. We reviewed 73 papers across various domains where XAI was evaluated with users. These studies assessed what makes an explanation “good” from a user’s perspective, i.e., what makes an explanation meaningful to a user of an AI system. We identified 30 components of meaningful explanations that were evaluated in the reviewed papers and categorized them into a taxonomy of human-centered XAI evaluation, based on: (a) the contextualized quality of the explanation, (b) the contribution of the explanation to human-AI interaction, and (c) the contribution of the explanation to human-AI performance. Our analysis also revealed a lack of standardization in the methodologies applied in XAI user studies, with only 19 of the 73 papers applying an evaluation framework used by at least one other study in the sample. These inconsistencies hinder cross-study comparisons and broader insights. Our findings contribute to understanding what makes explanations meaningful to users and how to measure this, guiding the XAI community toward a more unified approach in human-centered explainability.
For people with early-stage dementia (PwD), it can be challenging to remember to eat and drink regularly and to maintain healthy, independent living. Existing intelligent home technologies primarily focus on activity recognition but lack adaptive support. This research addresses this gap by developing an AI system inspired by the Just-in-Time Adaptive Intervention (JITAI) concept. It adapts to individual behaviors and provides personalized interventions within the home environment, reminding and encouraging PwD to manage their eating and drinking routines. Considering the cognitive impairment of PwD, we design a human-centered AI system based on healthcare theories and caregivers’ insights. It employs reinforcement learning (RL) techniques to deliver personalized interventions. To avoid overwhelming PwD with interactions, we develop an RL-based simulation protocol. This allows us to evaluate different RL algorithms in various simulation scenarios, not only finding the most effective and efficient approach but also validating the robustness of our system before implementation in real-world human experiments. The simulation results demonstrate the promising potential of adaptive RL for building a human-centered AI system with perceived expressions of empathy to improve dementia care. To further evaluate the system, we plan to conduct real-world user studies.
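The simulation protocol described above can be illustrated with a minimal sketch: an RL agent (tabular Q-learning here, as one plausible choice) learns which intervention to deliver, while a toy simulated persona stands in for the person with dementia so that no real user is burdened during algorithm comparison. All names, states, and reward values below are illustrative assumptions, not the paper's actual design.

```python
import random

# Hypothetical intervention actions and meal-related states (illustrative only).
ACTIONS = ["no_prompt", "gentle_reminder", "encouraging_message"]
STATES = ["recently_ate", "due_soon", "overdue"]

def simulated_persona(state, action):
    """Toy response model: prompts raise the chance of eating when a meal is due,
    but prompting someone who just ate is intrusive and penalized."""
    if state == "recently_ate":
        reward = -1.0 if action != "no_prompt" else 0.0
        return reward, "due_soon"  # hunger progresses over time
    base = {"due_soon": 0.3, "overdue": 0.5}[state]
    boost = {"no_prompt": 0.0, "gentle_reminder": 0.2, "encouraging_message": 0.4}[action]
    ate = random.random() < base + boost
    reward = 1.0 if ate else -0.1
    next_state = "recently_ate" if ate else "overdue"
    return reward, next_state

def train(episodes=2000, steps=10, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Epsilon-greedy tabular Q-learning against the simulated persona."""
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        state = random.choice(STATES)
        for _ in range(steps):  # decision points per simulated day
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            reward, next_state = simulated_persona(state, action)
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

random.seed(0)
q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

Under this toy model the agent learns to stay silent right after a meal and to intervene when one is due, which is the kind of behavior the simulation protocol lets researchers verify before any real-world deployment.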
Poster for the EuSoMII Annual Meeting in Pisa, Italy, in October 2023. PURPOSE & LEARNING OBJECTIVE Artificial Intelligence (AI) technologies are gaining popularity for their ability to autonomously perform tasks and mimic human reasoning [1, 2]. Within the medical industry in particular, the implementation of AI solutions is accelerating [3]. However, the field of radiology has not yet been transformed by the promised value of AI, as knowledge on the effective use and implementation of AI lags behind due to a number of causes: 1) reactive/passive modes of learning are dominant; 2) existing developments are fragmented; 3) expertise is lacking and perspectives differ; 4) an effective learning space is missing. Learning communities can help overcome these problems and address the complexities that come with human-technology configurations [4]. As the impact of a technology depends on its social management and implementation processes [5], our research question becomes: how do we design, configure, and manage a Learning Community to maximize the impact of AI solutions in medicine?
The Dutch hospitality sector is the 8th largest contributor to GDP, employing over 500,000 people, yet it remains heavily reliant on manual processes and human labor for service delivery. Structural staff shortages, rising labor costs, and increasing operational demands are pushing the industry to its limits. Hotels and restaurants, the backbone of this sector, are struggling with operational inefficiencies, high staff turnover, and the growing difficulty of maintaining high service standards. An overhaul of the traditional hospitality model is necessary to unlock sustainable growth. The Embrace IT project provides a structured, collaborative approach to solving these pressing challenges. Focusing on three critical areas (housekeeping services, food services, and reception services), the project will co-create concrete, tech-driven solutions together with hospitality businesses, technology providers, and knowledge institutions. These areas represent key operational cost drivers and are vital to revenue generation, making them priorities for industry leaders. By developing technology that complements human labor, the project ensures that operational efficiency improves while enhancing worker well-being and the hospitality experience. Over four years, Embrace IT will establish a sustainable innovation ecosystem within the hospitality sector. Through iterative co-creation and field testing of automation, AI, and immersive technologies, the project will equip businesses with the tools and structures to shift from short-term, reactive strategies to long-term, sustainable digital transformation. Moving beyond the current "sensing" phase, where businesses recognize technological trends but are hesitant to act, Embrace IT will deliver concrete and scalable solutions that foster industry-wide adoption.
Embrace IT aligns with key sector policy documents such as the 2024 Digital Destinations strategy from the Netherlands Board of Tourism & Conventions (NBTC), ensuring direct support of the broader vision for digital transformation of Dutch hospitality. The project will increase the sector's productivity while improving working conditions and enhancing the hospitality experience, ensuring lasting societal impact.
More and more organizations consider it important to develop 'ethically responsible' AI applications. But what exactly is ethically responsible? And how do you design AI systems that comply with ethical guidelines? In this cooperative game, players jointly design ethically responsible AI applications based on the ethical principles drawn up by the EU.
Goal
In the game, the players are approached by the startup 'Ethics Inc.' to develop a trustworthy AI application that complies with European guidelines. Together, the players form a think tank to advise the startup on how best to develop this application. The game ends once consensus has been reached after one or more rounds. Together, the players translate abstract values into a specific user context. The game is based on the EU Ethics Guidelines for Trustworthy AI, which also underpins the upcoming EU legislation on AI.
Results
A cooperative game in which ethically responsible AI applications are designed together (3 to 8 players, 75 minutes). A physical copy can be ordered by sending an email to info@stt.nl. The cards can also be downloaded as a PDF to print yourself. A digital version of the game has also been created and is currently being tested and further developed. The thinking behind the design game is explored in more depth in the paper An Agile Framework for Trustworthy AI, which was accepted for the ECAI-2020 workshop "New Foundations for Human-Centered AI".
Duration
01 March 2020 to 31 December 2022
Approach
The Netherlands Study Centre for Technology Trends (Stichting Toekomstbeeld der Techniek, STT) developed the physical game together with the Artificial Intelligence research group of HU University of Applied Sciences Utrecht, with a graphic designer creating an appealing visual style. The Royal Netherlands Standardization Institute (NEN) is also involved as a partner.
From 2021 onward, the project focuses on a digital version of the game, which allows it to be used more broadly in practice. Impact for education: for students, Ethics Inc. is a tool to design AI applications that incorporate the European AI guidelines from the very start (ethics by design). Pilots are running at several degree programmes to test the use of the game there as well.