Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there has been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how. This systematic literature review addresses this gap by providing a detailed overview of the current state of human-centered XAI evaluation. We reviewed 73 papers across various domains in which XAI was evaluated with users. These studies assessed what makes an explanation “good” from a user’s perspective, i.e., what makes an explanation meaningful to a user of an AI system. We identified 30 components of meaningful explanations that were evaluated in the reviewed papers and categorized them into a taxonomy of human-centered XAI evaluation based on: (a) the contextualized quality of the explanation, (b) the contribution of the explanation to human-AI interaction, and (c) the contribution of the explanation to human-AI performance. Our analysis also revealed a lack of standardization in the methodologies applied in XAI user studies, with only 19 of the 73 papers applying an evaluation framework used by at least one other study in the sample. These inconsistencies hinder cross-study comparisons and broader insights. Our findings contribute to understanding what makes explanations meaningful to users and how to measure this, guiding the XAI community toward a more unified approach to human-centered explainability.
The increasing use of AI in industry and society not only expects but demands that we build human-centred competencies into our AI education programmes. The computing education community needs to adapt, and while the adoption of standalone ethics modules into AI programmes or the inclusion of ethical content in traditional applied AI modules is progressing, it is not enough. To foster student competencies to create AI innovations that respect and support the protection of individual rights and society, a novel ground-up approach is needed. This panel presents one such approach, the development of a Human-Centred AI Masters (HCAIM), as well as the insights and lessons learned from the process. In particular, we discuss the design decisions that led to the multi-institutional master’s programme. Moreover, this panel allows for discussion of pedagogical and methodological approaches, content knowledge areas and the delivery of such a novel programme, along with the challenges faced, to inform and learn from other educators who are considering developing such programmes.
As AI systems become increasingly prevalent in our daily lives and work, it is essential to contemplate their social role and how they interact with us. While functionality, and increasingly explainability and trustworthiness, are often the primary focus in designing AI systems, little consideration is given to their social role and its effects on human-AI interactions. In this paper, we advocate for paying attention to social roles in AI design. We focus on an AI healthcare application and present three possible social roles of the AI system within it, exploring the relationship between the AI system and the user and its implications for designers and practitioners. Our findings emphasise the need to think beyond functionality and highlight the importance of considering the social role of AI systems in shaping meaningful human-AI interactions.
Trust in AI is crucial for effective and responsible use in high-stakes sectors like healthcare and finance. One of the most commonly used techniques to mitigate mistrust in AI and even increase trust is the use of Explainable AI models, which enables human understanding of certain decisions made by AI-based systems. Interaction design, the practice of designing interactive systems, plays an important role in promoting trust by improving explainability, interpretability, and transparency, ultimately enabling users to feel more in control and confident in the system’s decisions. This paper introduces, based on an empirical study with experts from various fields, the concept of Explanation Stream Patterns: interaction patterns that structure and organize the flow of explanations in decision support systems. Explanation Stream Patterns formalize explanation streams by incorporating procedures such as progressive disclosure of explanations or more deliberate interaction with explanations through cognitive forcing functions. We argue that well-defined Explanation Stream Patterns provide practical tools for designing interactive systems that enhance human-AI decision-making.
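The abstract above names progressive disclosure and cognitive forcing functions as ingredients of Explanation Stream Patterns. As a purely illustrative sketch (the paper describes design-level interaction patterns, not a code API; the class, method names, and example explanation texts below are all invented), one way such a stream could be encoded is as ordered explanation layers that only unlock after a deliberate user action:

```python
from dataclasses import dataclass, field

@dataclass
class ExplanationStream:
    """Hypothetical encoding of a progressive-disclosure explanation stream."""
    layers: list          # explanation fragments, ordered shallow to deep
    unlocked: int = 1     # only the first layer is visible initially

    def visible(self):
        return self.layers[: self.unlocked]

    def request_more(self, user_confirmed: bool):
        # Cognitive forcing function: deeper detail is revealed only after
        # an explicit user action, never automatically.
        if user_confirmed and self.unlocked < len(self.layers):
            self.unlocked += 1
        return self.visible()

# Invented example content for a hypothetical loan-decision support system.
stream = ExplanationStream([
    "Loan denied: income-to-debt ratio below threshold.",
    "Top factors: debt ratio (45%), credit history length (30%).",
    "Counterfactual: approval if monthly debt were 150 EUR lower.",
])
```

The design choice the sketch illustrates is that the stream, not the caller, owns the disclosure state, so an interface cannot skip the forcing step.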
For people with early dementia (PwD), it can be challenging to remember to eat and drink regularly and to maintain healthy independent living. Existing intelligent home technologies primarily focus on activity recognition but lack adaptive support. This research addresses this gap by developing an AI system inspired by the Just-in-Time Adaptive Intervention (JITAI) concept. It adapts to individual behaviors and provides personalized interventions within the home environment, reminding and encouraging PwD to manage their eating and drinking routines. Considering the cognitive impairment of PwD, we design a human-centered AI system based on healthcare theories and caregivers’ insights. It employs reinforcement learning (RL) techniques to deliver personalized interventions. To avoid overwhelming interaction with PwD, we develop an RL-based simulation protocol. This allows us to evaluate different RL algorithms in various simulation scenarios, not only finding the most effective and efficient approach but also validating the robustness of our system before implementation in real-world human experiments. The simulation results demonstrate the promising potential of adaptive RL for building a human-centered AI system with perceived expressions of empathy to improve dementia care. To further evaluate the system, we plan to conduct real-world user studies.
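The core idea above, i.e. tuning an RL intervention policy against a simulated user before exposing real PwD to it, can be sketched in miniature. This is a toy illustration only: the states, actions, reward model, and bandit-style Q-learning update below are invented for the example and do not reproduce the paper's actual protocol or algorithms:

```python
import random

STATES = ["morning", "noon", "evening"]   # hypothetical daily contexts
ACTIONS = ["remind", "wait"]              # hypothetical interventions

def simulate_reward(state, action):
    # Invented simulated user: a reminder helps most at noon, while
    # unnecessary reminders carry a small annoyance cost.
    if action == "remind":
        return 1.0 if state == "noon" else -0.2
    return 0.0

def train(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        # Epsilon-greedy action selection.
        a = (rng.choice(ACTIONS) if rng.random() < epsilon
             else max(ACTIONS, key=lambda act: q[(s, act)]))
        r = simulate_reward(s, a)
        # One-step (bandit-style) update; a full RL agent would also
        # bootstrap from the next state's value.
        q[(s, a)] += alpha * (r - q[(s, a)])
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in STATES}
```

After training, the learned policy reminds at noon and stays quiet otherwise, mirroring how a simulation protocol lets one check a policy's behaviour cheaply before any human trial.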
Digital surveillance technologies using artificial intelligence (AI) tools such as computer vision and facial recognition are becoming cheaper and easier to integrate into governance practices worldwide. Morocco serves as an example of how such technologies are becoming key tools of governance in authoritarian contexts. Based on qualitative fieldwork including semi-structured interviews, observation, and extensive desk reviews, this chapter focusses on the role played by AI-enhanced technology in urban surveillance and the control of migration between the Moroccan–Spanish borders. Two cross-cutting issues emerge: first, while international donors provide funding for urban and border surveillance projects, their role in enforcing transparency mechanisms in their implementation remains limited; second, Morocco’s existing legal framework hinders any kind of public oversight. Video surveillance is treated as the sole prerogative of the security apparatus, and so far public actors have avoided engaging directly with the topic. The lack of institutional oversight and public debate on the matter raises serious concerns about the extent to which the deployment of such technologies affects citizens’ rights. AI-enhanced surveillance is thus an intrinsically transnational challenge in which private interests of economic gain and public interests of national security collide with citizens’ human rights across the Global North/Global South divide.
As artificial intelligence (AI) reshapes hiring, organizations increasingly rely on AI-enhanced selection methods such as chatbot-led interviews and algorithmic resume screening. While AI offers efficiency and scalability, concerns persist regarding fairness, transparency, and trust. This qualitative study applies the Artificially Intelligent Device Use Acceptance (AIDUA) model to examine how job applicants perceive and respond to AI-driven hiring. Drawing on semi-structured interviews with 15 professionals, the study explores how social influence, anthropomorphism, and performance expectancy shape applicant acceptance, while concerns about transparency and fairness emerge as key barriers. Participants expressed a strong preference for hybrid AI-human hiring models, emphasizing the importance of explainability and human oversight. The study refines the AIDUA model in the recruitment context and offers practical recommendations for organizations seeking to implement AI ethically and effectively in selection processes.
Algorithmic affordances—interactive mechanisms that allow users to exercise tangible control over algorithms—play a crucial role in recommender systems. They can facilitate users’ sense of autonomy, transparency, and ultimately ownership over a recommender’s results, all qualities that are central to responsible AI. Designers, among others, are tasked with creating these interactions, yet state that they lack resources to do so effectively. At the same time, academic research into these interactions rarely crosses the research-practice gap. As a solution, designers call for a structured library of algorithmic affordances containing well-tested, well-founded, and up-to-date examples sourced from both real-world and experimental interfaces. Such a library should function as a boundary object, bridging academia and professional design practice. Academics could use it as a supplementary platform to disseminate their findings, while both practitioners and educators could draw upon it for inspiration and as a foundation for innovation. However, developing a library that accommodates multiple stakeholders presents several challenges, including the need to establish a common language for categorizing algorithmic affordances and devising a categorization of algorithmic affordances that is meaningful to all target groups. This research attempts to bring the designer perspective into this categorization.
This research investigates the potential and challenges of using artificial intelligence, specifically the ChatGPT-4 model developed by OpenAI, in grading and providing feedback in an educational setting. By comparing the grading of a human lecturer and ChatGPT-4 in an experiment with 105 students, our study found a strong positive correlation between the scores given by both, despite some mismatches. In addition, we observed that ChatGPT-4's feedback was effectively personalized and understandable for students, contributing to their learning experience. While our findings suggest that AI technologies like ChatGPT-4 can significantly speed up the grading process and enhance feedback provision, the implementation of these systems should be thoughtfully considered. With further research and development, AI can potentially become a valuable tool to support teaching and learning in education. https://saiconference.com/FICC
With artificial intelligence (AI) systems entering our working and leisure environments with increasing adaptation and learning capabilities, new opportunities arise for developing hybrid (human-AI) intelligence (HI) systems, and with them new ways of collaboration. However, there is not yet a structured way of specifying design solutions for collaboration in HI systems, and there is a lack of best practices shared across application domains. We address this gap by investigating the generalization of specific design solutions into design patterns that can be shared and applied in different contexts. We present a human-centered, bottom-up approach for the specification of design solutions and their abstraction into team design patterns. We apply the proposed approach to four concrete HI use cases and show the successful extraction of team design patterns that are generalizable, providing reusable design components across various domains. This work advances previous research on team design patterns and on designing applications of HI systems.