Trust in AI is crucial for effective and responsible use in high-stakes sectors such as healthcare and finance. One of the most common techniques for mitigating mistrust in AI, and even increasing trust, is the use of Explainable AI models, which enable human understanding of decisions made by AI-based systems. Interaction design, the practice of designing interactive systems, plays an important role in promoting trust by improving explainability, interpretability, and transparency, ultimately enabling users to feel more in control of and confident in the system's decisions. Based on an empirical study with experts from various fields, this paper introduces the concept of Explanation Stream Patterns: interaction patterns that structure and organize the flow of explanations in decision support systems. Explanation Stream Patterns formalize explanation streams by incorporating procedures such as progressive disclosure of explanations, or more deliberate interaction with explanations through cognitive forcing functions. We argue that well-defined Explanation Stream Patterns provide practical tools for designing interactive systems that enhance human-AI decision-making.
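As an illustration of the kind of procedure such patterns formalize, the sketch below combines progressive disclosure with a cognitive forcing function: explanation layers are revealed one at a time, and only after the user has committed to their own judgment. The class, its method names, and the example layers are hypothetical, invented for illustration rather than taken from the paper.

```python
from dataclasses import dataclass


@dataclass
class ExplanationStream:
    """Hypothetical sketch of an explanation stream for a decision support system."""
    recommendation: str
    layers: list            # explanation layers, ordered from summary to detail
    _revealed: int = 0
    _user_committed: bool = False

    def commit_own_judgment(self, judgment: str) -> None:
        # Cognitive forcing function: the user must record their own
        # decision before any explanation (or the AI's advice) is shown.
        self._user_committed = True
        self._judgment = judgment

    def next_explanation(self):
        # Progressive disclosure: reveal exactly one layer per request.
        if not self._user_committed:
            raise RuntimeError("State your own judgment before seeing explanations.")
        if self._revealed >= len(self.layers):
            return None  # stream exhausted
        layer = self.layers[self._revealed]
        self._revealed += 1
        return layer


stream = ExplanationStream(
    recommendation="approve loan",
    layers=[
        "Top factor: stable income.",
        "Feature weights: income 0.4, debt -0.3.",
        "Counterfactual: rejected if debt ratio > 0.45.",
    ],
)
stream.commit_own_judgment("approve")
first = stream.next_explanation()
```

The forcing function here is deliberately blunt (an exception); in a real interface it would be a UI state, but the control flow is the same: commitment gates disclosure.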
Algorithmic affordances—interactive mechanisms that allow users to exercise tangible control over algorithms—play a crucial role in recommender systems. They can foster users' sense of autonomy, transparency, and ultimately ownership over a recommender's results, all qualities central to responsible AI. Designers, among others, are tasked with creating these interactions, yet they report lacking the resources to do so effectively. At the same time, academic research into these interactions rarely crosses the research-practice gap. As a solution, designers call for a structured library of algorithmic affordances containing well-tested, well-founded, and up-to-date examples sourced from both real-world and experimental interfaces. Such a library should function as a boundary object, bridging academia and professional design practice. Academics could use it as a supplementary platform to disseminate their findings, while practitioners and educators could draw upon it for inspiration and as a foundation for innovation. However, developing a library that accommodates multiple stakeholders presents several challenges, including the need to establish a common language for algorithmic affordances and to devise a categorization of them that is meaningful to all target groups. This research brings the designer perspective into that categorization.
In the consensus machine, AI acts as a moderating influence to encourage people to find common ground. Moving beyond individual human-AI interaction, we apply AI technology to create a tool that serves people as a group.
Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there has been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how. This systematic literature review addresses this gap by providing a detailed overview of the current state of affairs in human-centered XAI evaluation. We reviewed 73 papers across various domains where XAI was evaluated with users. These studies assessed what makes an explanation “good” from a user’s perspective, i.e., what makes an explanation meaningful to a user of an AI system. We identified 30 components of meaningful explanations that were evaluated in the reviewed papers and categorized them into a taxonomy of human-centered XAI evaluation, based on: (a) the contextualized quality of the explanation, (b) the contribution of the explanation to human-AI interaction, and (c) the contribution of the explanation to human-AI performance. Our analysis also revealed a lack of standardization in the methodologies applied in XAI user studies, with only 19 of the 73 papers applying an evaluation framework used by at least one other study in the sample. These inconsistencies hinder cross-study comparisons and broader insights. Our findings contribute to understanding what makes explanations meaningful to users and how to measure this, guiding the XAI community toward a more unified approach in human-centered explainability.
Concerns have been raised over the increased prominence of generative AI in art. Some fear that generative models could undermine the viability of human-made art, and they oppose developers training generative models on media without the artist's permission. Proponents of AI art point to the potential increase in accessibility. Is there an approach that addresses the concerns artists raise while still utilizing the potential these models bring? Current models often aim for autonomous music generation. This, however, makes the model a black box that users cannot interact with. By utilizing an AI pipeline combining symbolic music generation and a proposed sample creation system trained on Creative Commons data, a musical looping application has been created to provide non-expert music users with a way to start making their own music. The first results show that it assists users in creating musical loops and shows promise for future research into human-AI interaction in art.
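To illustrate the symbolic side of such a pipeline, the sketch below generates a short melodic phrase with a random walk over a pentatonic scale and repeats it into a loop. The scale choice, walk procedure, and function names are illustrative assumptions, not the application's actual generation method.

```python
import random

# MIDI note numbers for a C-major pentatonic scale: C D E G A.
PENTATONIC_C = [60, 62, 64, 67, 69]


def generate_phrase(length=8, seed=42):
    """Generate a short symbolic phrase as a list of MIDI note numbers."""
    random.seed(seed)  # fixed seed keeps the example reproducible
    idx = random.randrange(len(PENTATONIC_C))
    phrase = []
    for _ in range(length):
        # A random walk over scale degrees keeps successive notes close,
        # which tends to sound more coherent than uniform sampling.
        idx = max(0, min(len(PENTATONIC_C) - 1, idx + random.choice([-1, 0, 1])))
        phrase.append(PENTATONIC_C[idx])
    return phrase


def make_loop(phrase, repeats=4):
    """Turn a phrase into a loop by simple repetition."""
    return phrase * repeats


loop = make_loop(generate_phrase())
```

A real sample creation system would then render each note from trained samples; the point here is only that the symbolic layer stays inspectable and editable by the user, unlike a black-box end-to-end generator.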
Algorithmic affordances are defined as user interaction mechanisms that give users tangible control over AI algorithms, such as recommender systems. Designing such algorithmic affordances, including assessing their impact, is not straightforward, and practitioners state that they lack the resources to design interfaces of AI systems adequately. This could be amended by creating a comprehensive pattern library of algorithmic affordances. Such a library should provide easy access to patterns, supported by live examples and research on their experiential impact and limitations of use. The Algorithmic Affordances in Recommender Interfaces workshop aimed to address key challenges related to building such a pattern library, including features for pattern identification, a framework for systematic impact evaluation, and understanding the interaction between algorithmic affordances and their context of use, especially in education or with users with low algorithmic literacy. Preliminary solutions were proposed for these challenges.
Recommender systems are the driving force behind recommendations and feeds in web shops, streaming services, and social media, among others. These recommendations strongly influence which articles we read or buy, and which opinions we hear. However, these recommendations are often not objective, and can thereby amplify existing inequalities in society. In this article, we describe how transparent algorithms and explanations can counteract this, and we present interaction patterns that give users more control over their recommendations.
For people with early-stage dementia (PwD), it can be challenging to remember to eat and drink regularly and to maintain healthy, independent living. Existing intelligent home technologies primarily focus on activity recognition but lack adaptive support. This research addresses this gap by developing an AI system inspired by the Just-in-Time Adaptive Intervention (JITAI) concept. It adapts to individual behaviors and provides personalized interventions within the home environment, reminding and encouraging PwD to manage their eating and drinking routines. Considering the cognitive impairment of PwD, we design a human-centered AI system based on healthcare theories and caregivers’ insights. It employs reinforcement learning (RL) techniques to deliver personalized interventions. To avoid overwhelming interaction with PwD, we develop an RL-based simulation protocol. This allows us to evaluate different RL algorithms in various simulation scenarios, not only finding the most effective and efficient approach but also validating the robustness of our system before implementation in real-world human experiments. The simulation results demonstrate the promising potential of adaptive RL for building a human-centered AI system with perceived expressions of empathy to improve dementia care. To further evaluate the system, we plan to conduct real-world user studies.
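As a rough illustration of how an RL-based simulation protocol of this kind can work, the sketch below pairs a simulated resident with a tabular Q-learning agent that learns when a reminder is worthwhile. The states, actions, and reward model are invented for illustration and are not the study's actual protocol.

```python
import random

# Hypothetical intervention actions; a real system would have richer options.
ACTIONS = ["wait", "gentle_reminder", "strong_prompt"]


def simulated_response(hours_since_drink: int, action: str) -> float:
    """Toy resident model: prompts sometimes work, waiting grows risky."""
    if action == "wait":
        return -0.5 * hours_since_drink          # dehydration risk accumulates
    effectiveness = {"gentle_reminder": 0.6, "strong_prompt": 0.9}[action]
    if random.random() < effectiveness:
        return 2.0 - 0.3                          # resident drank; small intrusion cost
    return -0.3                                   # prompt ignored; intrusion cost only


def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning over states 0..5 = hours since last drink."""
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(6) for a in ACTIONS}
    for _ in range(episodes):
        state = random.randrange(6)
        for _ in range(10):
            # epsilon-greedy action selection
            if random.random() < eps:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            reward = simulated_response(state, action)
            # positive reward means the resident drank: clock resets
            next_state = 0 if reward > 0 else min(state + 1, 5)
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q


q = train()
```

Running many such simulated residents lets different RL algorithms and reward designs be compared and stress-tested before any interaction with real PwD, which is the core idea of the simulation protocol.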
A parallel session on machine learning projects carried out by Ambient Intelligence.
The skillsets of production workers, which are largely shaped by work design, are crucial for the effective adoption of smart technologies. However, the current literature lacks comprehensive insights into the skills and work designs of production workers, hindering the adoption of Industry 5.0. Grounded in the work design and skills literature, this study explores the skills employees require and the perceived work design characteristics for the adoption of AI, AR/VR, and robotics in Dutch manufacturing SMEs. This qualitative study involved semi-structured interviews with experts, managers, and production workers. The results reveal a need to reassess traditional job profiles, as two distinct production worker roles emerge from AI, AR/VR, and robotics adoption. Machine operators face potential deskilling through low feedback from the job, low task variety, and low job complexity. Foreman production workers require additional skills due to job enlargement and enrichment. However, they seem to be put in this role due to a lack of various professional and transversal skills needed to fully utilize smart technologies, and to accommodate a viable return on the technology investment. This highlights the importance of balancing job resources and requirements in work design, of training programs for I5.0 skill development, and of understanding the contextual design elements of manufacturing systems that contribute to viable I5.0 adoption in SMEs. Finally, sustainability, self-awareness, and self-reflection skills are not considered by professionals, revealing unawareness of their importance for I5.0 implementation practices.