As AI systems become increasingly prevalent in our daily lives and work, it is essential to contemplate their social role and how they interact with us. While functionality, and increasingly explainability and trustworthiness, are often the primary focus in designing AI systems, little consideration is given to their social role and its effects on human-AI interactions. In this paper, we advocate for paying attention to social roles in AI design. We focus on an AI healthcare application and present three possible social roles the AI system could take within it, exploring the relationship between the AI system and the user and its implications for designers and practitioners. Our findings emphasise the need to think beyond functionality and highlight the importance of considering the social role of AI systems in shaping meaningful human-AI interactions.
The increasing use of AI in industry and society not only expects but demands that we build human-centred competencies into our AI education programmes. The computing education community needs to adapt, and while the adoption of standalone ethics modules into AI programmes and the inclusion of ethical content in traditional applied AI modules are progressing, they are not enough. To foster student competencies to create AI innovations that respect and support the protection of individual rights and society, a novel ground-up approach is needed. This panel presents one such approach, the development of a Human-Centred AI Masters (HCAIM), as well as the insights and lessons learned from the process. In particular, we discuss the design decisions that have led to the multi-institutional master’s programme. Moreover, this panel allows for discussion on pedagogical and methodological approaches, content knowledge areas and the delivery of such a novel programme, along with the challenges faced, to inform and learn from other educators who are considering developing such programmes.
Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there’s been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how. This systematic literature review addresses this gap by providing a detailed overview of the current state of affairs in human-centered XAI evaluation. We reviewed 73 papers across various domains where XAI was evaluated with users. These studies assessed what makes an explanation “good” from a user’s perspective, i.e., what makes an explanation meaningful to a user of an AI system. We identified 30 components of meaningful explanations that were evaluated in the reviewed papers and categorized them into a taxonomy of human-centered XAI evaluation, based on: (a) the contextualized quality of the explanation, (b) the contribution of the explanation to human-AI interaction, and (c) the contribution of the explanation to human-AI performance. Our analysis also revealed a lack of standardization in the methodologies applied in XAI user studies, with only 19 of the 73 papers applying an evaluation framework used by at least one other study in the sample. These inconsistencies hinder cross-study comparisons and broader insights. Our findings contribute to understanding what makes explanations meaningful to users and how to measure this, guiding the XAI community toward a more unified approach in human-centered explainability.
Developing a framework that integrates Advanced Language Models into the qualitative research process.

Qualitative research, vital for understanding complex phenomena, is often limited by labour-intensive data collection, transcription, and analysis processes. This hinders scalability, accessibility, and efficiency in both academic and industry contexts. As a result, insights are often delayed or incomplete, impacting decision-making, policy development, and innovation. The lack of tools to enhance accuracy and reduce human error exacerbates these challenges, particularly for projects requiring large datasets or quick iterations. Addressing these inefficiencies through AI-driven solutions like AIDA can empower researchers, enhance outcomes, and make qualitative research more inclusive, impactful, and efficient.

The AIDA project enhances qualitative research by integrating AI technologies to streamline transcription, coding, and analysis processes. This innovation enables researchers to analyse larger datasets with greater efficiency and accuracy, providing faster and more comprehensive insights. By reducing manual effort and human error, AIDA empowers organisations to make informed decisions and implement evidence-based policies more effectively. Its scalability supports diverse societal and industry applications, from healthcare to market research, fostering innovation and addressing complex challenges. Ultimately, AIDA contributes to improving research quality, accessibility, and societal relevance, driving advancements across multiple sectors.
Entangled Machines is a project by Mariana Fernández Mora that interrogates the colonial and extractive legacies underpinning artificial intelligence (AI). By introducing slowness and digital kinship as critical frameworks, the project reconceptualises AI as embedded within intricate social and ecological networks, thereby contesting dominant narratives of efficiency and optimisation. Through participatory, practice-based methodologies such as the Material Playground, the project integrates feminist and non-Western epistemologies to articulate alternative models for ethical, sustainable, and equitable AI practices. Over a four-year period, Entangled Machines develops theory, engages diverse communities, and produces artistic outputs to reimagine human-AI interactions. In collaboration with partners including ARIAS Amsterdam, Archival Consciousness, and the Sandberg Institute, the research seeks to foster decolonial and interdisciplinary approaches to AI. Its culmination will be an “Anarchive” – a curated assemblage of artistic, theoretical, and archival outputs – that serves as a resource for rethinking AI’s socio-political and ecological impacts.