Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there has been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how. This systematic literature review addresses this gap by providing a detailed overview of the current state of affairs in human-centered XAI evaluation. We reviewed 73 papers across various domains where XAI was evaluated with users. These studies assessed what makes an explanation “good” from a user’s perspective, i.e., what makes an explanation meaningful to a user of an AI system. We identified 30 components of meaningful explanations that were evaluated in the reviewed papers and categorized them into a taxonomy of human-centered XAI evaluation, based on: (a) the contextualized quality of the explanation, (b) the contribution of the explanation to human-AI interaction, and (c) the contribution of the explanation to human-AI performance. Our analysis also revealed a lack of standardization in the methodologies applied in XAI user studies, with only 19 of the 73 papers applying an evaluation framework used by at least one other study in the sample. These inconsistencies hinder cross-study comparisons and broader insights. Our findings contribute to understanding what makes explanations meaningful to users and how to measure this, guiding the XAI community toward a more unified approach in human-centered explainability.
As artificial intelligence (AI) reshapes hiring, organizations increasingly rely on AI-enhanced selection methods such as chatbot-led interviews and algorithmic resume screening. While AI offers efficiency and scalability, concerns persist regarding fairness, transparency, and trust. This qualitative study applies the Artificially Intelligent Device Use Acceptance (AIDUA) model to examine how job applicants perceive and respond to AI-driven hiring. Drawing on semi-structured interviews with 15 professionals, the study explores how social influence, anthropomorphism, and performance expectancy shape applicant acceptance, while concerns about transparency and fairness emerge as key barriers. Participants expressed a strong preference for hybrid AI-human hiring models, emphasizing the importance of explainability and human oversight. The study refines the AIDUA model in the recruitment context and offers practical recommendations for organizations seeking to implement AI ethically and effectively in selection processes.
Digital surveillance technologies using artificial intelligence (AI) tools such as computer vision and facial recognition are becoming cheaper and easier to integrate into governance practices worldwide. Morocco serves as an example of how such technologies are becoming key tools of governance in authoritarian contexts. Based on qualitative fieldwork including semi-structured interviews, observation, and extensive desk reviews, this chapter focuses on the role played by AI-enhanced technology in urban surveillance and the control of migration across the Moroccan–Spanish border. Two cross-cutting issues emerge: first, while international donors provide funding for urban and border surveillance projects, their role in enforcing transparency mechanisms in their implementation remains limited; second, Morocco’s existing legal framework hinders any kind of public oversight. Video surveillance is treated as the sole prerogative of the security apparatus, and so far public actors have avoided engaging directly with the topic. The lack of institutional oversight and public debate on the matter raises serious concerns about the extent to which the deployment of such technologies affects citizens’ rights. AI-enhanced surveillance is thus an intrinsically transnational challenge in which private interests of economic gain and public interests of national security collide with citizens’ human rights across the Global North/Global South divide.
This research reviews the current literature on the impact of Artificial Intelligence (AI) in the operation of autonomous Unmanned Aerial Vehicles (UAVs). This paper examines three key aspects in developing the future of Unmanned Aircraft Systems (UAS) and UAV operations: (i) design, (ii) human factors, and (iii) operation process. The use of widely accepted frameworks such as the "Human Factors Analysis and Classification System (HFACS)" and "Observe–Orient–Decide–Act (OODA)" loops is discussed. The comprehensive review of this research found that as autonomy increases, operator cognitive workload decreases and situation awareness improves, but also found a corresponding decline in operator vigilance and an increase in trust in the AI system. These results provide valuable insights and opportunities for improving the safety and efficiency of autonomous UAVs in the future and suggest the need to include human factors in the development process.
This research investigates the potential and challenges of using artificial intelligence, specifically the ChatGPT-4 model developed by OpenAI, in grading and providing feedback in an educational setting. By comparing the grading of a human lecturer and ChatGPT-4 in an experiment with 105 students, our study found a strong positive correlation between the scores given by both, despite some mismatches. In addition, we observed that ChatGPT-4's feedback was effectively personalized and understandable for students, contributing to their learning experience. While our findings suggest that AI technologies like ChatGPT-4 can significantly speed up the grading process and enhance feedback provision, the implementation of these systems should be thoughtfully considered. With further research and development, AI can potentially become a valuable tool to support teaching and learning in education. https://saiconference.com/FICC
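The agreement between human and AI-assigned grades reported above can be quantified with a standard Pearson correlation. A minimal sketch, using small hypothetical score vectors rather than the study's actual data:

```python
import numpy as np

def grade_agreement(human, ai):
    """Pearson correlation between human and AI scores for the same submissions."""
    human = np.asarray(human, dtype=float)
    ai = np.asarray(ai, dtype=float)
    # np.corrcoef returns the 2x2 correlation matrix; take the off-diagonal entry
    return float(np.corrcoef(human, ai)[0, 1])

# Hypothetical grades for five submissions (illustrative only)
human_scores = [7.5, 8.0, 6.0, 9.0, 5.5]
ai_scores = [7.0, 8.5, 6.5, 9.0, 5.0]

r = grade_agreement(human_scores, ai_scores)  # close to +1 indicates strong agreement
```

A value of `r` near 1 corresponds to the "strong positive correlation" the study reports; correlation alone does not detect systematic over- or under-grading, which is why mismatch analysis remains useful.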
With artificial intelligence (AI) systems entering our working and leisure environments with increasing adaptation and learning capabilities, new opportunities arise for developing hybrid (human-AI) intelligence (HI) systems, opening up new ways of collaborating. However, there is not yet a structured way of specifying collaboration design solutions for HI systems, and there is a lack of best practices shared across application domains. We address this gap by investigating the generalization of specific design solutions into design patterns that can be shared and applied in different contexts. We present a human-centered, bottom-up approach for the specification of design solutions and their abstraction into team design patterns. We apply the proposed approach to four concrete HI use cases and show the successful extraction of team design patterns that are generalizable, providing re-usable design components across various domains. This work advances previous research on team design patterns and on designing applications of HI systems.
This article describes the relation between mental health and academic performance during the start of college and how AI-enhanced chatbot interventions could prevent both study problems and mental health problems.
Big data analytics received much attention in the last decade and is viewed as one of the next most important strategic resources for organizations. Yet, the role of employees' data literacy seems to be neglected in the current literature. The aim of this study is twofold: (1) it develops data literacy as an organizational competency by identifying its dimensions and measurement, and (2) it examines the relationship between data literacy and governmental performance (internal and external). Using data from a survey of 120 Dutch governmental agencies, the proposed model was tested using PLS-SEM. The results empirically support the suggested theoretical framework and corresponding measurement instrument. The results partially support the relationship between data literacy and performance: data literacy has a significant effect on internal performance. However, counter-intuitively, no such significant effect is found for external performance.
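Validating a multi-item measurement instrument like the one above typically involves an internal-consistency check such as Cronbach's alpha. A minimal sketch with simulated survey items (the data, item count, and threshold are illustrative assumptions, not the study's instrument):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulate 120 respondents answering 4 correlated items on one dimension
rng = np.random.default_rng(0)
latent = rng.normal(size=(120, 1))                  # shared underlying trait
items = latent + rng.normal(scale=0.5, size=(120, 4))  # items = trait + noise

alpha = cronbach_alpha(items)  # alpha above ~0.7 is commonly treated as acceptable
```

In a PLS-SEM workflow this kind of reliability check precedes estimating the structural paths between data literacy and the performance constructs.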
Although governments are investing heavily in big data analytics, reports show mixed results in terms of performance. Whilst big data analytics capability has provided a valuable lens in business and seems useful for the public sector, there is little knowledge of its relationship with governmental performance. This study aims to explain how big data analytics capability leads to governmental performance. Using a survey research methodology, an integrated conceptual model is proposed highlighting a comprehensive set of big data analytics resources influencing governmental performance. The conceptual model was developed based on prior literature. Using a PLS-SEM approach, the results strongly support the posited hypotheses. Big data analytics capability has a strong impact on governmental efficiency, effectiveness, and fairness. The findings of this paper confirm the critical role of big data analytics capability in governmental performance in the public sector, which earlier studies found in the private sector. This study also validated measures of governmental performance.
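Hypothesis testing in PLS-SEM is usually done by bootstrapping the structural path coefficients and checking whether the confidence interval excludes zero. A simplified single-path sketch with simulated composites (the variable names, effect size, and sample size are illustrative assumptions, not the study's results):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate standardized composite scores: capability -> efficiency
n = 150
capability = rng.normal(size=n)
efficiency = 0.6 * capability + rng.normal(scale=0.8, size=n)

def path_coef(x, y):
    """Standardized coefficient (simple-regression analogue of a single PLS path)."""
    x = (x - x.mean()) / x.std(ddof=1)
    y = (y - y.mean()) / y.std(ddof=1)
    return float(np.dot(x, y) / (len(x) - 1))

beta = path_coef(capability, efficiency)

# Bootstrap the coefficient to judge significance, as PLS-SEM software does
boots = []
for _ in range(1000):
    idx = rng.integers(0, n, n)  # resample respondents with replacement
    boots.append(path_coef(capability[idx], efficiency[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])  # 95% CI; excludes zero -> supported path
```

A full PLS-SEM model would estimate several such paths simultaneously (to efficiency, effectiveness, and fairness) with latent composites built from multiple indicators; this sketch only shows the bootstrap logic for one path.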