© 2025 SURF
Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there’s been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how. This systematic literature review addresses this gap by providing a detailed overview of the current state of affairs in human-centered XAI evaluation. We reviewed 73 papers across various domains where XAI was evaluated with users. These studies assessed what makes an explanation “good” from a user’s perspective, i.e., what makes an explanation meaningful to a user of an AI system. We identified 30 components of meaningful explanations that were evaluated in the reviewed papers and categorized them into a taxonomy of human-centered XAI evaluation, based on: (a) the contextualized quality of the explanation, (b) the contribution of the explanation to human-AI interaction, and (c) the contribution of the explanation to human-AI performance. Our analysis also revealed a lack of standardization in the methodologies applied in XAI user studies, with only 19 of the 73 papers applying an evaluation framework used by at least one other study in the sample. These inconsistencies hinder cross-study comparisons and broader insights. Our findings contribute to understanding what makes explanations meaningful to users and how to measure this, guiding the XAI community toward a more unified approach in human-centered explainability.
In recent years, a step change has been seen in the rate of adoption of Industry 4.0 technologies by manufacturers and industrial organizations alike. This article discusses the current state of the art in the adoption of Industry 4.0 technologies within the construction industry. Increasing complexity in onsite construction projects, coupled with the need for higher productivity, is leading to increased interest in the potential use of Industry 4.0 technologies. This article discusses the relevance of the following key Industry 4.0 technologies to construction: data analytics and artificial intelligence, robotics and automation, building information management, sensors and wearables, digital twin, and industrial connectivity. Industrial connectivity is a key aspect, as it ensures that all Industry 4.0 technologies are interconnected, allowing their full benefits to be realized. This article also presents a research agenda for the adoption of Industry 4.0 technologies within the construction sector, a three-phase use of intelligent assets from the point of manufacture up to after build, and a four-staged R&D process for the implementation of smart wearables in a digitally enhanced construction site.
Background: The immunization uptake rates in Pakistan are much lower than desired. Major reasons include lack of awareness, parental forgetfulness regarding schedules, and misinformation regarding vaccines. In light of the COVID-19 pandemic and distancing measures, routine childhood immunization (RCI) coverage has been adversely affected, as caregivers avoid tertiary care hospitals or primary health centers. Innovative and cost-effective measures must be taken to understand and deal with the issue of low immunization rates. However, only a few smartphone-based interventions have been carried out in low- and middle-income countries (LMICs) to improve RCI. Objective: The primary objectives of this study are to evaluate whether a personalized mobile app can improve children’s on-time visits for RCI at 10 and 14 weeks of age as compared with standard care, and to determine whether an artificial intelligence model can be incorporated into the app. Secondary objectives are to determine the perceptions and attitudes of caregivers regarding childhood vaccinations and to understand the factors that might influence the effect of a mobile phone–based app on vaccination improvement. Methods: A mixed methods randomized controlled trial was designed with intervention and control arms. The study will be conducted at the Aga Khan University Hospital vaccination center. Caregivers of newborns or infants visiting the center for their children’s 6-week vaccination will be recruited. The intervention arm will have access to a smartphone app with text, voice, video, and pictorial messages regarding RCI. This app will be developed based on the findings of the pretrial qualitative component of the study, which will explore caregivers’ perceptions about RCI and about the potential of a mobile phone–based app to improve RCI coverage, in addition to findings from a no-show study. Results: Pretrial qualitative in-depth interviews were conducted in February 2020.
Enrollment of study participants for the randomized controlled trial is in process. Study exit interviews will be conducted at the 14-week immunization visits, provided the caregivers visit the immunization facility at that time, or over the phone when the children are 18 weeks of age. Conclusions: This study will generate useful insights into the feasibility, acceptability, and usability of an Android-based smartphone app for improving RCI in Pakistan and in LMICs.
The potential of Artificial Intelligence is widely proclaimed. Yet, in everyday educational settings the use of this technology is limited, particularly when we consider smart systems that actually interact with learners in a knowledgeable way and thereby support the learning process. This illustrates that professional teaching is a complex challenge beyond the capabilities of current autonomous robots. On the other hand, dedicated forms of Artificial Intelligence can be very good at certain things. For example, computers are excellent chess players, and automated route planners easily outperform humans. To deploy this potential, experts argue for a hybrid approach in which humans and smart systems accomplish goals collaboratively. How can this be realized for education? What does it entail in practice? In this contribution, we investigate the idea of a hybrid approach in secondary education. As a case study, we focus on learners acquiring systems thinking skills and on the pedagogical approach we recently developed for this purpose. In particular, we discuss the kind of Artificial Intelligence that is needed in this situation, as well as which tasks the software can perform well and which tasks are better, or necessarily, left to the teacher.
The healthcare sector has been confronted with rapidly rising healthcare costs and a shortage of medical staff. At the same time, the field of Artificial Intelligence (AI) has emerged as a promising area of research, offering potential benefits for healthcare. Despite this potential, the widespread implementation of AI in healthcare remains limited. One possible contributing factor is the lack of trust in AI algorithms among healthcare professionals. Previous studies have indicated that explainability plays a crucial role in establishing trust in AI systems. This study aims to explore trust in AI and its connection to explainability in a medical setting. A rapid review was conducted to provide an overview of the existing knowledge and research on trust and explainability. Building upon these insights, a dashboard interface was developed to present the output of an AI-based decision-support tool along with explanatory information, with the aim of enhancing the explainability of the AI for healthcare professionals. To investigate the impact of the dashboard and its explanations on healthcare professionals, an exploratory case study was conducted. The study encompassed an assessment of participants’ trust in the AI system, their perception of its explainability, and their evaluations of perceived ease of use and perceived usefulness. The initial findings from the case study indicate a positive correlation between perceived explainability and trust in the AI system. Our preliminary findings suggest that enhancing the explainability of AI systems could increase trust among healthcare professionals, which may in turn contribute to increased acceptance and adoption of AI in healthcare. However, a more elaborate experiment with the dashboard is essential.