From diagnosis to patient scheduling, AI is increasingly being considered across different clinical applications. Yet despite increasingly powerful clinical AI, uptake into actual clinical workflows remains limited. One of the major challenges is developing appropriate trust with clinicians. In this paper, we investigate trust in clinical AI from a wider perspective, beyond user interactions with the AI. We identify several points in the clinical AI development, usage, and monitoring process that can significantly affect trust. We argue that the calibration of trust in AI should go beyond explainable AI and address the entire process of clinical AI deployment. We illustrate our argument with case studies from practitioners implementing clinical AI in practice, showing how trust can be affected at different stages of the deployment cycle.
DOCUMENT
The healthcare sector increasingly faces challenges resulting from growing demand (due, among other things, to an ageing population and the rising complexity of care) and a shrinking supply of care providers (due, among other things, to staff shortages). Artificial Intelligence (AI) is seen as a possible solution, but is often approached from a technological perspective. This article takes a human-centred approach and studies how healthcare workers experience working with AI. This matters because they are the ones who ultimately have to work with these applications to meet the challenges facing healthcare. Based on 21 semi-structured interviews with healthcare workers who have used AI, we describe their experiences of working with AI. Using the AMO framework (which stands for abilities, motivation and opportunities), we show that AI has an impact on the work of healthcare workers. The use of AI requires new competencies and the conviction that AI can improve care. In addition, sufficient availability of training and support is needed. Finally, we discuss the implications for theory and offer recommendations for HR professionals.
MULTIFILE
The rise of ChatGPT shows how AI intervenes in our daily lives and in education. But AI is more than ChatGPT: from search engines to the facial recognition in your phone, data and algorithms are changing the lives of our students and their future professional fields. What does this mean for the degree programmes at the universities of applied sciences (HBO) where we work? For the inspiration session "De maatschappelijke impact van AI" (The societal impact of AI) at the HU Onderwijsfestival 2023, we invited our colleagues to reflect with us on recent AI developments. We looked not only at the technology, but also at its societal impact and at the opportunities and threats AI poses for an open, just and sustainable society. We held this conversation with our colleagues (both lecturers and support staff) on the basis of three case studies. The results and insights gathered from these conversations were brought together on a poster specially developed for the workshop (see figure 1). We have bundled these insights, and they can be read below.
DOCUMENT
This study addresses the burgeoning global shortage of healthcare workers and the consequential overburdening of medical professionals, a challenge that is anticipated to intensify by 2030 [1]. It explores the adoption and perceptions of AI-powered mobile medical applications (MMAs) by physicians in the Netherlands, investigating whether doctors discuss or recommend these applications to patients and how frequently they use them in clinical practice. The research reveals a cautious but growing acceptance of MMAs among healthcare providers. Medical mobile applications, a substantial share of which are AI-driven, are being recognized for their potential to alleviate workload. The findings suggest an emergent trust in AI-driven health technologies, underscored by recommendations from peers, yet tempered by concerns over data security and patient mental health, indicating a need for ongoing assessment and validation of these applications.
DOCUMENT
Artificial intelligence (AI) is a technology which is increasingly being utilised in society and the economy worldwide, but there is much disquiet over problematic and dangerous implementations of AI, or indeed even AI itself deciding to take dangerous and problematic actions. These developments have led to concerns about whether and how AI systems currently adhere to and will adhere to ethical standards, stimulating a global and multistakeholder conversation on AI ethics and the production of AI governance initiatives. Such developments form the basis for this chapter, where we give an insight into what is happening in Australia, China, the European Union, India and the United States. We commence with some background to the AI ethics and regulation debates, before proceeding to an overview of what is happening in these different countries and regions, including national-level activities in Germany. We provide an analysis of these country profiles, with particular emphasis on the relationship between ethics and law in each location. Overall, we find that AI governance and ethics initiatives are most developed in China and the European Union, but the United States has been catching up in the last eighteen months.
DOCUMENT
As artificial intelligence (AI) reshapes hiring, organizations increasingly rely on AI-enhanced selection methods such as chatbot-led interviews and algorithmic resume screening. While AI offers efficiency and scalability, concerns persist regarding fairness, transparency, and trust. This qualitative study applies the Artificially Intelligent Device Use Acceptance (AIDUA) model to examine how job applicants perceive and respond to AI-driven hiring. Drawing on semi-structured interviews with 15 professionals, the study explores how social influence, anthropomorphism, and performance expectancy shape applicant acceptance, while concerns about transparency and fairness emerge as key barriers. Participants expressed a strong preference for hybrid AI-human hiring models, emphasizing the importance of explainability and human oversight. The study refines the AIDUA model in the recruitment context and offers practical recommendations for organizations seeking to implement AI ethically and effectively in selection processes.
MULTIFILE
Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there has been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how. This systematic literature review addresses this gap by providing a detailed overview of the current state of affairs in human-centered XAI evaluation. We reviewed 73 papers across various domains where XAI was evaluated with users. These studies assessed what makes an explanation "good" from a user's perspective, i.e., what makes an explanation meaningful to a user of an AI system. We identified 30 components of meaningful explanations that were evaluated in the reviewed papers and categorized them into a taxonomy of human-centered XAI evaluation, based on: (a) the contextualized quality of the explanation, (b) the contribution of the explanation to human-AI interaction, and (c) the contribution of the explanation to human-AI performance. Our analysis also revealed a lack of standardization in the methodologies applied in XAI user studies, with only 19 of the 73 papers applying an evaluation framework used by at least one other study in the sample. These inconsistencies hinder cross-study comparisons and broader insights. Our findings contribute to understanding what makes explanations meaningful to users and how to measure this, guiding the XAI community toward a more unified approach in human-centered explainability.
MULTIFILE
2025 ILC Annual International Conference, 16th & 17th June 2025, Genoa, Italy: Global Collaboration, Local Action for Fundamentals of Care Innovation. See page 81. An international group of experts has joined forces for the further development of Artificial Intelligence (AI) in relation to the Fundamentals of Care (FoC) framework. AI, and its subfields such as machine learning and deep learning, offers the potential to identify patterns in healthcare data, develop clinical prediction models, and derive insights from large datasets. For example, algorithms can be created to detect the start of the palliative phase based on electronic health records, or to inform nursing decisions based on lifestyle-monitoring data for older adults. These AI applications significantly influence nurses' roles, the nurse-client relationship and nurses' professional identity. Consequently, nurses must take responsibility for ensuring that AI applications align with person-centered fundamental care, professional ethics, equity, and social justice. Nursing leadership is therefore essential to guide the development and use of AI applications that support nursing care according to the FoC framework and enhance patient outcomes. The aim of the current project is to explore nurses' responsibility for how AI adds value to the FoC framework. Firstly, nurse leaders play a vital role in overseeing the quality and relevance of data collected in daily practice, as these data are foundational for AI algorithms. The elements articulated in the FoC framework should be the building blocks for any algorithm. These building blocks can be linked to clinical and social conditions, and life stages, building from the basis of the individual's human needs. Secondly, it is crucial for nurses to participate in the interdisciplinary teams that develop AI algorithms.
Their participation and expertise ensure that algorithms are co-created with an understanding of the needs of their clients, maximizing the potential for positive outcomes. In addition to education, policy, and regulation, a nurse-led, interdisciplinary research program is needed to investigate the relationship between AI applications and the FoC framework, and their impact on nurse-client relationships, nurses' professional identity, and patient outcomes.
DOCUMENT
As more and more older adults prefer to stay in their homes as they age, there is a need for technology to support this. A relevant technology is Artificial Intelligence (AI)-driven lifestyle monitoring, which utilizes data from sensors placed in the home. This technology is not intended to replace nurses but to serve as a support tool. Understanding the specific competencies that nurses require to use it effectively is therefore crucial. The aim of this study is to identify the essential competencies nurses require to work with AI-driven lifestyle monitoring in long-term care. Methods: A three-round modified Delphi study was conducted, consisting of two online questionnaires and one focus group. A group of 48 experts participated in the study: nurses, innovators, developers, researchers, managers and educators. In the first two rounds, experts assessed the clarity and relevance of a proposed list of competencies, with the opportunity to suggest adjustments or the inclusion of new competencies. In the third round, the items without consensus were discussed in a focus group. Findings: After the first round, consensus on relevance and clarity was reached for n = 46 (72%) of the competencies; after the second round, for n = 54 (83%). After the third round, a final list of 10 competency domains and 61 sub-competencies was finalized. The 10 competency domains are: Fundamentals of AI, Participation in AI design, Patient-centered needs assessment, Personalisation of AI to patients' situation, Data reporting, Interpretation of AI output, Integration of AI output into clinical practice, Communication about AI use, Implementation of AI, and Evaluation of AI use. These competencies span from a basic understanding of AI-driven lifestyle monitoring to being able to integrate it into daily work, evaluate it, and communicate its use to other stakeholders, including patients and informal caregivers.
Conclusion: Our study introduces a novel framework highlighting the (sub)competencies required for nurses to work with AI-driven lifestyle monitoring in long-term care. These findings provide a foundation for developing initial educational programs and lifelong learning activities for nurses in this evolving field. Moreover, the importance that experts attach to AI competencies calls for a broader discussion about a potential shift in nursing responsibilities and tasks as healthcare becomes increasingly technologically advanced and data-driven, possibly leading to new roles within nursing.
DOCUMENT