Grounded in the Stereotype Content Model, Risk Perception Theory, the Technology Acceptance Model, and Relational Embeddedness Theory, this research examines the relationship between chatbot conversation styles and customer-perceived risk, and the mediating roles of chatbot acceptance and tie strength in online shopping. A 2 (warm vs. cold) × 2 (competent vs. incompetent) between-subjects experiment was conducted with 320 participants, and the results from two-way ANOVA and the PROCESS macro revealed that: (a) customer-perceived risk decreases with conversation warmth rather than conversation competence; (b) customer acceptance of chatbots improves with conversation competence rather than conversation warmth, but does not mediate the relationship between conversation style and customer-perceived risk; (c) customer-perceived tie strength increases with both conversation warmth and conversation competence. The findings contribute to the literature on the impact of chatbot anthropomorphism on customer cognitive processes and offer executives insights into the design of customer-friendly chatbots.
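The 2 × 2 between-subjects design described above is typically analysed with a two-way ANOVA. The following is a minimal sketch of that analysis for a balanced 2 × 2 design in plain Python; the factor labels, cell data, and effect sizes are illustrative only, not the study's data.

```python
# Two-way ANOVA for a balanced 2x2 between-subjects design.
# `data[a][b]` holds the scores of the cell at level a of factor A
# (e.g. warmth) and level b of factor B (e.g. competence).
# All values below are made up for illustration.

def two_way_anova(data):
    a_levels, b_levels = len(data), len(data[0])
    n = len(data[0][0])                      # per-cell sample size (balanced)
    scores = [x for row in data for cell in row for x in cell]
    grand = sum(scores) / len(scores)

    cell_means = [[sum(c) / n for c in row] for row in data]
    a_means = [sum(row) / b_levels for row in cell_means]
    b_means = [sum(cell_means[i][j] for i in range(a_levels)) / a_levels
               for j in range(b_levels)]

    ss_a = n * b_levels * sum((m - grand) ** 2 for m in a_means)
    ss_b = n * a_levels * sum((m - grand) ** 2 for m in b_means)
    ss_cells = n * sum((m - grand) ** 2 for row in cell_means for m in row)
    ss_ab = ss_cells - ss_a - ss_b           # interaction sum of squares
    ss_within = sum((x - cell_means[i][j]) ** 2
                    for i, row in enumerate(data)
                    for j, cell in enumerate(row)
                    for x in cell)

    df_within = a_levels * b_levels * (n - 1)
    ms_within = ss_within / df_within
    # In a 2x2 design each main effect and the interaction has df = 1,
    # so MS equals SS and F = SS / MS_within.
    return {"F_A": ss_a / ms_within,
            "F_B": ss_b / ms_within,
            "F_AB": ss_ab / ms_within}

# Toy data in which factor A (warm vs. cold) has a larger effect than
# factor B (competent vs. incompetent), and there is no interaction.
data = [[[1, 3], [3, 5]],    # warm:  competent / incompetent
        [[5, 7], [7, 9]]]    # cold:  competent / incompetent
print(two_way_anova(data))
```

In practice, a statistics package would also report p-values from the F distribution; the sketch stops at the F ratios to keep the factorial decomposition (main effects, interaction, within-cell error) visible.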
LINK
As artificial intelligence (AI) reshapes hiring, organizations increasingly rely on AI-enhanced selection methods such as chatbot-led interviews and algorithmic resume screening. While AI offers efficiency and scalability, concerns persist regarding fairness, transparency, and trust. This qualitative study applies the Artificially Intelligent Device Use Acceptance (AIDUA) model to examine how job applicants perceive and respond to AI-driven hiring. Drawing on semi-structured interviews with 15 professionals, the study explores how social influence, anthropomorphism, and performance expectancy shape applicant acceptance, while concerns about transparency and fairness emerge as key barriers. Participants expressed a strong preference for hybrid AI-human hiring models, emphasizing the importance of explainability and human oversight. The study refines the AIDUA model in the recruitment context and offers practical recommendations for organizations seeking to implement AI ethically and effectively in selection processes.
MULTIFILE
This article describes the relation between mental health and academic performance during the start of college, and how AI-enhanced chatbot interventions could help prevent both study problems and mental health problems.
DOCUMENT
In this paper, we explore the design of web-based advice robots to enhance users' confidence in acting upon the provided advice. Drawing from research on algorithm acceptance and explainable AI, we hypothesise four design principles that may encourage interactivity and exploration, thus fostering users' confidence to act. Through a value-oriented prototype experiment and value-oriented semi-structured interviews, we tested these principles, confirming three of them and identifying an additional principle. The four resulting principles: (1) put context questions and resulting advice on one page and allow live, iterative exploration, (2) use action or change oriented questions to adjust the input parameters, (3) actively offer alternative scenarios based on counterfactuals, and (4) show all options instead of only the recommended one(s), appear to contribute to the values of agency and trust. Our study integrates the Design Science Research approach with a Value Sensitive Design approach.
MULTIFILE
This abstract describes the development of an online educational module on eHealth for lifestyle improvement.
MULTIFILE
Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there’s been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how. This systematic literature review addresses this gap by providing a detailed overview of the current state of affairs in human-centered XAI evaluation. We reviewed 73 papers across various domains where XAI was evaluated with users. These studies assessed what makes an explanation “good” from a user’s perspective, i.e., what makes an explanation meaningful to a user of an AI system. We identified 30 components of meaningful explanations that were evaluated in the reviewed papers and categorized them into a taxonomy of human-centered XAI evaluation, based on: (a) the contextualized quality of the explanation, (b) the contribution of the explanation to human-AI interaction, and (c) the contribution of the explanation to human-AI performance. Our analysis also revealed a lack of standardization in the methodologies applied in XAI user studies, with only 19 of the 73 papers applying an evaluation framework used by at least one other study in the sample. These inconsistencies hinder cross-study comparisons and broader insights. Our findings contribute to understanding what makes explanations meaningful to users and how to measure this, guiding the XAI community toward a more unified approach in human-centered explainability.
MULTIFILE
Moral food lab: Transforming the food system with crowd-sourced ethics
LINK
Abstract Aims: Medical case vignettes play a crucial role in medical education, yet they often fail to authentically represent diverse patients. Moreover, these vignettes tend to oversimplify the complex relationship between patient characteristics and medical conditions, leading to biased and potentially harmful perspectives among students. Displaying aspects of patient diversity, such as ethnicity, in written cases proves challenging. Additionally, creating these cases places a significant burden on teachers in terms of labour and time. Our objective is to explore the potential of artificial intelligence (AI)-assisted computer-generated clinical cases to expedite case creation and enhance diversity, along with AI-generated patient photographs for more lifelike portrayal. Methods: In this study, we employed ChatGPT (OpenAI, GPT-3.5) to develop diverse and inclusive medical case vignettes. We evaluated various approaches and identified a set of eight consecutive prompts that can be readily customized to accommodate local contexts and specific assignments. To enhance visual representation, we utilized Adobe Firefly beta for image generation. Results: Using the described prompts, we consistently generated cases for various assignments, producing sets of 30 cases at a time. We ensured the inclusion of mandatory checks and formatting, completing the process within approximately 60 minutes per set. Conclusions: Our approach significantly accelerated case creation and improved diversity, although prioritizing maximum diversity compromised representativeness to some extent. While the optimized prompts are easily reusable, the process itself demands computer skills not all educators possess. To address this, we aim to share all created patients as open educational resources, empowering educators to create cases independently.
DOCUMENT
In this book, 40 experts explain in clear language what AI is and what questions, challenges, and opportunities the technology brings.
DOCUMENT