Grounded in the Stereotype Content Model, Risk Perception Theory, the Technology Acceptance Model, and Relational Embeddedness Theory, this research examines the relationship between chatbot conversation styles and customer-perceived risk, and the mediating roles of chatbot acceptance and tie strength, in online shopping. A 2 (warm vs. cold) × 2 (competent vs. incompetent) between-subjects experiment was conducted with 320 participants. Results from two-way ANOVA and the PROCESS macro revealed that: (a) customer-perceived risk decreases with conversational warmth but not with conversational competence; (b) customer acceptance of chatbots improves with conversational competence but not with conversational warmth, and does not mediate the relationship between conversation style and customer-perceived risk; (c) customer-perceived tie strength increases with both conversational warmth and conversational competence. The findings contribute to the literature on the impact of chatbot anthropomorphism on customer cognitive processes and offer executives insights into the design of customer-friendly chatbots.
As artificial intelligence (AI) reshapes hiring, organizations increasingly rely on AI-enhanced selection methods such as chatbot-led interviews and algorithmic resume screening. While AI offers efficiency and scalability, concerns persist regarding fairness, transparency, and trust. This qualitative study applies the Artificially Intelligent Device Use Acceptance (AIDUA) model to examine how job applicants perceive and respond to AI-driven hiring. Drawing on semi-structured interviews with 15 professionals, the study explores how social influence, anthropomorphism, and performance expectancy shape applicant acceptance, while concerns about transparency and fairness emerge as key barriers. Participants expressed a strong preference for hybrid AI-human hiring models, emphasizing the importance of explainability and human oversight. The study refines the AIDUA model in the recruitment context and offers practical recommendations for organizations seeking to implement AI ethically and effectively in selection processes.
This article describes the relationship between mental health and academic performance at the start of college, and how AI-enhanced chatbot interventions could help prevent both study problems and mental health problems.
Youth care is under increasing pressure, with rising demand, longer waiting lists, and growing staff shortages. In the Netherlands, one in seven children and adolescents currently receives youth care. At the same time, professionals face high workloads, burnout risks, and significant administrative burdens. This combination threatens both the accessibility and the quality of care, leading to escalating problems for young people and families. Artificial intelligence (AI) offers promising opportunities to relieve these pressures by supporting professionals in their daily work. However, many AI initiatives in youth care fail to move beyond the pilot stage, owing to barriers such as lack of user acceptance, ethical concerns, limited professional ownership, and insufficient integration into daily practice. Empirical research on how AI can be responsibly and sustainably embedded in youth care remains scarce. This PD project aims to develop practice-based insights and strategies that strengthen the acceptance and long-term adoption of AI in youth care, in ways that support professional practice and contribute to appropriate care. The focus lies not on the technology itself, but on how professionals can work with AI within complex, high-pressure contexts. The research follows a cyclical, participatory approach, combining three complementary implementation frameworks: the Implementation Guide (Kaptein), the CFIR model (Damschroder), and the NASSS-CAT framework (Greenhalgh). Three case studies serve as core learning environments: (1) a speech-to-text AI tool to support clinical documentation, (2) Microsoft Copilot 365 for organization-wide adoption in support teams, and (3) an AI chatbot for parents in high-conflict divorces. Throughout the project, professionals, clients, ethical experts, and organizational stakeholders collaborate to explore the practical, ethical, and organizational conditions under which AI can responsibly strengthen youth care services.