As artificial intelligence (AI) reshapes hiring, organizations increasingly rely on AI-enhanced selection methods such as chatbot-led interviews and algorithmic resume screening. While AI offers efficiency and scalability, concerns persist regarding fairness, transparency, and trust. This qualitative study applies the Artificially Intelligent Device Use Acceptance (AIDUA) model to examine how job applicants perceive and respond to AI-driven hiring. Drawing on semi-structured interviews with 15 professionals, the study explores how social influence, anthropomorphism, and performance expectancy shape applicant acceptance, while concerns about transparency and fairness emerge as key barriers. Participants expressed a strong preference for hybrid AI-human hiring models, emphasizing the importance of explainability and human oversight. The study refines the AIDUA model in the recruitment context and offers practical recommendations for organizations seeking to implement AI ethically and effectively in selection processes.
This study investigates the degree to which children anthropomorphize a robot tutor and whether this anthropomorphism relates to their vocabulary learning in a second-language (L2) tutoring intervention. To this end, an anthropomorphism questionnaire was administered to 5-year-old children (N = 104) twice: prior to and following a seven-session L2 vocabulary training with a humanoid robot. On average, children anthropomorphized the robot to a similar degree before and after the lessons, but many children changed which anthropomorphic features they attributed to it. Boys anthropomorphized the robot less after the lessons than girls did. Moreover, there was a weak but significant positive correlation between anthropomorphism measured before the lessons and scores on a word-knowledge post-test administered the day after the last lesson. There was also a weak but significant positive correlation between the change in anthropomorphism over time and scores on a word-knowledge post-test administered approximately two weeks after the last lesson. Our results underscore the need to manage children's expectations in robot-assisted education. Future research could also explore adapting to individual children's expectations in child-robot interactions.
In today's world, understanding different viewpoints is key to societal cohesion and progress. Robots have the potential to aid discussions of difficult topics such as ethnicity and gender. However, just as with humans, a robot's appearance can trigger inherent prejudices. This study examines the interplay between robot appearance and decision-making in ethical dilemmas. Using a Furhat robot that can change its face instantly, we investigated how robot appearance affects decision-making and the perception of the robot itself. Pairs of participants were invited to discuss a dilemma presented by the robot, covering the sensitive topics of ethnicity or gender. The robot adopted either a first-person or third-person perspective and altered its appearance accordingly. Following the presentation, participants were encouraged to discuss their choice of action in the dilemma situation. We did not find significant effects of robot appearance or dilemma topic on the perceived anthropomorphism, animacy, likeability, or intelligence of the robot, partly in line with previous research. However, several participants who heard the dilemma from a first-person perspective changed their opinion because of the robot's appearance. Future work could include additional measures, such as engagement, to shed further light on the intricate dynamics of human-robot interaction, underscoring the need for thoughtful consideration in designing robot appearances to promote unbiased engagement in discussions of societal significance.