As artificial intelligence (AI) reshapes hiring, organizations increasingly rely on AI-enhanced selection methods such as chatbot-led interviews and algorithmic resume screening. While AI offers efficiency and scalability, concerns persist regarding fairness, transparency, and trust. This qualitative study applies the Artificially Intelligent Device Use Acceptance (AIDUA) model to examine how job applicants perceive and respond to AI-driven hiring. Drawing on semi-structured interviews with 15 professionals, the study explores how social influence, anthropomorphism, and performance expectancy shape applicant acceptance, while concerns about transparency and fairness emerge as key barriers. Participants expressed a strong preference for hybrid AI-human hiring models, emphasizing the importance of explainability and human oversight. The study refines the AIDUA model in the recruitment context and offers practical recommendations for organizations seeking to implement AI ethically and effectively in selection processes.
Older technologies, such as violins and computers, differ from newer ones (Internet search engines, robots, chatbots, etc.). Older technologies are primarily tools for achieving our own stated goals, whereas newer technologies are often used by third parties, and we ourselves sometimes serve as "instruments", often without being aware of it. While we search for something on the Internet, our search behaviour is analysed and we are assigned to an increasingly refined profile. In other cases, our movement patterns in public space are registered (type of shopper, walker, etc.). Unsolicited, we acquire a digital twin that can easily be followed by third parties, whether to warn us in time or, in the wrong hands, to manipulate or even disable us. Anthony Kenny states that "technological anthropomorphism" (approaching an instrument or machine as a person with human qualities) must be avoided. The paper argues why this is so, and why a robot is not a colleague.
This study investigates the degree to which children anthropomorphize a robot tutor and whether this anthropomorphism relates to their vocabulary learning in a second-language (L2) tutoring intervention. To this end, an anthropomorphism questionnaire was administered to 5-year-old children (N = 104) twice: prior to and following a seven-session L2 vocabulary training with a humanoid robot. On average, children anthropomorphized the robot to a similar degree before and after the lessons, but many children changed which anthropomorphic features they attributed to it. Boys anthropomorphized the robot less after the lessons than girls. Moreover, there was a weak but significant positive correlation between anthropomorphism as measured before the lessons and scores on a word-knowledge post-test administered the day after the last lesson. There was also a weak but significant positive correlation between the change in anthropomorphism over time and scores on a word-knowledge post-test administered approximately 2 weeks after the last lesson. Our results underscore the need to manage children's expectations in robot-assisted education. Future research could also explore adapting child-robot interactions to individual children's expectations.