As artificial intelligence (AI) reshapes hiring, organizations increasingly rely on AI-enhanced selection methods such as chatbot-led interviews and algorithmic resume screening. While AI offers efficiency and scalability, concerns persist regarding fairness, transparency, and trust. This qualitative study applies the Artificially Intelligent Device Use Acceptance (AIDUA) model to examine how job applicants perceive and respond to AI-driven hiring. Drawing on semi-structured interviews with 15 professionals, the study explores how social influence, anthropomorphism, and performance expectancy shape applicant acceptance, while concerns about transparency and fairness emerge as key barriers. Participants expressed a strong preference for hybrid AI-human hiring models, emphasizing the importance of explainability and human oversight. The study refines the AIDUA model in the recruitment context and offers practical recommendations for organizations seeking to implement AI ethically and effectively in selection processes.
Digital surveillance technologies using artificial intelligence (AI) tools such as computer vision and facial recognition are becoming cheaper and easier to integrate into governance practices worldwide. Morocco serves as an example of how such technologies are becoming key tools of governance in authoritarian contexts. Based on qualitative fieldwork including semi-structured interviews, observation, and extensive desk reviews, this chapter focuses on the role played by AI-enhanced technology in urban surveillance and the control of migration across the Moroccan–Spanish border. Two cross-cutting issues emerge: first, while international donors provide funding for urban and border surveillance projects, their role in enforcing transparency mechanisms in their implementation remains limited; second, Morocco’s existing legal framework hinders any kind of public oversight. Video surveillance is treated as the sole prerogative of the security apparatus, and so far public actors have avoided engaging directly with the topic. The lack of institutional oversight and public debate on the matter raises serious concerns about the extent to which the deployment of such technologies affects citizens’ rights. AI-enhanced surveillance is thus an intrinsically transnational challenge in which private interests of economic gain and public interests of national security collide with citizens’ human rights across the Global North/Global South divide.
This paper presents a comprehensive study on assisting new AI programmers in making responsible choices while programming. The research focused on developing a process model, incorporating design patterns, and utilizing an IDE-based extension to promote Responsible Artificial Intelligence (AI) practices. The experiment evaluated the effectiveness of the process model and extension, specifically examining their impact on programmers' ability to make responsible choices during AI development. The results revealed that using the process model and extension significantly enhanced the programmers' understanding of Responsible AI principles and their ability to apply them in code development. These findings support existing literature highlighting the positive influence of process models and patterns on code development capabilities. The research further confirmed the importance of incorporating Responsible AI values: prompting programmers with questions related to these values led to more responsible AI practices. Furthermore, the study contributes to bridging the gap between theoretical knowledge and practical application by placing Responsible AI values at the centre of the process model. In doing so, the research not only addresses a gap in the existing literature but also supports the practical implementation of Responsible AI principles.