As AI systems become increasingly prevalent in our daily lives and work, it is essential to contemplate their social role and how they interact with us. While functionality, and increasingly also explainability and trustworthiness, are often the primary focus in designing AI systems, little consideration is given to their social role and its effects on human-AI interactions. In this paper, we advocate for paying attention to social roles in AI design. We focus on an AI healthcare application and present three possible social roles of the AI system within it to explore the relationship between the AI system and the user and its implications for designers and practitioners. Our findings emphasise the need to think beyond functionality and highlight the importance of considering the social role of AI systems in shaping meaningful human-AI interactions.
DOCUMENT
Digital surveillance technologies using artificial intelligence (AI) tools such as computer vision and facial recognition are becoming cheaper and easier to integrate into governance practices worldwide. Morocco serves as an example of how such technologies are becoming key tools of governance in authoritarian contexts. Based on qualitative fieldwork including semi-structured interviews, observation, and extensive desk reviews, this chapter focuses on the role played by AI-enhanced technology in urban surveillance and the control of migration at the Moroccan–Spanish border. Two cross-cutting issues emerge: first, while international donors provide funding for urban and border surveillance projects, their role in enforcing transparency mechanisms in their implementation remains limited; second, Morocco’s existing legal framework hinders any kind of public oversight. Video surveillance is treated as the sole prerogative of the security apparatus, and so far public actors have avoided engaging directly with the topic. The lack of institutional oversight and public debate on the matter raises serious concerns about the extent to which the deployment of such technologies affects citizens’ rights. AI-enhanced surveillance is thus an intrinsically transnational challenge in which private interests of economic gain and public interests of national security collide with citizens’ human rights across the Global North/Global South divide.
MULTIFILE
This research investigates the potential and challenges of using artificial intelligence, specifically the ChatGPT-4 model developed by OpenAI, in grading and providing feedback in an educational setting. By comparing the grading of a human lecturer and ChatGPT-4 in an experiment with 105 students, our study found a strong positive correlation between the scores given by both, despite some mismatches. In addition, we observed that ChatGPT-4's feedback was effectively personalized and understandable for students, contributing to their learning experience. While our findings suggest that AI technologies like ChatGPT-4 can significantly speed up the grading process and enhance feedback provision, the implementation of these systems should be thoughtfully considered. With further research and development, AI can potentially become a valuable tool to support teaching and learning in education. https://saiconference.com/FICC
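The correlation analysis described above can be illustrated with a minimal sketch. The scores below are hypothetical grades invented for illustration, not data from the study, and the helper `pearson_r` is written from first principles rather than taken from the authors' analysis pipeline:

```python
# Illustrative sketch: comparing human- and AI-assigned scores with
# Pearson's r, computed from first principles (no external libraries).
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical grades on a 10-point scale for five students.
human = [7.0, 8.5, 6.0, 9.0, 5.5]
ai    = [7.5, 8.0, 6.5, 9.5, 5.0]
print(round(pearson_r(human, ai), 3))  # → 0.946
```

A value close to 1, as in this made-up example, is the kind of "strong positive correlation" the study reports; individual mismatches can still occur even when the overall correlation is high.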
DOCUMENT
With artificial intelligence (AI) systems entering our working and leisure environments with increasing adaptation and learning capabilities, new opportunities arise for developing hybrid (human-AI) intelligence (HI) systems, enabling new ways of collaborating. However, there is not yet a structured way of specifying design solutions of collaboration for HI systems, and there is a lack of best practices shared across application domains. We address this gap by investigating the generalization of specific design solutions into design patterns that can be shared and applied in different contexts. We present a human-centered bottom-up approach for the specification of design solutions and their abstraction into team design patterns. We apply the proposed approach to four concrete HI use cases and show the successful extraction of team design patterns that are generalizable, providing re-usable design components across various domains. This work advances previous research on team design patterns and designing applications of HI systems.
MULTIFILE
Artificial intelligence (AI) is a technology which is increasingly being utilised in society and the economy worldwide, but there is much disquiet over problematic and dangerous implementations of AI, or indeed even AI itself deciding to take dangerous and problematic actions. These developments have led to concerns about whether and how AI systems currently adhere to and will adhere to ethical standards, stimulating a global and multistakeholder conversation on AI ethics and the production of AI governance initiatives. Such developments form the basis for this chapter, where we give an insight into what is happening in Australia, China, the European Union, India and the United States. We commence with some background to the AI ethics and regulation debates, before proceeding to give an overview of what is happening in different countries and regions, namely Australia, China, the European Union (including national level activities in Germany), India and the United States. We provide an analysis of these country profiles, with particular emphasis on the relationship between ethics and law in each location. Overall, we find that AI governance and ethics initiatives are most developed in China and the European Union, but the United States has been catching up in the last eighteen months.
DOCUMENT
This article explores the decision-making processes in the ongoing development of an AI-supported youth mental health app. Document analysis reveals decisions taken during the grant proposal and funding phase and reflects upon reasons why AI is incorporated in innovative youth mental health care. An innovative multilogue among the transdisciplinary team of researchers, comprising AI experts, biomedical engineers, ethicists, social scientists, psychiatrists and young experts by experience, points out which decisions are taken and how. This covers i) the role of a biomedical and exposomic understanding of psychiatry as compared to a phenomenological and experiential perspective, ii) the impact and limits of AI co-creation by young experts by experience and mental health experts, and iii) the different perspectives regarding the impact of AI on autonomy, empowerment and human relationships. The multilogue does not merely highlight different steps taken during human decision-making in AI development; it also raises awareness about the many complexities, and sometimes contradictions, when engaging in transdisciplinary work, and it points towards ethical challenges of digitalized youth mental health care.
LINK
Technology has a major impact on the way nurses work. Data-driven technologies, such as artificial intelligence (AI), have particularly strong potential to support nurses in their work. However, their use also introduces ambiguities. An example of such a technology is AI-driven lifestyle monitoring in long-term care for older adults, based on data collected from ambient sensors in an older adult’s home. Designing and implementing this technology in such an intimate setting requires collaboration with nurses experienced in long-term and older adult care. This viewpoint paper emphasizes the need to incorporate nurses and the nursing perspective into every stage of designing, using, and implementing AI-driven lifestyle monitoring in long-term care settings. It is argued that the technology will not replace nurses, but rather act as a new digital colleague, complementing the humane qualities of nurses and seamlessly integrating into nursing workflows. Several advantages of such a collaboration between nurses and technology are highlighted, as are potential risks such as decreased patient empowerment, depersonalization, lack of transparency, and loss of human contact. Finally, practical suggestions are offered to move forward with integrating the digital colleague.
DOCUMENT
The smart city infrastructure will soon start to include smart agents, i.e., agentic things, which co-exist and co-perform with human citizens. This near-future scenario explores the flexible types of collaborations and relationships between the human and nonhuman citizens. Drawing on current technology forecasts and AI/robotics literature, we created five fictional concepts for reflecting on themes we deem important for such collaborations: responsibility, delegation, relationship, priority, and adaptation. The promises, challenges and threats of these themes are discussed in this paper, together with the new questions that were opened up through the use of design fiction as a method.
DOCUMENT
2025 ILC Annual International Conference, 16th & 17th June 2025, Genoa, Italy: Global Collaboration, Local Action for Fundamentals of Care Innovation. See page 81. An international group of experts has joined forces for the further development of Artificial Intelligence (AI) in relation to the Fundamentals of Care (FoC) framework. AI, including subfields such as machine learning and deep learning, offers potential to identify patterns in healthcare data, develop clinical prediction models, and derive insights from large datasets. For example, algorithms can be created to detect the start of the palliative phase based on electronic health records, or to inform nursing decisions based on lifestyle monitoring data for older adults. These AI applications significantly influence nurses' roles, the nurse-client relationship and nurses’ professional identity. Consequently, nurses must take responsibility to ensure that AI applications align with person-centered fundamental care, professional ethics, equity, and social justice. Thus, nursing leadership is essential to lead the development and use of AI applications that support nursing care according to the FoC framework, and enhance patient outcomes. The aim of the current project is to explore nurses’ responsibility for how AI adds value to the FoC framework. Firstly, nurse leaders play a vital role in overseeing the quality and relevance of data collected in daily practice, as these data are foundational for AI algorithms. The elements as articulated in the FoC framework should be the building blocks for any algorithm. These building blocks can be linked to clinical and social conditions, and life stages, building from the basis of the individual's human needs. Secondly, it is crucial for nurses to participate in the interdisciplinary teams that develop AI algorithms.
Their participation and expertise ensure that algorithms are co-created with an understanding of the needs of their clients, maximizing the potential for positive outcomes. In addition to education, policy, and regulation, a nurse-led, interdisciplinary research program is needed to investigate the relationship between AI applications and the FoC framework, and their impact on nurse-client relationships, nurses’ professional identity, and patient outcomes.
DOCUMENT
Both because of the shortcomings of existing risk assessment methodologies and because of newly available tools to predict hazard and risk with machine learning approaches, there has been an emerging emphasis on probabilistic risk assessment. Increasingly sophisticated AI models can be applied to a plethora of exposure and hazard data to obtain not only predictions for particular endpoints but also to estimate the uncertainty of the risk assessment outcome. This provides the basis for a shift from deterministic to more probabilistic approaches but comes at the cost of an increased complexity of the process, as it requires more resources and human expertise. There are still challenges to overcome before a probabilistic paradigm is fully embraced by regulators. Based on an earlier white paper (Maertens et al., 2022), a workshop discussed the prospects, challenges and path forward for implementing such AI-based probabilistic hazard assessment. Moving forward, we will see the transition from categorized into probabilistic and dose-dependent hazard outcomes, the application of internal thresholds of toxicological concern for data-poor substances, the availability of user-friendly open-source software, a rise in the expertise of toxicologists required to understand and interpret artificial intelligence models, and the honest communication of uncertainty in risk assessment to the public.
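The shift from deterministic to probabilistic outcomes described above can be sketched in miniature. The distributions and parameters below are entirely hypothetical, chosen for illustration only; the idea is simply that exposure and the hazard threshold are treated as distributions rather than point values, so the output is a probability of exceedance rather than a yes/no verdict:

```python
# Illustrative sketch (hypothetical parameters, not from the workshop):
# a minimal Monte Carlo probabilistic risk estimate. Exposure and the
# hazard threshold are each sampled from lognormal distributions, and
# "risk" is the fraction of samples where exposure exceeds the threshold.
import random

random.seed(42)  # fixed seed for reproducibility

def prob_exposure_exceeds_hazard(n=100_000,
                                 exp_mu=-1.0, exp_sigma=0.5,
                                 haz_mu=0.5, haz_sigma=0.4):
    """Fraction of sampled (exposure, hazard) pairs with exposure > hazard."""
    exceed = 0
    for _ in range(n):
        exposure = random.lognormvariate(exp_mu, exp_sigma)
        hazard = random.lognormvariate(haz_mu, haz_sigma)
        if exposure > hazard:
            exceed += 1
    return exceed / n

p = prob_exposure_exceeds_hazard()
print(f"estimated probability of exceedance: {p:.3f}")
```

The wide sampling distributions make the uncertainty of the inputs explicit, and the resulting exceedance probability is the kind of quantity that supports honest communication of uncertainty, in contrast to a single deterministic hazard category.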
DOCUMENT