This paper presents a comprehensive study on assisting new AI programmers in making responsible choices while programming. The research focused on developing a process model, incorporating design patterns, and building an IDE-based extension to promote responsible Artificial Intelligence (AI) practices. The experiment evaluated the effectiveness of the process model and extension, specifically examining their impact on programmers' ability to make responsible choices in AI programming. The results revealed that using the process model and extension significantly enhanced the programmers' understanding of Responsible AI principles and their ability to apply them in code development. These findings support existing literature highlighting the positive influence of process models and patterns on code development capabilities. The research further confirmed the importance of incorporating Responsible AI values: asking relevant questions related to these values led to more responsible AI practices. Furthermore, the study contributes to bridging the gap between theoretical knowledge and practical application by placing Responsible AI values at the centre of the process model. In doing so, the research not only addresses the existing gap in the literature but also supports the practical implementation of Responsible AI principles.
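The abstract above does not detail how the extension works, so the following is purely a hypothetical sketch of the kind of value-prompting an IDE-side tool could provide: asking the programmer a short question per Responsible AI value before a change is accepted. The value list, questions, and function names are illustrative assumptions, not the study's actual extension.

```python
# Hypothetical sketch only: a minimal illustration of prompting a programmer
# with Responsible AI questions before a code change is committed. The values
# and questions below are assumed for illustration.

RESPONSIBLE_AI_QUESTIONS = {
    "fairness": "Could the model's outputs disadvantage any user group?",
    "transparency": "Can you explain to a user how this decision is produced?",
    "privacy": "Does this code process personal data, and is that minimised?",
    "accountability": "Is it clear who is responsible if this component fails?",
}


def review_change(change_description: str) -> list[str]:
    """Ask the programmer each value question and collect unresolved concerns."""
    unresolved = []
    for value, question in RESPONSIBLE_AI_QUESTIONS.items():
        answer = input(f"[{value}] {question} (y/n) ").strip().lower()
        if answer != "y":
            unresolved.append(f"{value}: revisit '{change_description}'")
    return unresolved


if __name__ == "__main__":
    concerns = review_change("add automated loan-approval endpoint")
    for concern in concerns:
        print("TODO:", concern)
```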
While the technical application domain seems to be the most established field for AI applications, the field is only beginning to identify and implement responsible and fair AI applications. Technical, non-user-facing services indirectly model user behavior, as a consequence of which unexpected issues of privacy, fairness, and lack of autonomy may emerge. There is a need for design methods that take the potential impact of AI systems into account.
Many have suggested that AI-based interventions could enhance learning through personalization, by improving teacher effectiveness, or by optimizing educational processes. However, they could also have unintended or unexpected side-effects, such as undermining learning by enabling procrastination, or reducing social interaction by individualizing learning processes. Responsible scientific experiments are required to map both the potential benefits and the side-effects. Current procedures used to screen experiments by research ethics committees do not take into account the specific risks and dilemmas that AI poses. Previous studies identified sixteen conditions that can be used to judge whether trials with experimental technology are responsible. These conditions, however, have not yet been translated into practical procedures, nor do they distinguish between different types of AI applications and risk categories. This paper explores how those conditions could be further specified into procedures that help facilitate and organize responsible experiments with AI, while differentiating between types of AI applications based on their level of automation. The four procedures that we propose are (1) a process of gradual testing, (2) risk and side-effect detection, (3) explainability and severity, and (4) democratic oversight. These procedures can be used by researchers and ethics committees to enable responsible experiments with AI interventions in educational settings. Implementation and compliance will require collaboration between researchers, industry, policy makers, and educational institutions.
Artificial Intelligence (AI) is increasingly used in the media industry, for instance for the automatic creation, personalization, and distribution of media content. This development raises concerns in society and in the media sector itself about the responsible use of AI. This study examines how different stakeholders in media organizations perceive ethical issues in their work concerning AI development and application, and how they interpret these issues and put them into practice. We conducted an empirical study consisting of 14 semi-structured qualitative interviews with different stakeholders in public and private media organizations, and mapped the results of the interviews onto stakeholder journeys to specify how AI applications are initiated, designed, developed, and deployed in the different media organizations. This yields insights into the current situation and the challenges regarding responsible AI practices in media organizations.
Artificial Intelligence systems are increasingly being introduced into first response; however, this introduction needs to be done responsibly. While generic claims about what this entails already exist, more detail is required to understand the exact nature of responsible application of AI within the first response domain. The context in which AI systems are applied largely determines their ethical, legal, and societal impact and how to deal with this impact responsibly. For that reason, we empirically investigate relevant human values that are affected by the introduction of a specific AI-based Decision Aid (AIDA), a decision support system under development for the Fire Services in the Netherlands. We held 10 expert group sessions and discussed the impact of AIDA on different stakeholders. This paper presents the design and implementation of the study and, as we are still in the process of analyzing the sessions in detail, summarizes preliminary insights and steps forward.
Poster for the EuSoMII Annual Meeting in Pisa, Italy, in October 2023. PURPOSE & LEARNING OBJECTIVE Artificial Intelligence (AI) technologies are gaining popularity for their ability to autonomously perform tasks and mimic human reasoning [1, 2]. Especially within the medical industry, the implementation of AI solutions has been picking up pace [3]. However, the field of radiology has not yet been transformed by the promised value of AI, as knowledge on the effective use and implementation of AI lags behind due to a number of causes: 1) reactive/passive modes of learning are dominant, 2) existing developments are fragmented, 3) expertise is lacking and perspectives differ, and 4) an effective learning space is missing. Learning communities can help overcome these problems and address the complexities that come with human-technology configurations [4]. As the impact of a technology depends on its social management and implementation processes [5], our research question becomes: How do we design, configure, and manage a Learning Community to maximize the impact of AI solutions in medicine?
Little attention is paid to precisely defining artificial intelligence (AI). Because different interpretations and definitions of AI exist, it is not entirely clear what an AI can or cannot do. With the word intelligence in its name, and with impressive new systems such as ChatGPT, which came onto the market in 2022, the impression arises that AI has human traits. As a result, AI is used as a co-pilot, as a digital friend, or even as an all-knowing authority. The human or the AI: who actually decides? Or is AI like a colleague you can work well with? Time to describe once more what AI is, how AI is being deployed, and what is needed to make AI add value and be responsible.
This guide was developed for designers and developers of AI systems, with the goal of ensuring that these systems are sufficiently explainable. Sufficient here means that the system meets the legal requirements of the AI Act and the GDPR and that users can use it properly. Explainability of decisions is an important requirement in many systems and even an important principle for AI systems [HLEG19]. In many AI systems, explainability is not self-evident, and AI researchers expect that the challenge of making AI explainable will only increase. On the one hand, this stems from the applications: AI will be used more and more often, for larger and more sensitive decisions. On the other hand, organizations are building ever better models, for example by using more and more diverse inputs. With more complex AI models, it is often less clear how a decision was made. Organizations that deploy AI must take users' need for explanations into account, and systems that use AI should be designed to provide the user with appropriate explanations. In this guide, we first explain the legal requirements for explainability of AI systems, which come from the GDPR and the AI Act. Next, we explain how AI is used in the financial sector and elaborate on one problem in detail. For this problem, we then show how the user interface can be modified to make the AI explainable. These designs serve as prototypical examples that can be adapted to new problems. The guidance is based on explainability of AI systems in the financial sector, but the advice can also be used in other sectors.
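The guide itself presents interface designs rather than code, but as a rough illustration of the underlying idea, the sketch below shows how a transparent model's per-feature contributions could feed a plain-language explanation of a financial decision. The feature names, weights, threshold, and wording are assumptions made for this example, not content from the guide.

```python
# Illustrative sketch, not taken from the guide: a simple linear credit-scoring
# model whose per-feature contributions are turned into a user-facing explanation.
# Feature names, weights, and the decision threshold are assumed.

WEIGHTS = {"income": 0.6, "existing_debt": -0.8, "years_employed": 0.3}
BIAS = -0.2
THRESHOLD = 0.0


def score(applicant: dict) -> float:
    """Linear score: bias plus weighted sum of the applicant's features."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)


def explain(applicant: dict) -> str:
    """Build a short textual explanation from each feature's contribution."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    decision = "approved" if score(applicant) >= THRESHOLD else "rejected"
    # Rank features by how strongly they pushed the decision either way.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    reasons = ", ".join(
        f"{name} ({value:+.2f})" for name, value in ranked
    )
    return f"Application {decision}. Main factors: {reasons}."


if __name__ == "__main__":
    print(explain({"income": 1.2, "existing_debt": 1.5, "years_employed": 0.5}))
```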
Fire fighters operate in a dangerous, dynamic, and complex environment. Artificial Intelligence (AI) systems can contribute to improving fire fighters’ situation awareness and decision-making. However, the introduction of AI systems needs to be done responsibly, taking (human) values into account, especially since the situations in which fire fighters operate are uncertain and decisions have a large impact. In this research, we investigate values that are affected by the introduction of AI systems for fire services by conducting several semi-structured focus group sessions with (operational) fire service personnel. The focus group outcomes are qualitatively analyzed, and key values are identified and discussed. This research is a first step in an iterative process towards a generic framework of ethical aspects for the introduction of AI systems in first response, which will give insight into the relevant ethical aspects to take into account when developing AI systems for first responders.
While the concept of Responsible Innovation is increasingly common among researchers and policy makers, it is still unclear what it means in a business context. This study aims to identify which aspects of Responsible Innovation are conceptually similar to, and which are dissimilar from, social and sustainable innovation. Our conceptual analysis is based on literature reviews of responsible, social, and sustainable innovation. The insights obtained are used for conceptualising Responsible Innovation in a business context. The main conclusion is that Responsible Innovation differs from social and sustainable innovation in that it (1) also considers possible detrimental implications of innovation, (2) includes a mechanism for responding to uncertainties associated with innovation, and (3) achieves democratic governance of the innovation. However, achieving the latter will not be realistic in a business context. The results of this study are relevant for researchers, managers, and policy makers who are interested in responsible innovation in the business context.