Poster for the EuSoMII Annual Meeting in Pisa, Italy, in October 2023. PURPOSE & LEARNING OBJECTIVE Artificial Intelligence (AI) technologies are gaining popularity for their ability to autonomously perform tasks and mimic human reasoning [1, 2]. Within the medical industry in particular, the implementation of AI solutions is accelerating [3]. However, the field of radiology has not yet been transformed by the promised value of AI, as knowledge on the effective use and implementation of AI lags behind for a number of reasons: 1) reactive/passive modes of learning are dominant; 2) existing developments are fragmented; 3) expertise is lacking and perspectives differ; 4) an effective learning space is missing. Learning communities can help overcome these problems and address the complexities that come with human-technology configurations [4]. As the impact of a technology depends on its social management and implementation processes [5], our research question becomes: how do we design, configure, and manage a Learning Community to maximize the impact of AI solutions in medicine?
DOCUMENT
Today, the debate about AI is running high: what does AI mean for different professions? Which competencies may soon no longer be relevant, and which all the more so? And what does AI mean for education? High time, then, for education to pay attention to strengthening AI literacy: the competencies needed to critically evaluate AI technologies and to communicate and collaborate with them effectively, both at home and in the workplace, so that students are ready for a world full of AI. You will find answers to these and other questions in this publication by the Teaching, Learning & Technology research group, bringing you up to speed on AI literacy in seven minutes. # AI-geletterdheid #teachinglearningandtechnology #inholland
DOCUMENT
Artificial intelligence (AI) integration in Unmanned Aerial Vehicle (UAV) operations has significantly advanced the field through increased autonomy. However, evaluating the critical aspects of these operations remains a challenge. To address this, the current study proposes combining the Observe-Orient-Decide-Act (OODA) loop with the Analytic Hierarchy Process (AHP) for evaluating AI-UAV systems. Integrating the OODA loop into AHP aims to assess and weigh the critical components of AI-UAV operations: (i) perception, (ii) decision-making, and (iii) adaptation. The research compares the results of the AHP evaluation between different groups of UAV operators. The findings identify areas for improvement in AI-UAV systems and guide the development of new technologies. In conclusion, this combined approach offers a comprehensive evaluation method for the current and future state of AI-UAV operations, focusing on operator group comparison.
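The weighing step that AHP contributes can be sketched concretely. The snippet below is a minimal illustration, not the study's actual instrument: it derives priority weights for the three OODA-derived components (perception, decision-making, adaptation) from a single hypothetical pairwise-comparison matrix using the common geometric-mean approximation, and checks Saaty's consistency ratio. The judgment values are invented for illustration.

```python
import numpy as np

def ahp_weights(pairwise):
    """Approximate AHP priority weights via the geometric-mean (row) method."""
    pairwise = np.asarray(pairwise, dtype=float)
    gm = pairwise.prod(axis=1) ** (1.0 / pairwise.shape[0])
    return gm / gm.sum()

def consistency_ratio(pairwise, weights, ri=0.58):
    """Saaty consistency ratio; ri=0.58 is the random index for n=3 criteria."""
    pairwise = np.asarray(pairwise, dtype=float)
    n = pairwise.shape[0]
    lambda_max = (pairwise @ weights / weights).mean()
    ci = (lambda_max - n) / (n - 1)
    return ci / ri

# Hypothetical judgments from one operator group: perception rated 3x as
# important as decision-making and 5x as important as adaptation;
# decision-making rated 2x as important as adaptation.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = ahp_weights(A)          # weights for (perception, decision-making, adaptation)
cr = consistency_ratio(A, w)  # should be < 0.1 for acceptably consistent judgments
```

Comparing the weight vectors produced by different operator groups is then a matter of repeating this computation per group and contrasting the resulting priorities.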
DOCUMENT
Fire fighters operate in a dangerous, dynamic, and complex environment. Artificial Intelligence (AI) systems can help improve fire fighters' situation awareness and decision-making. However, AI systems need to be introduced responsibly, taking (human) values into account, especially as the situations in which fire fighters operate are uncertain and their decisions have a large impact. In this research, we investigate the values affected by the introduction of AI systems for fire services by conducting several semi-structured focus group sessions with (operational) fire service personnel. The focus group outcomes are qualitatively analyzed, and key values are identified and discussed. This research is a first step in an iterative process towards a generic framework of ethical aspects for the introduction of AI systems in first response, which will give insight into the relevant ethical aspects to take into account when developing AI systems for first responders.
MULTIFILE
The rise of ChatGPT shows how AI intervenes in our daily lives and in education. But AI is more than ChatGPT: from search engines to the facial recognition in your phone, data and algorithms are changing the lives of our students and their future professional field. What does this mean for the degree programmes at the universities of applied sciences (HBO) where we work? For the inspiration session 'The societal impact of AI' during the HU Onderwijsfestival 2023, we invited our colleagues to think along with us about recent AI developments. We looked not only at the technology, but above all at its societal impact and at the opportunities and threats AI poses to an open, just, and sustainable society. We held this conversation with our colleagues (both lecturers and support staff) on the basis of three cases. The results and insights collected in these conversations were brought together on a poster specially developed for the workshop (see figure 1). We have bundled these insights, and they can be read below.
DOCUMENT
This guide was developed for designers and developers of AI systems, with the goal of ensuring that these systems are sufficiently explainable. Sufficient here means that the system meets the legal requirements of the AI Act and the GDPR and that users can use it properly. Explainability of decisions is an important requirement in many systems and even an important principle for AI systems [HLEG19]. In many AI systems, explainability is not self-evident, and AI researchers expect that the challenge of making AI explainable will only increase. On the one hand, this comes from the applications: AI will be used more and more often, for larger and more sensitive decisions. On the other hand, organizations are building ever better models, for example by using more different inputs. With more complex AI models, it is often less clear how a decision was made. Organizations that deploy AI must take into account users' need for explanations, and systems that use AI should be designed to provide the user with appropriate explanations. In this guide, we first explain the legal requirements for explainability of AI systems, which come from the GDPR and the AI Act. Next, we explain how AI is used in the financial sector and elaborate on one problem in detail. For this problem, we then show how the user interface can be modified to make the AI explainable. These designs serve as prototypical examples that can be adapted to new problems. This guidance is based on explainability of AI systems for the financial sector, but the advice can also be used in other sectors.
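One common building block for user-facing explanations of the kind the guide discusses is a model-agnostic feature ranking. The sketch below is purely illustrative and not the guide's own method: it trains a toy loan-approval classifier on invented data (the feature names "income", "debt_ratio", and "years_employed" are hypothetical) and uses scikit-learn's permutation importance to rank which inputs drive the decisions, a ranking a user interface could then phrase in plain language.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "years_employed"]

# Toy data: the outcome depends mostly on income and debt_ratio.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranking = sorted(zip(features, result.importances_mean),
                 key=lambda t: -t[1])
```

The resulting ranking is only the raw material for an explanation; as the guide stresses, the user interface must still translate it into something meaningful for each stakeholder.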
DOCUMENT
The healthcare sector increasingly faces challenges resulting from growing demand (due to, among other things, an ageing population and the complexity of care) and a shrinking supply of care providers (due to, among other things, staff shortages). Artificial Intelligence (AI) is seen as a possible solution, but is often approached from a technological perspective. This article takes a human-centred approach and studies how healthcare workers experience working with AI. This is important because they are the ones who ultimately have to work with these applications to meet the challenges in healthcare. Based on 21 semi-structured interviews with healthcare workers who have used AI, we describe their work experiences with AI. Using the AMO framework, which stands for abilities, motivation, and opportunities, we show that AI has an impact on the work of healthcare workers. The use of AI requires new competencies and the conviction that AI can improve care, and there is a need for sufficient availability of training and support. Finally, we discuss the implications for theory and provide recommendations for HR professionals.
MULTIFILE
Artificial intelligence (AI) is a technology which is increasingly being utilised in society and the economy worldwide, but there is much disquiet over problematic and dangerous implementations of AI, or indeed AI systems themselves taking dangerous and problematic actions. These developments have led to concerns about whether and how AI systems currently adhere to and will adhere to ethical standards, stimulating a global and multistakeholder conversation on AI ethics and the production of AI governance initiatives. Such developments form the basis for this chapter, where we give an insight into what is happening in Australia, China, the European Union, India and the United States. We commence with some background to the AI ethics and regulation debates, before proceeding to give an overview of what is happening in these countries and regions, including national-level activities in Germany. We provide an analysis of these country profiles, with particular emphasis on the relationship between ethics and law in each location. Overall, we find that AI governance and ethics initiatives are most developed in China and the European Union, but the United States has been catching up in the last eighteen months.
DOCUMENT
The growing prevalence of AI systems in society has also prompted a growth of AI systems in the public sector. There are, however, ethical concerns over the impact of AI on society and on public values. Previous work does not connect public values to the development of AI. To address this, a method is required that helps developers and public servants signal possible ethical implications of an AI system and assists them in creating systems that adhere to public values. Using the Research pathway model and Value Sensitive Design, we will develop a toolbox to assist in these challenges and gain insight into how public values can be embedded throughout the development of AI systems.
DOCUMENT
This white paper is the result of a research project by Hogeschool Utrecht, Floryn, Researchable, and De Volksbank in the period November 2021 to November 2022. The research project was a KIEM project granted by the Taskforce for Applied Research SIA. The goal of the research project was to identify the aspects that play a role in implementing the explainability of artificial intelligence (AI) systems in the Dutch financial sector. In this white paper, we present a checklist of the aspects that we derived from this research. The checklist contains checkpoints and related questions that need consideration when making explainability-related choices in different stages of the AI lifecycle. The goal of the checklist is to give designers and developers of AI systems a tool to ensure the AI system will give proper and meaningful explanations to each stakeholder.
MULTIFILE