Artificial intelligence (AI) is increasingly used across society and the economy worldwide, but there is considerable disquiet over problematic and dangerous applications of AI, and over the prospect of AI systems themselves taking dangerous or problematic actions. These developments have raised concerns about whether and how AI systems currently adhere, and will adhere, to ethical standards, stimulating a global, multistakeholder conversation on AI ethics and the production of AI governance initiatives. Such developments form the basis for this chapter, which examines what is happening in Australia, China, the European Union (including national-level activities in Germany), India and the United States. We begin with some background to the AI ethics and regulation debates before giving an overview of developments in each of these countries and regions. We then analyse these country profiles, with particular emphasis on the relationship between ethics and law in each location. Overall, we find that AI governance and ethics initiatives are most developed in China and the European Union, although the United States has been catching up over the last eighteen months.
From the article: The ethics guidelines put forward by the AI High-Level Expert Group (AI-HLEG) present a list of seven key requirements that human-centred, trustworthy AI systems should meet. These guidelines are useful for evaluating AI systems, but they can be complemented by applied methods and tools for developing trustworthy AI systems in practice. In this position paper we propose a framework for translating the AI-HLEG ethics guidelines into the specific context within which an AI system operates. This approach aligns well with a set of Agile principles commonly employed in software engineering. http://ceur-ws.org/Vol-2659/
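To make the paper's core idea concrete, here is a minimal sketch of how the seven AI-HLEG requirements could be treated as backlog items to be refined per system context, in the Agile spirit the abstract mentions. The seven requirement names follow the AI-HLEG guidelines themselves; the refinement fields and the example system name are illustrative assumptions, not the paper's actual framework.

```python
# Sketch: treating each AI-HLEG requirement as a backlog item to be
# contextualized for a specific AI system during sprint refinement.
# Requirement names are from the AI-HLEG guidelines; the work-item
# fields below are hypothetical.
AI_HLEG_REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

def contextualize(requirement: str, system: str) -> dict:
    """Turn an abstract requirement into a concrete, reviewable work item."""
    return {
        "requirement": requirement,
        "system": system,
        "acceptance_criteria": [],  # filled in during refinement sessions
        "status": "to do",
    }

# Hypothetical target system for illustration.
backlog = [contextualize(r, "loan-approval model") for r in AI_HLEG_REQUIREMENTS]
print(f"{len(backlog)} trustworthiness items added to the backlog")
```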
Artificial Intelligence (AI) is increasingly used in the media industry, for instance for the automatic creation, personalization, and distribution of media content. This development raises concerns in society and within the media sector itself about the responsible use of AI. This study examines how different stakeholders in media organizations perceive ethical issues in their work concerning AI development and application, and how they interpret these issues and put them into practice. We conducted an empirical study consisting of 14 semi-structured qualitative interviews with different stakeholders in public and private media organizations, and mapped the results of the interviews onto stakeholder journeys to specify how AI applications are initiated, designed, developed, and deployed in the different media organizations. This yields insights into the current situation and the challenges regarding responsible AI practices in media organizations.
The increasing use of AI in industry and society demands that we build human-centred competencies into our AI education programmes. The computing education community needs to adapt: while the adoption of standalone ethics modules in AI programmes and the inclusion of ethical content in traditional applied AI modules are progressing, this is not enough. To foster students' competencies to create AI innovations that respect and support the protection of individual rights and society, a novel ground-up approach is needed. This panel presents one such approach, the development of a Human-Centred AI Masters (HCAIM), along with insights and lessons learned from the process. In particular, we discuss the design decisions that led to this multi-institutional master's programme. The panel also allows for discussion of pedagogical and methodological approaches, content knowledge areas and the delivery of such a novel programme, along with the challenges faced, to inform and learn from other educators who are considering developing similar programmes.
The healthcare sector increasingly faces challenges resulting from growing demand (driven, among other things, by an ageing population and the complexity of care) and a shrinking supply of care providers (due in part to staff shortages). Artificial intelligence (AI) is seen as a possible solution, but is often approached from a technological perspective. This article takes a human-centred approach and studies how healthcare workers experience working with AI. This matters because they are the ones who ultimately have to work with these applications to meet the challenges in healthcare. Based on 21 semi-structured interviews with healthcare workers who have used AI, we describe their experiences of working with AI. Using the AMO framework, which stands for abilities, motivation and opportunities, we show that AI has an impact on the work of healthcare workers. Using AI requires new competencies and the conviction that AI can improve care, and there is a need for sufficient availability of training and support. Finally, we discuss the implications for theory and give recommendations for HR professionals.
Recent years have seen massive growth in ethical and legal frameworks to govern data science practices. Yet a core question for such frameworks is the extent to which they are implemented in practice. A particularly interesting case in this context concerns public officials, for whom higher standards typically apply. We therefore seek to understand how ethical and legal frameworks influence the everyday data and algorithm practices of public sector data professionals. This paper looks at two cases: public sector data professionals (1) at municipalities in the Netherlands and (2) at the Netherlands Police. We compare these two cases using an analytical research framework, developed in this article, that helps in understanding everyday professional practices. We conclude that there is a wide gap between legal and ethical governance rules and everyday practices.
Whitepaper: The use of AI is on the rise in the financial sector. Using machine learning algorithms to make decisions and predictions based on the available data can be highly valuable. AI offers benefits to both financial service providers and their customers by improving service and reducing costs. Examples of AI use cases in the financial sector include identity verification in client onboarding, transaction data analysis, fraud detection in claims management, anti-money laundering monitoring, price differentiation in car insurance, automated analysis of legal documents, and the processing of loan applications.
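As an illustration of one use case listed above, here is a minimal sketch of anomaly-based fraud screening on transaction data. It is not drawn from the whitepaper: it assumes scikit-learn and NumPy, and the two transaction features and all thresholds are hypothetical.

```python
# Minimal sketch: unsupervised fraud screening on transaction data.
# Features and parameters are illustrative, not from the whitepaper.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic transactions: [amount_eur, seconds_since_last_tx] (hypothetical features)
normal = rng.normal(loc=[50.0, 3600.0], scale=[20.0, 600.0], size=(500, 2))
fraud = rng.normal(loc=[900.0, 30.0], scale=[100.0, 10.0], size=(5, 2))
transactions = np.vstack([normal, fraud])

# Isolation Forest flags transactions that deviate from the bulk of the data.
model = IsolationForest(contamination=0.01, random_state=42)
labels = model.fit_predict(transactions)  # -1 = flagged as anomalous

flagged = transactions[labels == -1]
print(f"Flagged {len(flagged)} of {len(transactions)} transactions for review")
```

In practice such a model would only shortlist transactions for human review, in line with the whitepaper's emphasis on responsible use rather than fully automated decisions.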
This white paper is the result of a research project by Hogeschool Utrecht, Floryn, Researchable, and De Volksbank in the period November 2021 to November 2022. The research project was a KIEM project granted by the Taskforce for Applied Research SIA. Its goal was to identify the aspects that play a role in implementing the explainability of artificial intelligence (AI) systems in the Dutch financial sector. In this white paper, we present a checklist of the aspects derived from this research. The checklist contains checkpoints and related questions that need consideration when making explainability-related choices in different stages of the AI lifecycle. The goal of the checklist is to give designers and developers of AI systems a tool to ensure the AI system will give proper and meaningful explanations to each stakeholder.
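To show how a lifecycle checklist of this kind could be operationalised, here is a minimal sketch in plain Python. The stage names, example questions, and stakeholder labels are illustrative assumptions, not the project's actual checklist.

```python
# Sketch: representing an explainability checklist per AI lifecycle stage.
# All stages, questions, and stakeholders below are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class Checkpoint:
    question: str
    stakeholders: list[str]
    answered: bool = False

@dataclass
class LifecycleStage:
    name: str
    checkpoints: list[Checkpoint] = field(default_factory=list)

checklist = [
    LifecycleStage("design", [
        Checkpoint("Which stakeholders need an explanation of model outputs?",
                   ["customer", "compliance officer"]),
    ]),
    LifecycleStage("development", [
        Checkpoint("Does the chosen model family support the required explanation type?",
                   ["developer"]),
    ]),
    LifecycleStage("deployment", [
        Checkpoint("Are explanations logged alongside automated decisions?",
                   ["auditor"]),
    ]),
]

for stage in checklist:
    open_items = [c for c in stage.checkpoints if not c.answered]
    print(f"{stage.name}: {len(open_items)} open checkpoint(s)")
```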
Jo-An Kamp is a lecturer and researcher at Fontys University of Applied Sciences in the Netherlands. She coaches ICT students in the fields of UX, research, (interactive) media, communication, (interaction) design, ethics and innovation. She does research on the impact of technology on humans and society. Jo-An is co-creator of the Technology Impact Cycle Toolkit (www.tict.io), a toolkit designed to make people think and make better decisions about (the implementation of) technology and is a member of the Moral Design Strategy research group.
This study provides a comprehensive analysis of the AI-related skills and roles needed to bridge the AI skills gap in Europe. Using a mixed-method research approach, this study investigated the most in-demand AI expertise areas and roles by surveying 409 organizations in Europe, analyzing 2,563 AI-related job advertisements, and conducting 24 focus group sessions with 145 industry and policy experts. The findings underscore the importance of general technical AI skills related to big data, machine learning and deep learning, cyber and data security, and large language models, as well as AI soft skills such as problem-solving and effective communication. This study sets the foundation for future research directions, emphasizing the importance of upskilling initiatives and the evolving nature of AI skills demand, contributing to an EU-wide strategy for future AI skills development.