The increasing use of AI in industry and society not only expects but demands that we build human-centred competencies into our AI education programmes. The computing education community needs to adapt, and while the adoption of standalone ethics modules into AI programmes or the inclusion of ethical content in traditional applied AI modules is progressing, it is not enough. To foster student competencies to create AI innovations that respect and support the protection of individual rights and society, a novel ground-up approach is needed. This panel presents one such approach, the development of a Human-Centred AI Masters (HCAIM), as well as the insights and lessons learned from the process. In particular, we discuss the design decisions that have led to the multi-institutional master’s programme. Moreover, this panel allows for discussion of pedagogical and methodological approaches, content knowledge areas and the delivery of such a novel programme, along with the challenges faced, to inform and learn from other educators who are considering developing such programmes.
In the past few years, the EU has shown a growing commitment to address the rapid transformations brought about by the latest Artificial Intelligence (AI) developments by increasing efforts in AI regulation. Nevertheless, despite the growing body of technical knowledge and progress, the governance of AI-intensive technologies remains dynamic and challenging. A mounting chorus of experts expresses reservations about an overemphasis on regulation in Europe. Among their core arguments is the concern that such an approach might hinder innovation within the AI arena. This concern resonates particularly strongly in comparisons with the United States and Asia, where AI-driven innovation appears to be surging ahead, potentially leaving Europe behind. This paper emphasizes the need to balance certification and governance in AI to foster ethical innovation and enhance the reliability and competitiveness of European technology. It explores recent AI regulations and upcoming European laws, underscoring Europe’s role in the global AI landscape. The authors analyze European governance approaches and their impact on SMEs and startups, offering a comparative view of global regulatory efforts. The paper highlights significant global AI developments from the past year, focusing on Europe’s contributions. We address the complexities of creating a comprehensive, human-centred AI master’s programme for higher education. Finally, we discuss how Europe can seize opportunities to promote ethical and reliable AI progress through education, fostering a balanced approach to regulation and enhancing young professionals’ understanding of ethical and legal aspects.
Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there has been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how. This systematic literature review addresses this gap by providing a detailed overview of the current state of affairs in human-centered XAI evaluation. We reviewed 73 papers across various domains where XAI was evaluated with users. These studies assessed what makes an explanation “good” from a user’s perspective, i.e., what makes an explanation meaningful to a user of an AI system. We identified 30 components of meaningful explanations that were evaluated in the reviewed papers and categorized them into a taxonomy of human-centered XAI evaluation, based on: (a) the contextualized quality of the explanation, (b) the contribution of the explanation to human-AI interaction, and (c) the contribution of the explanation to human-AI performance. Our analysis also revealed a lack of standardization in the methodologies applied in XAI user studies, with only 19 of the 73 papers applying an evaluation framework used by at least one other study in the sample. These inconsistencies hinder cross-study comparisons and broader insights. Our findings contribute to understanding what makes explanations meaningful to users and how to measure this, guiding the XAI community toward a more unified approach in human-centered explainability.
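As a rough illustration of how such a taxonomy could be used to code user studies, the sketch below records evaluated explanation components under the three categories identified in the review; the component names, measurement labels and helper function are illustrative assumptions, not the paper’s coding scheme.

```python
from collections import Counter
from dataclasses import dataclass, field
from enum import Enum

class TaxonomyCategory(Enum):
    CONTEXTUALIZED_QUALITY = "contextualized quality of the explanation"
    HUMAN_AI_INTERACTION = "contribution to human-AI interaction"
    HUMAN_AI_PERFORMANCE = "contribution to human-AI performance"

@dataclass
class EvaluatedComponent:
    name: str                    # e.g. "understandability" (illustrative)
    category: TaxonomyCategory
    measured_with: str           # e.g. "Likert questionnaire", "task accuracy"

@dataclass
class XaiUserStudy:
    paper_id: str
    components: list[EvaluatedComponent] = field(default_factory=list)

def category_coverage(studies: list[XaiUserStudy]) -> Counter:
    """Count how often each taxonomy category is evaluated across studies."""
    return Counter(c.category for s in studies for c in s.components)

studies = [XaiUserStudy("P01", [EvaluatedComponent(
    "understandability", TaxonomyCategory.CONTEXTUALIZED_QUALITY, "Likert questionnaire")])]
print(category_coverage(studies))
```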
Artificial intelligence (AI) is a technology which is increasingly being utilised in society and the economy worldwide, but there is much disquiet over problematic and dangerous implementations of AI, or indeed AI itself deciding to take dangerous and problematic actions. These developments have led to concerns about whether and how AI systems currently adhere to and will adhere to ethical standards, stimulating a global and multistakeholder conversation on AI ethics and the production of AI governance initiatives. Such developments form the basis for this chapter, where we give an insight into what is happening in Australia, China, the European Union, India and the United States. We commence with some background to the AI ethics and regulation debates, before proceeding to give an overview of what is happening in different countries and regions, namely Australia, China, the European Union (including national level activities in Germany), India and the United States. We provide an analysis of these country profiles, with particular emphasis on the relationship between ethics and law in each location. Overall, we find that AI governance and ethics initiatives are most developed in China and the European Union, but the United States has been catching up in the last eighteen months.
From the article: The ethics guidelines put forward by the AI High Level Expert Group (AI-HLEG) present a list of seven key requirements that human-centered, trustworthy AI systems should meet. These guidelines are useful for the evaluation of AI systems, but can be complemented by applied methods and tools for the development of trustworthy AI systems in practice. In this position paper we propose a framework for translating the AI-HLEG ethics guidelines into the specific context within which an AI system operates. This approach aligns well with a set of Agile principles commonly employed in software engineering. http://ceur-ws.org/Vol-2659/
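By way of illustration only, the sketch below shows how the seven AI-HLEG requirements might be carried into an Agile workflow as recurring review questions; the question wording and the sprint-review framing are assumptions, not the framework proposed in the paper.

```python
# The seven key requirements from the AI-HLEG Ethics Guidelines for Trustworthy AI.
AI_HLEG_REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

def sprint_review_questions(system_context: str) -> list[str]:
    """Turn each requirement into a context-specific review question (illustrative wording)."""
    return [
        f"For the {system_context} system: how does this sprint's increment address '{req}'?"
        for req in AI_HLEG_REQUIREMENTS
    ]

for question in sprint_review_questions("loan-approval"):
    print(question)
```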
Digital surveillance technologies using artificial intelligence (AI) tools such as computer vision and facial recognition are becoming cheaper and easier to integrate into governance practices worldwide. Morocco serves as an example of how such technologies are becoming key tools of governance in authoritarian contexts. Based on qualitative fieldwork including semi-structured interviews, observation, and extensive desk reviews, this chapter focusses on the role played by AI-enhanced technology in urban surveillance and the control of migration between the Moroccan–Spanish borders. Two cross-cutting issues emerge: first, while international donors provide funding for urban and border surveillance projects, their role in enforcing transparency mechanisms in their implementation remains limited; second, Morocco’s existing legal framework hinders any kind of public oversight. Video surveillance is treated as the sole prerogative of the security apparatus, and so far public actors have avoided engaging directly with the topic. The lack of institutional oversight and public debate on the matter raises serious concerns about the extent to which the deployment of such technologies affects citizens’ rights. AI-enhanced surveillance is thus an intrinsically transnational challenge in which private interests of economic gain and public interests of national security collide with citizens’ human rights across the Global North/Global South divide.
An extensive inventory of 137 Dutch SMEs concerning the most important considerations in the use of emerging digital technologies shows that the selection process is difficult. Entrepreneurs wonder which AI application suits them best, what its added (innovative) value is, and how they can implement it. This outcome is a clear signal from SMEs to researchers in knowledge institutions and to developers of AI services and applications: Help! Which AI should I choose? With a consortium of students, researchers, and SMEs, we are creating an approach that will help SMEs make the most suitable AI choice. The project develops a data-driven advisory tool that helps SMEs choose, develop, implement and use AI applications focusing on four highly ranked topics.
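A deliberately simplified, hypothetical sketch of the kind of scoring step such an advisory tool might perform: candidate AI applications are ranked against an SME’s weighted priorities. The criteria, weights and candidate profiles are made-up placeholders, not the project’s actual model or topics.

```python
def rank_applications(candidates: dict[str, dict[str, float]],
                      weights: dict[str, float]) -> list[tuple[str, float]]:
    """Rank candidate AI applications by weighted fit (higher score = better fit)."""
    scores = {
        name: sum(weights.get(criterion, 0.0) * value
                  for criterion, value in profile.items())
        for name, profile in candidates.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Illustrative criterion scores between 0 and 1 for two hypothetical candidates.
candidates = {
    "demand forecasting": {"added value": 0.8, "ease of implementation": 0.4, "data readiness": 0.6},
    "customer-service chatbot": {"added value": 0.5, "ease of implementation": 0.7, "data readiness": 0.8},
}
weights = {"added value": 0.5, "ease of implementation": 0.2, "data readiness": 0.3}

print(rank_applications(candidates, weights))
```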
This guide was developed for designers and developers of AI systems, with the goal of ensuring that these systems are sufficiently explainable. Sufficient here means that the system meets the legal requirements from the AI Act and the GDPR and that users can use it properly. Explainability of decisions is an important requirement in many systems and even an important principle for AI systems [HLEG19]. In many AI systems, explainability is not self-evident. AI researchers expect that the challenge of making AI explainable will only increase. On the one hand, this comes from the applications: AI will be used more and more often, for larger and more sensitive decisions. On the other hand, organizations are building ever better models, for example by using more and more varied inputs. With more complex AI models, it is often less clear how a decision was made. Organizations that deploy AI must take into account users' need for explanations. Systems that use AI should be designed to provide the user with appropriate explanations. In this guide, we first explain the legal requirements for explainability of AI systems. These come from the GDPR and the AI Act. Next, we explain how AI is used in the financial sector and elaborate on one problem in detail. For this problem, we then show how the user interface can be modified to make the AI explainable. These designs serve as prototypical examples that can be adapted to new problems. This guide is based on the explainability of AI systems in the financial sector. However, the advice can also be used in other sectors.
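As a sketch of the kind of user-facing explanation such a design might generate, the snippet below turns the per-feature contributions of a toy linear credit-scoring model into a short textual explanation. The model, feature names, threshold and wording are illustrative assumptions and are not taken from the guide’s actual designs.

```python
# Toy linear credit model: explain a decision by listing the features that
# moved the score up or down the most relative to a baseline applicant.
FEATURE_WEIGHTS = {"income": 0.6, "years_as_customer": 0.2, "open_debts": -0.7}
BASELINE = {"income": 40_000, "years_as_customer": 3, "open_debts": 5_000}
THRESHOLD = 0.0

def explain_decision(applicant: dict[str, float]) -> str:
    contributions = {
        feature: FEATURE_WEIGHTS[feature] * (applicant[feature] - BASELINE[feature]) / BASELINE[feature]
        for feature in FEATURE_WEIGHTS
    }
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "declined"
    ranked = sorted(contributions.items(), key=lambda item: abs(item[1]), reverse=True)
    reasons = "; ".join(
        f"{name.replace('_', ' ')} {'raised' if value >= 0 else 'lowered'} your score"
        for name, value in ranked[:2]
    )
    return f"Your application was {decision}. Main factors: {reasons}."

print(explain_decision({"income": 32_000, "years_as_customer": 1, "open_debts": 9_000}))
```

In a real system the contributions would come from the deployed model, and the wording of the explanation would be tested with users before release.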
This white paper is the result of a research project by Hogeschool Utrecht, Floryn, Researchable, and De Volksbank in the period November 2021-November 2022. The research project was a KIEM project granted by the Taskforce for Applied Research SIA. The goal of the research project was to identify the aspects that play a role in the implementation of the explainability of artificial intelligence (AI) systems in the Dutch financial sector. In this white paper, we present a checklist of the aspects that we derived from this research. The checklist contains checkpoints and related questions that need consideration to make explainability-related choices in different stages of the AI lifecycle. The goal of the checklist is to give designers and developers of AI systems a tool to ensure the AI system will give proper and meaningful explanations to each stakeholder.
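A minimal sketch, assuming a structure in which checkpoints are grouped by AI-lifecycle stage and each checkpoint records the stakeholders who need the resulting explanation; the stage names, questions and stakeholders shown are placeholders, not the checklist items from the white paper.

```python
from dataclasses import dataclass

@dataclass
class Checkpoint:
    stage: str               # e.g. "design", "development", "deployment"
    question: str            # the consideration to be addressed
    stakeholders: list[str]  # who needs the resulting explanation

# Placeholder entries; the white paper's actual checkpoints differ.
CHECKLIST = [
    Checkpoint("design", "Which decisions of the system require an explanation, and why?",
               ["client", "compliance officer"]),
    Checkpoint("development", "Does the chosen model family support the required level of explanation?",
               ["developer", "model validator"]),
    Checkpoint("deployment", "Is the explanation phrased in language the end user can act on?",
               ["client", "customer support"]),
]

def open_questions(stage: str) -> list[str]:
    """Return the checkpoint questions that apply to a given lifecycle stage."""
    return [c.question for c in CHECKLIST if c.stage == stage]

print(open_questions("design"))
```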
Background: During the process of decision-making for long-term care, clients are often dependent on informal support and available information about quality ratings of care services. However, clients do not take ratings into account when considering preferred care, and need assistance to understand their preferences. A tool to elicit preferences for long-term care could be beneficial. Therefore, the aim of this qualitative descriptive study is to understand the user requirements and develop a web-based preference elicitation tool for clients in need of long-term care. Methods: We applied a user-centred design in which end-users influence the development of the tool. The included end-users were clients, relatives, and healthcare professionals. Data collection took place between November 2017 and March 2018 by means of meetings with the development team consisting of four users, walkthrough interviews with 21 individual users, video-audio recordings, field notes, and observations during the use of the tool. Data were collected during three phases of iteration: Look and feel, Navigation, and Content. A deductive and inductive content analysis approach was used for data analysis. Results: The layout was considered accessible and easy during the Look and feel phase, and users asked for neutral images. Users found navigation easy, and expressed the need for concise and shorter text blocks. Users reached consensus about the categories of preferences, wished to adjust the content with propositions about well-being, and discussed linguistic difficulties. Conclusion: By incorporating the requirements of end-users, the user-centred design proved to be useful in progressing from the prototype to the finalized tool ‘What matters to me’. This tool may assist the elicitation of clients’ preferences in their search for long-term care.