In the past few years, the EU has shown a growing commitment to addressing the rapid transformations brought about by the latest developments in Artificial Intelligence (AI) by stepping up its efforts in AI regulation. Nevertheless, despite the growing body of technical knowledge and progress, the governance of AI-intensive technologies remains dynamic and challenging. A mounting chorus of experts expresses reservations about an overemphasis on regulation in Europe, arguing above all that such an approach might hinder innovation in the AI arena. This concern resonates particularly strongly in comparison with the United States and Asia, where AI-driven innovation appears to be surging ahead, potentially leaving Europe behind. This paper emphasizes the need to balance certification and governance in AI to foster ethical innovation and to enhance the reliability and competitiveness of European technology. It explores recent AI regulations and upcoming European laws, underscoring Europe’s role in the global AI landscape. The authors analyze European governance approaches and their impact on SMEs and startups, offering a comparative view of global regulatory efforts. The paper highlights significant global AI developments from the past year, focusing on Europe’s contributions. We address the complexities of creating a comprehensive, human-centred AI master’s programme for higher education. Finally, we discuss how Europe can seize opportunities to promote ethical and reliable AI progress through education, fostering a balanced approach to regulation and enhancing young professionals’ understanding of ethical and legal aspects.
LINK
Artificial intelligence (AI) is a technology that is increasingly being utilised in society and the economy worldwide, but there is much disquiet over problematic and dangerous implementations of AI, or indeed even AI itself deciding to take dangerous and problematic actions. These developments have led to concerns about whether and how AI systems currently adhere, and will in future adhere, to ethical standards, stimulating a global and multistakeholder conversation on AI ethics and the production of AI governance initiatives. Such developments form the basis for this chapter, where we give an insight into what is happening in Australia, China, the European Union, India and the United States. We commence with some background to the AI ethics and regulation debates, before proceeding to give an overview of what is happening in different countries and regions, namely Australia, China, the European Union (including national-level activities in Germany), India and the United States. We provide an analysis of these country profiles, with particular emphasis on the relationship between ethics and law in each location. Overall, we find that AI governance and ethics initiatives are most developed in China and the European Union, but that the United States has been catching up in the last eighteen months.
DOCUMENT
Algorithms that significantly impact individuals and society should be transparent, yet they often function as complex black boxes. Such high-risk AI systems necessitate explainability of their inner workings and decision-making processes, which is also crucial for fostering trust in, understanding of, and adoption of AI. Explainability is a major topic not only in the literature (Maslej et al. 2024) but also in AI regulation. The EU AI Act imposes explainability requirements on providers and deployers of high-risk AI systems. Additionally, it grants individuals affected by high-risk AI systems a right to explanation. However, the legal literature illustrates a lack of clarity and consensus regarding the definition of explainability and the interpretation of the relevant obligations of the AI Act (see e.g. Bibal et al. 2021; Nannini 2024; Sovrano et al. 2022). The practical implementation presents further challenges, calling for an interdisciplinary approach (Gyevnar, Ferguson, and Schafer 2023; Nahar et al. 2024, 2110).

Explainability can be examined from various perspectives. One such perspective is a functional approach, in which explanations serve specific functions (Hacker and Passoth 2022). Taking this functional perspective, my previous work elaborates on the central functions of explanations interwoven in the AI Act. Through comparative research on the evolution of the explainability provisions in soft and hard law on AI from the High-Level Expert Group on AI, the Council of Europe, and the OECD, it establishes that explanations in the AI Act primarily serve to provide understanding of the inner workings and output of an AI system, to enable contestation of a decision, to increase usability, and to achieve legal compliance (Van Beem, ongoing work, paper presented at the BILETA 2025 conference; submission expected June 2025).

Moreover, my previous work reveals that the AI lifecycle is an important concept in AI policy and legal documents. The AI lifecycle comprises the phases that lead to the design, development, and deployment of an AI system (Silva and Alahakoon 2022). The AI Act requires various explanations in each phase. The provider and deployer must observe an explainability-by-design-and-development approach throughout the entire AI lifecycle, adapting explanations as their AI system evolves. In practice, however, striking a balance between clear, meaningful, legally compliant explanations and technical explanations proves challenging.

To assess this practical side, my current research is a case study in the agricultural sector, where AI plays an increasing role and where explainability is a necessary ingredient for adoption (EPRS 2023). The case study aims, first, to map which legal issues AI providers, deployers, and other AI experts in field crop farming encounter. Second, it explores the role of explainability (and the field of eXplainable AI) in overcoming such legal challenges. The study is conducted through further doctrinal research, case law analysis, and empirical research using interviews, integrating the legal and technical perspectives. Aiming to enhance the trustworthiness and adoption of AI in agriculture, this research seeks to contribute to an interdisciplinary debate regarding the practical application of the AI Act's explainability obligations.
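As a purely illustrative aside (not part of the cited work): the sketch below is a minimal example of what an explanation of an AI system's "inner workings and output" could look like in code, assuming a simple linear crop-yield model; the feature names and data are synthetic stand-ins.

```python
# Minimal sketch: per-prediction feature contributions for a linear crop model.
# All data, feature names, and numbers are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
features = ["rainfall_mm", "nitrogen_kg_ha", "soil_ph"]
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)

def explain(x):
    """Return the prediction plus each feature's additive contribution."""
    contributions = dict(zip(features, model.coef_ * x))
    return model.predict(x.reshape(1, -1))[0], contributions

pred, why = explain(X[0])
print(f"predicted yield: {pred:.2f}")
for name, value in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {value:+.2f}")
```

Because each prediction decomposes into additive per-feature contributions, a model of this kind directly supports the contestation function discussed above: an affected person can see which input drove the output.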
DOCUMENT
Artificial intelligence (AI) is increasingly being applied in a wide range of sectors, including agriculture and horticulture. In 2024, the final version of the AI Act, one of the first comprehensive legal frameworks for AI systems, was published. This regulation will have direct consequences for developers and providers of AI systems in the agricultural sector, who must comply with its requirements. The aim of this graduation research is to provide insight into the requirements of the AI Act that are relevant to these providers and to develop a professional product that presents this information in an accessible way.

Disclaimer: The graduation assignment is carried out by a fourth-year student as part of his/her graduation at the Instituut voor Rechtenstudies. The student delivers a professional legal product and conducts research for that purpose. During the graduation assignment, the student is supervised by a graduation coach. The efforts of the student and the graduation coach are aimed at delivering the best possible professional product. It should nevertheless be regarded as the product of a (fourth-year) student and not of a legal professional. Should the information or the content of the professional product prove incomplete and/or incorrect despite these efforts, the Hanzehogeschool Groningen, the Instituut voor Rechtenstudies, individual staff members and the student cannot accept any liability for this.
MULTIFILE
This guide was developed for designers and developers of AI systems, with the goal of ensuring that these systems are sufficiently explainable. "Sufficient" here means that the system meets the legal requirements of the AI Act and the GDPR and that users can use it properly. Explainability of decisions is an important requirement in many systems and even an important principle for AI systems [HLEG19]. In many AI systems, explainability is not self-evident, and AI researchers expect that the challenge of making AI explainable will only increase. On the one hand, this stems from the applications: AI will be used more and more often, for larger and more sensitive decisions. On the other hand, organizations are building ever better models, for example by using more and more different inputs. With more complex AI models, it is often less clear how a decision was made. Organizations that deploy AI must take users' need for explanations into account, and systems that use AI should be designed to provide the user with appropriate explanations. In this guide, we first explain the legal requirements for the explainability of AI systems, which stem from the GDPR and the AI Act. Next, we explain how AI is used in the financial sector and elaborate on one problem in detail. For this problem, we then show how the user interface can be modified to make the AI explainable. These designs serve as prototypical examples that can be adapted to new problems. This guidance is based on the explainability of AI systems in the financial sector; however, the advice can also be used in other sectors.
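As a hedged illustration of such a prototypical design (the reason texts, feature names, and the 0.5 threshold are invented for this sketch, not taken from the guide), the snippet below turns a model score and per-feature attributions into a plain-language, user-facing explanation.

```python
# Hypothetical sketch: turning model output into a user-facing explanation.
# Reason texts, feature names, and the 0.5 threshold are invented examples.
REASON_TEXTS = {
    "debt_to_income": "your debt is high relative to your income",
    "payment_history": "recent missed payments were found",
    "loan_amount": "the requested amount is large for this product",
}

def explain_decision(score: float, attributions: dict[str, float], top_n: int = 2) -> str:
    decision = "approved" if score >= 0.5 else "declined"
    # Pick the features that pushed the score down the most.
    negatives = sorted(attributions.items(), key=lambda kv: kv[1])[:top_n]
    reasons = [REASON_TEXTS.get(name, name) for name, weight in negatives if weight < 0]
    message = f"Your application was {decision}."
    if reasons and decision == "declined":
        message += " Main reasons: " + "; ".join(reasons) + "."
    return message

print(explain_decision(0.31, {"debt_to_income": -0.4,
                              "payment_history": -0.2,
                              "loan_amount": 0.1}))
```

Mapping attributions to fixed, human-reviewed reason texts rather than exposing raw model internals is one way a user interface can keep explanations both meaningful and consistent.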
DOCUMENT
What does the AI Act mean for law firms? How can they deploy AI responsibly? In this research, a student worked this out into an extensive professional product, consisting of a decision tree for assessing which risk category an AI system for the legal profession falls into and which obligations apply to each risk category.
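By way of a hedged illustration only, the sketch below shows how such a risk-category decision tree might be encoded; the three questions compress the AI Act's actual tests into a simplification and do not reproduce the student's product.

```python
# Simplified sketch of an AI Act risk-category decision tree for a law firm's
# AI system. The questions compress the Act's actual tests; this is an
# illustration, not the student's decision tree.
def risk_category(prohibited_practice: bool,
                  annex_iii_use_case: bool,
                  interacts_with_humans: bool) -> str:
    if prohibited_practice:          # e.g. social scoring (Art. 5)
        return "unacceptable risk: prohibited"
    if annex_iii_use_case:           # e.g. administration of justice (Annex III)
        return "high risk: full provider/deployer obligations apply"
    if interacts_with_humans:        # e.g. a client-facing chatbot
        return "limited risk: transparency obligations (Art. 50)"
    return "minimal risk: voluntary codes of conduct"

print(risk_category(False, True, True))  # -> high risk
```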
MULTIFILE
The increasing use of AI in industry and society not only invites but demands that we build human-centred competencies into our AI education programmes. The computing education community needs to adapt, and while the adoption of standalone ethics modules into AI programmes and the inclusion of ethical content in traditional applied AI modules are progressing, this is not enough. To foster student competencies to create AI innovations that respect and support the protection of individual rights and society, a novel ground-up approach is needed. This panel presents one such approach, the development of a Human-Centred AI Master's (HCAIM) programme, as well as the insights and lessons learned from the process. In particular, we discuss the design decisions that have led to the multi-institutional master's programme. Moreover, the panel allows for discussion of pedagogical and methodological approaches, content knowledge areas and the delivery of such a novel programme, along with the challenges faced, to inform and learn from other educators who are considering developing such programmes.
DOCUMENT
This study provides a comprehensive analysis of the AI-related skills and roles needed to bridge the AI skills gap in Europe. Using a mixed-method research approach, this study investigated the most in-demand AI expertise areas and roles by surveying 409 organizations in Europe, analyzing 2,563 AI-related job advertisements, and conducting 24 focus group sessions with 145 industry and policy experts. The findings underscore the importance of general technical AI skills related to big data, machine learning and deep learning, cyber and data security, and large language models, as well as AI soft skills such as problem-solving and effective communication. This study sets the foundation for future research directions, emphasizing the importance of upskilling initiatives and the evolving nature of AI skills demand, and contributes to an EU-wide strategy for future AI skills development.
MULTIFILE
Whitepaper: The use of AI is on the rise in the financial sector. Utilizing machine learning algorithms to make decisions and predictions based on the available data can be highly valuable. AI offers benefits to both financial service providers and their customers by improving service and reducing costs. Examples of AI use cases in the financial sector are: identity verification in client onboarding, transaction data analysis, fraud detection in claims management, anti-money-laundering monitoring, price differentiation in car insurance, automated analysis of legal documents, and the processing of loan applications.
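As a self-contained sketch of one of these use cases, the snippet below scores transactions for fraud risk with a logistic regression; the features and data are synthetic stand-ins, not a real deployment.

```python
# Illustrative sketch of transaction fraud scoring (synthetic data only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Hypothetical features: amount (log-scaled), hour of day, is_foreign flag.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 1.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 1.5).astype(int)

clf = LogisticRegression().fit(X, y)
new_tx = np.array([[2.1, -0.3, 1.0]])
print(f"fraud probability: {clf.predict_proba(new_tx)[0, 1]:.2f}")
```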
DOCUMENT
In the modern day and age, cybersecurity faces numerous challenges. Computer systems and networks are becoming ever more sophisticated and interconnected, and the attack surface is constantly increasing. In addition, cyber-attacks keep growing in complexity and scale. To address these challenges, security professionals have started to employ generative AI (GenAI) to respond to attacks quickly. However, this raises questions about how GenAI can be adapted to the security environment and where the legal and ethical responsibilities lie. The Universities of Twente and Groningen and the Hanze University of Applied Sciences have initiated an interdisciplinary research project to investigate the legal and technical aspects of large language models (LLMs) in the cybersecurity domain and to develop an advanced AI-powered tool.
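As a minimal sketch of what such LLM-assisted incident response could look like, assuming an OpenAI-compatible API (the model name, prompt, and alert format are placeholder assumptions, not the project's actual tool):

```python
# Hedged sketch of LLM-assisted alert triage, assuming an OpenAI-compatible
# API; the model name, prompt, and alert format are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage(alert: str) -> str:
    """Ask the model for a severity rating and a first response step."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a SOC assistant. Rate severity (low/medium/"
                        "high) and suggest one containment step. Be concise."},
            {"role": "user", "content": alert},
        ],
    )
    return response.choices[0].message.content

print(triage("Multiple failed SSH logins from 203.0.113.7, then a success."))
```

Keeping a human analyst in the loop over such suggestions is one obvious way to anchor the legal and ethical responsibility the project investigates.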
DOCUMENT