This study provides a comprehensive analysis of the AI-related skills and roles needed to bridge the AI skills gap in Europe. Using a mixed-method research approach, this study investigated the most in-demand AI expertise areas and roles by surveying 409 organizations in Europe, analyzing 2,563 AI-related job advertisements, and conducting 24 focus group sessions with 145 industry and policy experts. The findings underscore the importance of both general technical skills in AI related to big data, machine learning and deep learning, cyber and data security, and large language models, as well as AI soft skills such as problem-solving and effective communication. This study sets the foundation for future research directions, emphasizing the importance of upskilling initiatives and the evolving nature of AI skills demand, contributing to an EU-wide strategy for future AI skills development.
MULTIFILE
From the article: The ethics guidelines put forward by the AI High Level Expert Group (AI-HLEG) present a list of seven key requirements that human-centered, trustworthy AI systems should meet. These guidelines are useful for the evaluation of AI systems, but can be complemented by applied methods and tools for the development of trustworthy AI systems in practice. In this position paper we propose a framework for translating the AI-HLEG ethics guidelines into the specific context within which an AI system operates. This approach aligns well with a set of Agile principles commonly employed in software engineering. http://ceur-ws.org/Vol-2659/
DOCUMENT
Whitepaper: The use of AI is on the rise in the financial sector. Utilizing machine learning algorithms to make decisions and predictions based on the available data can be highly valuable. AI offers benefits to both financial service providers and their customers by improving service and reducing costs. Examples of AI use cases in the financial sector are: identity verification in client onboarding, transaction data analysis, fraud detection in claims management, anti-money laundering monitoring, price differentiation in car insurance, automated analysis of legal documents, and the processing of loan applications.
DOCUMENT
Artificial Intelligence (AI) offers organizations unprecedented opportunities. However, one of the risks of using AI is that its outcomes and inner workings are not intelligible. In industries where trust is critical, such as healthcare and finance, explainable AI (XAI) is a necessity. However, the implementation of XAI is not straightforward, as it requires addressing both technical and social aspects. Previous studies on XAI primarily focused on either technical or social aspects and lacked a practical perspective. This study aims to empirically examine the XAI-related aspects faced by developers, users, and managers of AI systems during the development of such systems. To this end, a multiple case study was conducted in two Dutch financial services companies using four use cases. Our findings reveal a wide range of aspects that must be considered during XAI implementation, which we grouped and integrated into a conceptual model. This model helps practitioners to make informed decisions when developing XAI. We argue that the diversity of aspects to consider necessitates an XAI “by design” approach, especially for high-risk use cases in industries where the stakes are high, such as finance, public services, and healthcare. As such, the conceptual model offers a taxonomy for method engineering of XAI-related methods, techniques, and tools.
MULTIFILE
While there is much focus on interventions to foster ethical reflection in the design process of AI, there is less focus on fostering ethical reflection among (end) users. Yet, with the rise of genAI, AI technologies are no longer confined to expert users; non-experts are widely using these technologies. In this case study in a governmental organization in the Netherlands, we investigated a bottom-up approach to foster ethical reflection on the use of genAI tools. A guided-experimentation approach, including an intervention with a serious game, allowed civil servants to experiment with the technology and to understand it and its associated risks. The case study demonstrates that this approach enhances awareness of the possibilities and limitations, as well as the ethical considerations, of genAI usage. By analyzing usage statistics, we estimated the organization’s energy consumption.
DOCUMENT
An extensive inventory of 137 Dutch SMEs' most important considerations regarding the use of emerging digital technologies shows that the selection process is difficult. Entrepreneurs wonder which AI application suits them best, what its added (innovative) value is, and how they can implement it. This outcome is a clear signal from SMEs to researchers in knowledge institutions and to developers of AI services and applications: Help! Which AI should I choose? With a consortium of students, researchers, and SMEs, we are creating an approach that will help SMEs make the most suitable AI choice. The project develops a data-driven advisory tool that helps SMEs choose, develop, implement, and use AI applications, focusing on four highly ranked topics.
LINK
Design schools in digital media and interaction design face the challenge of integrating recent artificial intelligence (AI) advancements into their curriculum. To address this, curricula must teach students to design both "with" and "for" AI. This paper addresses how designing for AI differs from designing for other novel technologies that have entered interaction design education. Future digital designers must develop new solution repertoires for intelligent systems. The paper discusses preparing students for these challenges, suggesting that design schools must choose between a lightweight and heavyweight approach toward the design of AI. The lightweight approach prioritises designing front-end AI applications, focusing on user interfaces, interactions, and immediate user experience impact. This requires adeptness in designing for evolving mental models and ethical considerations but is disconnected from a deep technological understanding of the inner workings of AI. The heavyweight approach emphasises conceptual AI application design, involving users, altering design processes, and fostering responsible practices. While it requires basic technological understanding, the specific knowledge needed for students remains uncertain. The paper compares these approaches, discussing their complementarity.
DOCUMENT
Player behavioural modelling has grown from a means to improve the playing strength of computer programs that play classic games (e.g., chess), to a means for impacting the player experience and satisfaction in video games, as well as in cross-domain applications such as interactive storytelling. In this context, player behavioural modelling is concerned with two goals, namely (1) providing an interesting or effective game AI on the basis of player models and (2) creating a basis for game developers to personalise gameplay as a whole, and creating new user-driven game mechanics. In this article, we provide an overview of player behavioural modelling for video games by detailing four distinct approaches, namely (1) modelling player actions, (2) modelling player tactics, (3) modelling player strategies, and (4) player profiling. We conclude the article with an analysis on the applicability of the approaches for the domain of video games.
DOCUMENT
Using game-based learning (GBL) has a proven potential to be an effective didactical method, but it is not easy to implement in practice. Teachers find it difficult, for example, to match particular game dynamics to curricular goals or to connect with the pedagogical models of particular games. In order to support student-teachers in developing the pedagogical knowledge and skills to effectively apply this method, we are developing a course about Game Based Pedagogy (GBP) for the teacher education program. This project is a Comenius Teaching Fellows project (see https://www.nro.nl/en/onderzoeksprogrammas/comeniusprogramma/toegekende-projecten). The development and implementation of the course follows a co-creation process in an interdisciplinary team involving high-school teachers, teacher educators, and the Smart Education lab for Applied AI. In this workshop we present our first prototype of the course and invite the participants, through hands-on activities, to explore some of the games, materials, and examples that we developed. This workshop is intended for high school teachers, teacher educators, and anyone who is interested in integrating Game-Based Pedagogy into practice.
DOCUMENT
In the book, 40 experts explain in clear language what AI is and what questions, challenges, and opportunities the technology brings.
DOCUMENT