Many have suggested that AI-based interventions could enhance learning through personalization, improved teacher effectiveness, or optimized educational processes. However, they could also have unintended or unexpected side-effects, such as undermining learning by enabling procrastination, or reducing social interaction by individualizing learning processes. Responsible scientific experiments are required to map both the potential benefits and the side-effects. Current procedures used by research ethics committees to screen experiments do not take into account the specific risks and dilemmas that AI poses. Previous studies identified sixteen conditions that can be used to judge whether trials with experimental technology are responsible. These conditions, however, have not yet been translated into practical procedures, nor do they distinguish between different types of AI applications and risk categories. This paper explores how those conditions could be specified further into procedures that help facilitate and organize responsible experiments with AI, while differentiating between types of AI applications based on their level of automation. The four procedures that we propose are (1) a process of gradual testing, (2) risk and side-effect detection, (3) explainability and severity, and (4) democratic oversight. These procedures can be used by researchers and ethics committees to enable responsible experiments with AI interventions in educational settings. Implementation and compliance will require collaboration between researchers, industry, policy makers, and educational institutions.
DOCUMENT
Background: Revealing tacit knowledge is often seen as highly valuable for organizations, although it is usually challenging to articulate and share this type of knowledge. Methods: This study uses a participatory design process and a board gaming simulation as instruments to extract tacit knowledge. To illustrate this application, the gaming simulation is played with entrepreneurs from horticulture. Horticulture represents a complex social system where tacit knowledge plays a major role in the trade process. A participatory design process is used to explore whether the design and play of gaming simulations enable participants to explicate their tacit knowledge. The participants' involvement in designing the gaming simulation showed that reconstructing reality was a prerequisite for their commitment. Results: The results from the simulation sessions show that participants were able to (1) narrow the anecdotal behaviour down to a few factors; (2) structure these factors; (3) explore how these factors relate to trade barriers; and (4) explain which tactics are applied to foster trade. Conclusion: The educational value of this study is that it helped entrepreneurs understand complex real-life situations.
MULTIFILE
Individuals with autism increasingly enroll in universities, but little is known about predictors of their success. This study developed predictive models for the academic success of autistic bachelor students (N = 101) in comparison to students with other health conditions (N = 2,465) and students with no health conditions (N = 25,077). We applied propensity score weighting to balance outcomes across groups. The research showed that autistic students' academic success was predictable, and these predictions were more accurate than predictions of their peers' success. For first-year success, study choice issues were the most important predictors (parallel program and application timing). Issues with participation in pre-education (missing grades in pre-educational records) and delays at the start of autistic students' studies (reflected in age) were the most influential predictors of second-year success and of delays in the second and final year of their bachelor's program. In addition, academic performance (average grades) was the strongest predictor of degree completion within three years. These insights can enable universities to develop tailored support for autistic students. Using early warning signals from administrative data, institutions can lower dropout risk and increase degree completion for autistic students.
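As an aside, the propensity score weighting mentioned in this abstract can be sketched in a few lines of Python. The covariate, the toy logistic propensity model, and the student records below are hypothetical illustrations, not data or coefficients from the study itself:

```python
import math

def logistic(x: float) -> float:
    """Standard logistic function, used here as a toy propensity model."""
    return 1.0 / (1.0 + math.exp(-x))

def ip_weight(p: float, in_group: bool) -> float:
    """Inverse-probability weight: 1/p for group members, 1/(1-p) for the comparison group."""
    return 1.0 / p if in_group else 1.0 / (1.0 - p)

# Hypothetical records: (covariate score, target-group flag, binary outcome)
students = [(0.8, True, 1), (0.2, False, 1), (0.5, False, 0), (0.9, True, 0)]

# Toy propensity model: P(group membership | covariate) = logistic(2c - 1)
weights = [ip_weight(logistic(2 * c - 1), g) for c, g, _ in students]

# Weighting the outcomes makes the two groups comparable on the covariate
weighted_mean = sum(w * o for w, (_, _, o) in zip(weights, students)) / sum(weights)
```

In practice the propensity model would be fitted to the observed covariates rather than fixed by hand; the point here is only the weighting step that balances the comparison groups.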
DOCUMENT
Poster for the EuSoMII Annual Meeting in Pisa, Italy, October 2023. PURPOSE & LEARNING OBJECTIVE Artificial Intelligence (AI) technologies are gaining popularity for their ability to autonomously perform tasks and mimic human reasoning [1, 2]. Within the medical industry in particular, the implementation of AI solutions is accelerating [3]. However, the field of radiology has not yet been transformed by the promised value of AI, as knowledge on the effective use and implementation of AI lags behind due to a number of causes: (1) reactive/passive modes of learning are dominant; (2) existing developments are fragmented; (3) there is a lack of expertise and differing perspectives; and (4) an effective learning space is lacking. Learning communities can help overcome these problems and address the complexities that come with human-technology configurations [4]. As the impact of a technology depends on its social management and implementation processes [5], our research question becomes: how do we design, configure, and manage a Learning Community to maximize the impact of AI solutions in medicine?
DOCUMENT
Whitepaper: The use of AI is on the rise in the financial sector. Utilizing machine learning algorithms to make decisions and predictions based on the available data can be highly valuable. AI offers benefits to both financial service providers and their customers by improving service and reducing costs. Examples of AI use cases in the financial sector are: identity verification in client onboarding, transaction data analysis, fraud detection in claims management, anti-money laundering monitoring, price differentiation in car insurance, automated analysis of legal documents, and the processing of loan applications.
DOCUMENT
This study addresses the burgeoning global shortage of healthcare workers and the consequential overburdening of medical professionals, a challenge that is anticipated to intensify by 2030 [1]. It explores the adoption and perceptions of AI-powered mobile medical applications (MMAs) by physicians in the Netherlands, investigating whether doctors discuss or recommend these applications to patients and how frequently they are used in clinical practice. The research reveals a cautious but growing acceptance of MMAs among healthcare providers. Mobile medical applications, a substantial share of which are AI-driven, are being recognized for their potential to alleviate workload. The findings suggest an emergent trust in AI-driven health technologies, underscored by recommendations from peers, yet tempered by concerns over data security and patient mental health, indicating a need for ongoing assessment and validation of these applications.
DOCUMENT
Artificial intelligence (AI) is a technology which is increasingly being utilised in society and the economy worldwide, but there is much disquiet over problematic and dangerous implementations of AI, or even AI systems themselves taking dangerous and problematic actions. These developments have led to concerns about whether and how AI systems currently adhere, and will continue to adhere, to ethical standards, stimulating a global and multistakeholder conversation on AI ethics and the production of AI governance initiatives. Such developments form the basis for this chapter, where we give an insight into what is happening in Australia, China, the European Union, India and the United States. We commence with some background to the AI ethics and regulation debates, before proceeding to give an overview of what is happening in these different countries and regions, namely Australia, China, the European Union (including national-level activities in Germany), India and the United States. We provide an analysis of these country profiles, with particular emphasis on the relationship between ethics and law in each location. Overall, we find that AI governance and ethics initiatives are most developed in China and the European Union, but the United States has been catching up in the last eighteen months.
DOCUMENT
Design schools in digital media and interaction design face the challenge of integrating recent artificial intelligence (AI) advancements into their curriculum. To address this, curricula must teach students to design both "with" and "for" AI. This paper addresses how designing for AI differs from designing for other novel technologies that have entered interaction design education. Future digital designers must develop new solution repertoires for intelligent systems. The paper discusses preparing students for these challenges, suggesting that design schools must choose between a lightweight and heavyweight approach toward the design of AI. The lightweight approach prioritises designing front-end AI applications, focusing on user interfaces, interactions, and immediate user experience impact. This requires adeptness in designing for evolving mental models and ethical considerations but is disconnected from a deep technological understanding of the inner workings of AI. The heavyweight approach emphasises conceptual AI application design, involving users, altering design processes, and fostering responsible practices. While it requires basic technological understanding, the specific knowledge needed for students remains uncertain. The paper compares these approaches, discussing their complementarity.
DOCUMENT
This guide was developed for designers and developers of AI systems, with the goal of ensuring that these systems are sufficiently explainable. Sufficient here means that the system meets the legal requirements of the AI Act and the GDPR and that users can use it properly. Explainability of decisions is an important requirement in many systems and even an important principle for AI systems [HLEG19]. In many AI systems, explainability is not self-evident, and AI researchers expect that the challenge of making AI explainable will only increase. On the one hand, this stems from the applications: AI will be used more and more often, for larger and more sensitive decisions. On the other hand, organizations are building better and better models, for example by using more diverse inputs. With more complex AI models, it is often less clear how a decision was made. Organizations that deploy AI must take into account users' need for explanations, and systems that use AI should be designed to provide the user with appropriate explanations. In this guide, we first explain the legal requirements for explainability of AI systems, which come from the GDPR and the AI Act. Next, we explain how AI is used in the financial sector and elaborate on one problem in detail. For this problem, we then show how the user interface can be modified to make the AI explainable. These designs serve as prototypical examples that can be adapted to new problems. This guidance is based on the explainability of AI systems in the financial sector, but the advice can also be used in other sectors.
DOCUMENT
This study provides a comprehensive analysis of the AI-related skills and roles needed to bridge the AI skills gap in Europe. Using a mixed-method research approach, this study investigated the most in-demand AI expertise areas and roles by surveying 409 organizations in Europe, analyzing 2,563 AI-related job advertisements, and conducting 24 focus group sessions with 145 industry and policy experts. The findings underscore the importance of general technical AI skills related to big data, machine learning and deep learning, cyber and data security, and large language models, as well as AI soft skills such as problem-solving and effective communication. This study sets the foundation for future research directions, emphasizing the importance of upskilling initiatives and the evolving nature of AI skills demand, contributing to an EU-wide strategy for future AI skills development.
MULTIFILE