In the book, 40 experts explain in clear language what AI is and what questions, challenges, and opportunities the technology brings.
DOCUMENT
Artificial Intelligence (AI) is increasingly shaping the way we work, live, and interact, leading to significant developments across various sectors of industry, including media, finance, business services, retail, and education. In recent years, numerous high-level principles and guidelines for ‘responsible’ or ‘ethical’ AI have been formulated. However, these theoretical efforts often fall short when it comes to addressing the practical challenges of implementing AI in real-world contexts: Responsible Applied AI. The one-day workshop on Responsible Applied Artificial InTelligence (RAAIT) at HHAI 2024: Hybrid Human AI Systems for the Social Good in Malmö, Sweden, brought together researchers studying various dimensions of Responsible AI in practice. This was the second RAAIT workshop, following the first edition at the 2023 European Conference on Artificial Intelligence (ECAI) in Krakow, Poland.
MULTIFILE
Artificial intelligence (AI) is transforming language access services in healthcare, making interpretation and translation faster and more scalable than ever before. Given that there are approximately 281 million international migrants as of 2024 [1], leveraging AI technology to mitigate language barriers in healthcare presents both new opportunities and new challenges. International migrants and others may experience language discordance – when patients and health care providers do not share a common language – which can hamper communication [2] and decision making [3], and lead to poor health outcomes for patients [4]. Despite ongoing advancements in AI technology, its potential to improve or hinder person-centred clinical care depends on its responsible application. In this article, our multilingual, international, and multidisciplinary members of the Language and Cultural Discordance in Healthcare Communication Special Interest Group of the International Association for Communication in Healthcare (EACH) [5] discuss challenges and opportunities in leveraging AI technology, provide practical applications, and end with recommendations to promote language access in healthcare. We highlight four of Picker’s Eight Principles of Person-Centered Care [6,7] and use them as a framework to address various issues in using AI for language translation and interpretation in healthcare.
DOCUMENT
Abstract Aims: Medical case vignettes play a crucial role in medical education, yet they often fail to authentically represent diverse patients. Moreover, these vignettes tend to oversimplify the complex relationship between patient characteristics and medical conditions, leading to biased and potentially harmful perspectives among students. Displaying aspects of patient diversity, such as ethnicity, in written cases proves challenging. Additionally, creating these cases places a significant burden on teachers in terms of labour and time. Our objective is to explore the potential of artificial intelligence (AI)-assisted computer-generated clinical cases to expedite case creation and enhance diversity, along with AI-generated patient photographs for more lifelike portrayal. Methods: In this study, we employed ChatGPT (OpenAI, GPT 3.5) to develop diverse and inclusive medical case vignettes. We evaluated various approaches and identified a set of eight consecutive prompts that can be readily customized to accommodate local contexts and specific assignments. To enhance visual representation, we utilized Adobe Firefly beta for image generation. Results: Using the described prompts, we consistently generated cases for various assignments, producing sets of 30 cases at a time. We ensured the inclusion of mandatory checks and formatting, completing the process within approximately 60 min per set. Conclusions: Our approach significantly accelerated case creation and improved diversity, although prioritizing maximum diversity compromised representativeness to some extent. While the optimized prompts are easily reusable, the process itself demands computer skills not all educators possess. To address this, we aim to share all created patients as open educational resources, empowering educators to create cases independently.
DOCUMENT
The field of data science and artificial intelligence (AI) is growing at an unprecedented rate. Manual tasks that for thousands of years could only be performed by humans are increasingly being taken over by intelligent machines. But, more importantly, tasks that could never be performed manually by humans, such as analysing big data, can now be automated while generating valuable knowledge for humankind.
DOCUMENT
The healthcare sector has been confronted with rapidly rising healthcare costs and a shortage of medical staff. At the same time, the field of Artificial Intelligence (AI) has emerged as a promising area of research, offering potential benefits for healthcare. Despite the potential of AI to support healthcare, its widespread implementation in this sector remains limited. One possible factor contributing to this is the lack of trust in AI algorithms among healthcare professionals. Previous studies have indicated that explainability plays a crucial role in establishing trust in AI systems. This study aims to explore trust in AI and its connection to explainability in a medical setting. A rapid review was conducted to provide an overview of the existing knowledge and research on trust and explainability. Building upon these insights, a dashboard interface was developed to present the output of an AI-based decision-support tool along with explanatory information, with the aim of enhancing the explainability of the AI for healthcare professionals. To investigate the impact of the dashboard and its explanations on healthcare professionals, an exploratory case study was conducted. The study encompassed an assessment of participants’ trust in the AI system, their perception of its explainability, and their evaluations of perceived ease of use and perceived usefulness. The initial findings from the case study indicate a positive correlation between perceived explainability and trust in the AI system. Our preliminary findings suggest that enhancing the explainability of AI systems could increase trust among healthcare professionals, which may in turn contribute to greater acceptance and adoption of AI in healthcare. However, a more elaborate experiment with the dashboard is essential.
LINK
Artificial Intelligence systems are increasingly being introduced into first response; however, this introduction needs to be done responsibly. While generic claims on what this entails already exist, more detail is required to understand the exact nature of responsible application of AI within the first response domain. The context in which AI systems are applied largely determines the ethical, legal, and societal impact and how to deal with this impact responsibly. For that reason, we empirically investigate relevant human values that are affected by the introduction of a specific AI-based Decision Aid (AIDA), a decision support system under development for Fire Services in the Netherlands. We held 10 expert group sessions and discussed the impact of AIDA on different stakeholders. This paper presents the design and implementation of the study and, as we are still in the process of analyzing the sessions in detail, summarizes preliminary insights and steps forward.
MULTIFILE
This is a composite article which brings together the international perspectives of the editorial board of the Journal of Adventure Education and Outdoor Learning to explore the impacts of artificial intelligence (AI) on the field of adventure education and outdoor learning (AE/OL). Building on the AE/OL profession’s response to the impacts of COVID-19 on outdoor and environmental education in 2020, this article includes authors from 10 countries including Australia, Brazil, Canada, England, Japan, Kenya, the Netherlands, New Zealand, Norway, and Wales. The statements discuss the impacts and opportunities of AI for the AE/OL professions, researchers, the nature of being in and with the outdoors, and Indigenous knowledges. The intention of this article is not to present a definitive summary of the state of the profession, but to provide examples of the ways in which diverse people are responding to the challenges and opportunities of AI. By sharing these views, and identifying some commonalities, we hope that AE/OL educators, practitioners, researchers and managers can creatively and cautiously seize the opportunities of this technological revolution.
LINK
Many have suggested that AI-based interventions could enhance learning through personalization, by improving teacher effectiveness, or by optimizing educational processes. However, they could also have unintended or unexpected side effects, such as undermining learning by enabling procrastination, or reducing social interaction by individualizing learning processes. Responsible scientific experiments are required to map both the potential benefits and the side effects. Current procedures used to screen experiments by research ethics committees do not take into account the specific risks and dilemmas that AI poses. Previous studies identified sixteen conditions that can be used to judge whether trials with experimental technology are responsible. These conditions, however, have not yet been translated into practical procedures, nor do they distinguish between the different types of AI applications and risk categories. This paper explores how those conditions could be further specified into procedures that could help facilitate and organize responsible experiments with AI, while differentiating between the different types of AI applications based on their level of automation. The four procedures that we propose are (1) a process of gradual testing, (2) risk- and side-effect detection, (3) explainability and severity, and (4) democratic oversight. These procedures can be used by researchers and ethics committees to enable responsible experiments with AI interventions in educational settings. Implementation and compliance will require collaboration between researchers, industry, policy makers, and educational institutions.
DOCUMENT