In the book, 40 experts explain in clear language what AI is and what questions, challenges and opportunities the technology brings.
DOCUMENT
The KIM poster for the ECR is now available online via EPOS: https://epos.myesr.org/poster/esr/ecr2022/C-16092, poster number C-16092, ECR 2022. Purpose: Artificial Intelligence (AI) has developed at high speed over the last few years and will substantially change various disciplines (1,2). These changes are also noticeable in the fields of radiology, nuclear medicine and radiotherapy. However, attention has focused mainly on the radiologist's profession, whereas the role of the radiographer has been largely ignored (3). As long as AI for radiology was focused on image recognition and diagnosis, the limited attention to the radiographer might have been justifiable. But now that AI is increasingly becoming part of workflow management, treatment planning and image reconstruction, for example, the work of the radiographer will change. Yet their training (the Medical Imaging and Radiotherapeutic Techniques programmes) hardly contains any AI education. Radiographers in the Netherlands are therefore not prepared for the changes that the introduction of AI will bring to their everyday work.
LINK
Editorial on the Research Topic "Leveraging artificial intelligence and open science for toxicological risk assessment"
LINK
Artificial intelligence (AI) is a technology that is increasingly being utilised in society and the economy worldwide, but there is much disquiet over problematic and dangerous implementations of AI, or indeed over AI itself taking dangerous and problematic actions. These developments have led to concerns about whether and how AI systems currently adhere to and will adhere to ethical standards, stimulating a global and multistakeholder conversation on AI ethics and the production of AI governance initiatives. Such developments form the basis for this chapter, where we give an insight into what is happening in Australia, China, the European Union, India and the United States. We commence with some background to the AI ethics and regulation debates, before proceeding to give an overview of what is happening in different countries and regions, namely Australia, China, the European Union (including national-level activities in Germany), India and the United States. We provide an analysis of these country profiles, with particular emphasis on the relationship between ethics and law in each location. Overall, we find that AI governance and ethics initiatives are most developed in China and the European Union, but that the United States has been catching up in the last eighteen months.
DOCUMENT
In this paper, we report on the initial results of an explorative study that aims to investigate the occurrence of cognitive biases when designers use generative AI in the ideation phase of a creative design process. When current AI models are used as creative design tools, potential negative impacts on creativity can be identified: they may deepen already existing cognitive biases and also introduce new ones that were not present before. Within our study, we analysed the emergence of several cognitive biases and the possible appearance of a negative synergy when designers use generative AI tools in a creative ideation process. Additionally, we identified a new potential bias that emerges from interacting with AI tools, namely prompt bias.
DOCUMENT
Design schools in digital media and interaction design face the challenge of integrating recent artificial intelligence (AI) advancements into their curriculum. To address this, curricula must teach students to design both "with" and "for" AI. This paper addresses how designing for AI differs from designing for other novel technologies that have entered interaction design education. Future digital designers must develop new solution repertoires for intelligent systems. The paper discusses preparing students for these challenges, suggesting that design schools must choose between a lightweight and heavyweight approach toward the design of AI. The lightweight approach prioritises designing front-end AI applications, focusing on user interfaces, interactions, and immediate user experience impact. This requires adeptness in designing for evolving mental models and ethical considerations but is disconnected from a deep technological understanding of the inner workings of AI. The heavyweight approach emphasises conceptual AI application design, involving users, altering design processes, and fostering responsible practices. While it requires basic technological understanding, the specific knowledge needed for students remains uncertain. The paper compares these approaches, discussing their complementarity.
DOCUMENT
Abstract Aims: Medical case vignettes play a crucial role in medical education, yet they often fail to authentically represent diverse patients. Moreover, these vignettes tend to oversimplify the complex relationship between patient characteristics and medical conditions, leading to biased and potentially harmful perspectives among students. Displaying aspects of patient diversity, such as ethnicity, in written cases proves challenging. Additionally, creating these cases places a significant burden on teachers in terms of labour and time. Our objective is to explore the potential of artificial intelligence (AI)-assisted computer-generated clinical cases to expedite case creation and enhance diversity, along with AI-generated patient photographs for more lifelike portrayal. Methods: In this study, we employed ChatGPT (OpenAI, GPT 3.5) to develop diverse and inclusive medical case vignettes. We evaluated various approaches and identified a set of eight consecutive prompts that can be readily customized to accommodate local contexts and specific assignments. To enhance visual representation, we utilized Adobe Firefly beta for image generation. Results: Using the described prompts, we consistently generated cases for various assignments, producing sets of 30 cases at a time. We ensured the inclusion of mandatory checks and formatting, completing the process within approximately 60 min per set. Conclusions: Our approach significantly accelerated case creation and improved diversity, although prioritizing maximum diversity compromised representativeness to some extent. While the optimized prompts are easily reusable, the process itself demands computer skills not all educators possess. To address this, we aim to share all created patients as open educational resources, empowering educators to create cases independently.
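The abstract above describes feeding a fixed sequence of prompts to ChatGPT to produce batches of diverse case vignettes. As a purely illustrative sketch of how such a consecutive-prompt workflow could be automated, the snippet below chains placeholder prompts through the OpenAI chat API; the prompt texts, model name and function name are assumptions for illustration and do not reproduce the paper's actual eight prompts.

```python
# Minimal sketch of a consecutive-prompt workflow (not the authors' actual prompts or tooling).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical stand-ins for the paper's eight consecutive prompts.
PROMPTS = [
    "Draft a short clinical case vignette about community-acquired pneumonia.",
    "Rewrite the vignette with a patient profile that reflects ethnic and social diversity.",
    "Check the vignette for stereotypes and correct any you find.",
    "Format the vignette with the headings: Presentation, History, Examination.",
]

def generate_vignette(prompts: list[str], model: str = "gpt-3.5-turbo") -> str:
    """Send the prompts one after another in a single conversation and return the final reply."""
    messages = []
    reply = ""
    for prompt in prompts:
        messages.append({"role": "user", "content": prompt})
        response = client.chat.completions.create(model=model, messages=messages)
        reply = response.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    print(generate_vignette(PROMPTS))
```

A loop over such a run could, in principle, produce a set of 30 vignettes in one sitting, mirroring the batch approach described in the abstract.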
DOCUMENT
Poster for the EuSoMII Annual Meeting in Pisa, Italy, October 2023. PURPOSE & LEARNING OBJECTIVE Artificial Intelligence (AI) technologies are gaining popularity for their ability to autonomously perform tasks and mimic human reasoning [1, 2]. Especially within the medical industry, the implementation of AI solutions has accelerated [3]. However, the field of radiology has not yet been transformed by the promised value of AI, as knowledge on the effective use and implementation of AI is lagging behind due to a number of causes: 1) reactive/passive modes of learning are dominant; 2) existing developments are fragmented; 3) there is a lack of expertise and there are differing perspectives; 4) there is no effective learning space. Learning communities can help overcome these problems and address the complexities that come with human-technology configurations [4]. As the impact of a technology depends on its social management and implementation processes [5], our research question becomes: how do we design, configure, and manage a Learning Community to maximize the impact of AI solutions in medicine?
DOCUMENT
This article examines how collaborative design practices in higher education are reshaped through postdigital entanglement with generative artificial intelligence (GenAI). We collectively explore how co-design, an inclusive, iterative, and relational approach to educational design and transformation, expands in meaning, practice, and ontology when GenAI is approached as a collaborator. The article brings together 19 authors and three open reviewers to engage with postdigital inquiry, structured in three parts: (1) a review of literature on co-design, GenAI, and postdigital theory; (2) 11 situated contributions from educators, researchers, and designers worldwide, each offering practice-based accounts of co-design with GenAI; and (3) an explorative discussion of implications for higher education designs and futures. Across these sections, we show how GenAI unsettles assumptions of collaboration, knowing, and agency, foregrounding co-design as a site of ongoing material, ethical, and epistemic negotiation. We argue that postdigital co-design with GenAI reframes educational design as a collective practice of imagining, contesting, and shaping futures that extend beyond human knowing.
MULTIFILE
The field of data science and artificial intelligence (AI) is growing at an unprecedented rate. Manual tasks that for thousands of years could only be performed by humans are increasingly being taken over by intelligent machines. But, more importantly, tasks that could never be performed manually by humans, such as analysing big data, can now be automated while generating valuable knowledge for humankind.
DOCUMENT