The healthcare sector is confronted with rapidly rising costs and a shortage of medical staff. At the same time, Artificial Intelligence (AI) has emerged as a promising field of research, offering potential benefits for healthcare. Despite this potential, the widespread implementation of AI in healthcare remains limited. One possible contributing factor is a lack of trust in AI algorithms among healthcare professionals. Previous studies have indicated that explainability plays a crucial role in establishing trust in AI systems. This study explores trust in AI and its connection to explainability in a medical setting. A rapid review was conducted to provide an overview of the existing knowledge and research on trust and explainability. Building on these insights, a dashboard interface was developed that presents the output of an AI-based decision-support tool along with explanatory information, with the aim of enhancing the explainability of the AI for healthcare professionals. To investigate the impact of the dashboard and its explanations on healthcare professionals, an exploratory case study was conducted. The study assessed participants’ trust in the AI system and their perception of its explainability, as well as their evaluations of perceived ease of use and perceived usefulness. The initial findings from the case study indicate a positive correlation between perceived explainability and trust in the AI system. These preliminary findings suggest that enhancing the explainability of AI systems could increase trust among healthcare professionals, which may in turn contribute to greater acceptance and adoption of AI in healthcare. However, a more elaborate experiment with the dashboard is needed to substantiate these results.
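To illustrate the kind of analysis behind the reported finding, the sketch below correlates per-participant survey scores for perceived explainability with scores for trust. This is a minimal illustrative example, not the study's actual analysis: the variable names and sample data are hypothetical, and it assumes Likert-scale questionnaire responses aggregated per participant.

```python
# Minimal sketch (not the study's actual analysis) of testing the reported
# relationship between perceived explainability and trust in an AI system.
# All data and variable names below are hypothetical.
from scipy.stats import spearmanr

# Hypothetical per-participant mean scores on 5-point Likert scales
explainability_scores = [3.2, 4.1, 2.8, 4.5, 3.9, 2.5, 4.0, 3.6]
trust_scores = [3.0, 4.3, 2.6, 4.4, 3.7, 2.9, 4.1, 3.3]

# Spearman's rank correlation is a common choice for ordinal survey data
rho, p_value = spearmanr(explainability_scores, trust_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```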
Artificial Intelligence (AI) is increasingly shaping the way we work, live, and interact, leading to significant developments across various sectors of industry, including media, finance, business services, retail, and education. In recent years, numerous high-level principles and guidelines for ‘responsible’ or ‘ethical’ AI have been formulated. However, these theoretical efforts often fall short when it comes to addressing the practical challenges of implementing AI in real-world contexts: Responsible Applied AI. The one-day workshop on Responsible Applied Artificial InTelligence (RAAIT) at HHAI 2024: Hybrid Human AI Systems for the Social Good in Malmö, Sweden, brought together researchers studying various dimensions of Responsible AI in practice. This was the second RAAIT workshop, following the first edition at the 2023 European Conference on Artificial Intelligence (ECAI) in Krakow, Poland.
Artificial Intelligence (AI) offers opportunities. It opens up possibilities for progress in healthcare, communication, governance, and manufacturing. It offers possibilities for creating text, images, sound, and art. It helps to mitigate the effects of the climate crisis by developing intelligent energy grids, by developing infrastructures with little or no CO2 emissions, and by modelling climate predictions. Not everything is positive, however. AI plays a growing role in the spread of ‘fake news’, ‘deep fakes’, and other forms of misinformation, through which our democratic society is threatened by populism and polarization. A less clear-cut effect of AI is its ecological impact. Much has been published on this in recent years, but it takes a long time for such findings to sink into public awareness.
Smart city technologies, including artificial intelligence and computer vision, promise to bring a higher quality of life and more efficiently managed cities. However, developers, designers, and professionals working in urban management have started to realize that implementing these technologies poses numerous ethical challenges. Policy papers now call for human and public values in tech development, ethics guidelines for trustworthy A.I., and cities for digital rights. In a democratic society, these technologies should be understandable for citizens (transparency) and open for scrutiny and critique (accountability). When implementing such public values in smart city technologies, professionals face numerous knowledge gaps. Public administrators find it difficult to translate abstract values like transparency into concrete specifications for designing new services. In the private sector, developers and designers still lack a ‘design vocabulary’ and exemplary projects that can inspire them to respond to transparency and accountability demands. Finally, both the public and private sectors see a need to include the public in the development of smart city technologies but have not yet found the right methods. This proposal aims to help these professionals develop an integrated, value-based and multi-stakeholder design approach for the ethical implementation of smart city technologies. It does so by setting up a research-through-design trajectory to develop a prototype for an ethical ‘scan car’, as a concrete and urgent example of the deployment of computer vision and algorithmic governance in public space. Three practical knowledge gaps will be addressed. With civil servants at municipalities, we will create methods enabling them to translate public values such as transparency into concrete specifications and evaluation criteria. With designers, we will explore methods and patterns to answer these value-based requirements. Finally, we will further develop methods to engage civil society in this process.
The value of data in general has become evident in recent times. Autonomous vehicles and Connected Intelligent Transport Systems (C-ITS), in particular, are rapidly emerging fields that rely heavily on ‘big data’. Data acquisition has been an important part of automotive research and development for years, even before the advent of the Internet of Things (IoT). Most data logging is done using specialized hardware that stores data in proprietary formats on traditional hard drives in PCs or dedicated managed servers. The use of Artificial Intelligence (AI) throughout the world, and specifically in the automotive sector, relies largely on data for the development of new and reliable technologies. With the advent of IoT technologies, the reliability of data capture can be enhanced, and real-time analytics for the analysis and development of C-ITS services and autonomous systems using vehicle data become easier. Data acquisition for C-ITS applications requires bringing together several different domains, ranging from hardware, software, and communication systems to cloud storage/processing, data analytics, and legal and privacy aspects. This requires expertise from different domains that small and medium-sized businesses usually lack. This project aims to investigate the requirements that have to be met in order to collect data from vehicles. Furthermore, it aims to lay the foundations for unified guidelines on collecting data from vehicles. With these guidelines, businesses that intend to use vehicle data for their applications will not only be guided on the technical aspects of data collection but will also understand how data from vehicles can be harvested in a secure, efficient, and responsible manner.