Artificial Intelligence (AI) is increasingly shaping the way we work, live, and interact, leading to significant developments across various sectors of industry, including media, finance, business services, retail and education. In recent years, numerous high-level principles and guidelines for ‘responsible’ or ‘ethical’ AI have been formulated. However, these theoretical efforts often fall short when it comes to addressing the practical challenges of implementing AI in real-world contexts: Responsible Applied AI. The one-day workshop on Responsible Applied Artificial InTelligence (RAAIT) at HHAI 2024: Hybrid Human AI Systems for the Social Good in Malmö, Sweden, brought together researchers studying various dimensions of Responsible AI in practice. This was the second RAAIT workshop, following the first edition at the 2023 European Conference on Artificial Intelligence (ECAI) in Krakow, Poland.
The healthcare sector has been confronted with rapidly rising healthcare costs and a shortage of medical staff. At the same time, the field of Artificial Intelligence (AI) has emerged as a promising area of research, offering potential benefits for healthcare. Despite this potential, widespread implementation of AI in healthcare remains limited. One possible contributing factor is a lack of trust in AI algorithms among healthcare professionals. Previous studies have indicated that explainability plays a crucial role in establishing trust in AI systems. This study explores trust in AI and its connection to explainability in a medical setting. A rapid review was conducted to provide an overview of the existing knowledge and research on trust and explainability. Building on these insights, a dashboard interface was developed to present the output of an AI-based decision-support tool along with explanatory information, with the aim of enhancing the explainability of the AI for healthcare professionals. To investigate the impact of the dashboard and its explanations on healthcare professionals, an exploratory case study was conducted. The study assessed participants’ trust in the AI system, their perception of its explainability, and their evaluations of perceived ease of use and perceived usefulness. The initial findings from the case study indicate a positive correlation between perceived explainability and trust in the AI system. Our preliminary findings suggest that enhancing the explainability of AI systems could increase trust among healthcare professionals, which may in turn contribute to greater acceptance and adoption of AI in healthcare. However, a more elaborate experiment with the dashboard is needed to confirm these findings.
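The abstract does not specify how the dashboard's explanatory information was generated. As a purely illustrative sketch of one common approach, a feature-attribution method such as SHAP can accompany each model prediction with the features that drove it; the model, feature names, and synthetic data below are assumptions for illustration, not details taken from the study.

```python
# Hypothetical sketch (not the study's actual implementation): pairing a
# decision-support model's risk estimate with SHAP feature attributions,
# one way to produce the kind of explanatory information a dashboard
# could display alongside a prediction.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["age", "blood_pressure", "bmi", "glucose"]  # assumed features
X = rng.normal(size=(200, 4))
y = 1 / (1 + np.exp(-(X[:, 3] + 0.5 * X[:, 1])))  # synthetic risk scores

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer decomposes each prediction into per-feature contributions.
explainer = shap.TreeExplainer(model)
patient = X[:1]
contributions = explainer.shap_values(patient)[0]

print(f"Predicted risk: {model.predict(patient)[0]:.2f}")
for name, contrib in sorted(zip(feature_names, contributions),
                            key=lambda pair: -abs(pair[1])):
    print(f"  {name}: {contrib:+.3f}")
```

Presenting the signed contributions next to the risk estimate, rather than the bare prediction, is one concrete way a dashboard can make the model's reasoning inspectable for a clinician.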
The field of data science and artificial intelligence (AI) is growing at an unprecedented rate. Manual tasks that for thousands of years could only be performed by humans are increasingly being taken over by intelligent machines. More importantly, tasks that could never be performed manually by humans, such as analysing big data, can now be automated while generating valuable knowledge for humankind.
Electrohydrodynamic Atomization (EHDA), also known as Electrospray (ES), is a technology that uses strong electric fields to manipulate liquid atomization. Among many other areas, electrospray is currently used as an important tool in biomedical applications (droplet encapsulation), water technology (thermal desalination and metal recovery) and materials science (fabrication of nanofibers and nanospheres, metal recovery, selective membranes and batteries). A complete review of the particularities of this technology and its applications was recently published in a special edition of the Journal of Aerosol Sciences [1]. Even though EHDA is already applied in many different industrial processes, few commercially available control tools exist that can remotely operate the system and identify spray characteristics such as droplet size, operational mode and droplet production rate. The AECTion project proposes the development of an innovative control system based on the electrospray current, signal processing & control, and artificial intelligence to build a non-visual tool to control and characterize EHDA processes.
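As a hedged illustration of the signal-processing step such a non-visual tool could build on: pulsating spray modes show up as periodic components in the emitted-current trace, whereas a steady cone-jet produces a comparatively flat spectrum, so the frequency content of the current is one plausible feature for mode identification. The sampling rate, threshold, and regime labels below are assumptions for illustration, not values from the AECTion project.

```python
# Illustrative sketch only: guessing the electrospray operating mode from
# the frequency content of the spray-current signal.
import numpy as np

FS = 50_000  # assumed sampling rate in Hz

def dominant_frequency(current: np.ndarray, fs: float = FS):
    """Return the strongest non-DC frequency component and its relative power."""
    spectrum = np.abs(np.fft.rfft(current - current.mean()))
    freqs = np.fft.rfftfreq(current.size, d=1.0 / fs)
    peak = spectrum[1:].argmax() + 1  # skip the DC bin
    rel_power = spectrum[peak] / (spectrum.sum() + 1e-12)
    return freqs[peak], rel_power

def classify_mode(current: np.ndarray) -> str:
    """Crude regime guess: a strong periodic component suggests pulsating
    operation, a flat spectrum suggests a steady cone-jet."""
    _, rel_power = dominant_frequency(current)
    return "pulsating / dripping" if rel_power > 0.1 else "steady cone-jet"

# Synthetic test signals (amperes): 800 Hz current pulses vs. a noisy DC level.
t = np.arange(0.0, 0.1, 1.0 / FS)
pulsating = 1e-7 * (1 + np.sign(np.sin(2 * np.pi * 800 * t)))
steady = 2e-7 + 1e-9 * np.random.default_rng(1).normal(size=t.size)

print(classify_mode(pulsating))  # pulsating / dripping
print(classify_mode(steady))     # steady cone-jet
```

In a real controller, features like the dominant pulsation frequency could also serve as a proxy for the droplet production rate and feed the AI-based characterization the project describes.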
Professionals are increasingly supported by AI (Artificial Intelligence). But how do professionals experience this? Which form of support strengthens their profession, and what do they emphatically not want? In this project we investigate how different roles for AI (decision-maker, advisor, or knowledge source) are experienced by prospective professionals in preventive healthcare.
Goal: a strong collaboration between professional and AI. With this project we want to gain insight into how different forms of collaboration with AI affect values such as autonomy and trust among professionals. We want to translate these insights into forms of collaboration in which the strengths of both the professional and the AI come out best.
Results: the intended result of the project is a set of concrete guidelines for the context-dependent design of human-AI collaborations that respect personal values.
Duration: 1 April 2021 - 31 March 2022.
Approach: we investigate different roles of AI through Wizard of Oz experiments, in which students of paramedical programmes perform a preventive health check with the aid of a simulated AI algorithm. The resulting guidelines are tested in focus groups with healthcare professionals.
Relevance to professional practice: the use of AI holds great potential for professional practice, but there are also concerns about its impact on society. With this project we contribute to an ethically responsible deployment of AI.
Co-funding: this project is carried out as part of the R-DAISES programme under NWA route 25 (responsible value creation with big data) and is funded by NWO (the Dutch Research Council).
Developing a framework that integrates Advanced Language Models into the qualitative research process. Qualitative research, vital for understanding complex phenomena, is often limited by labour-intensive data collection, transcription, and analysis processes. This hinders scalability, accessibility, and efficiency in both academic and industry contexts. As a result, insights are often delayed or incomplete, impacting decision-making, policy development, and innovation. The lack of tools to enhance accuracy and reduce human error exacerbates these challenges, particularly for projects requiring large datasets or quick iterations. Addressing these inefficiencies through AI-driven solutions like AIDA can empower researchers, enhance outcomes, and make qualitative research more inclusive, impactful, and efficient. The AIDA project enhances qualitative research by integrating AI technologies to streamline transcription, coding, and analysis processes. This innovation enables researchers to analyse larger datasets with greater efficiency and accuracy, providing faster and more comprehensive insights. By reducing manual effort and human error, AIDA empowers organisations to make informed decisions and implement evidence-based policies more effectively. Its scalability supports diverse societal and industry applications, from healthcare to market research, fostering innovation and addressing complex challenges. Ultimately, AIDA contributes to improving research quality, accessibility, and societal relevance, driving advancements across multiple sectors.
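The description does not detail AIDA's implementation. As a minimal sketch of what one LLM-assisted coding step might look like, a chat model can be prompted to assign a code from a predefined codebook to each transcript segment, with every assignment kept for human review; the codebook, prompt, model name, and use of the openai client are illustrative assumptions, not AIDA's documented design.

```python
# Hypothetical sketch of one AIDA-style step: asking a language model to
# assign qualitative codes from a predefined codebook to transcript
# segments. All names and the prompt are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CODEBOOK = ["trust in technology", "workload", "autonomy", "other"]  # assumed

def code_segment(segment: str) -> dict:
    """Ask the model for one code plus a short justification, as JSON."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "You are a qualitative-research coding assistant. "
                        f"Assign exactly one code from {CODEBOOK} to the "
                        "user's transcript segment. Reply as JSON with keys "
                        "'code' and 'justification'."},
            {"role": "user", "content": segment},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

segment = "I worry the system will decide for me instead of with me."
result = code_segment(segment)
print(result["code"], "-", result["justification"])  # then reviewed by a human coder
```

Keeping the model's justification next to each assigned code makes the automated step auditable, which is what lets a human coder review and correct the output rather than trust it blindly.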