Many have suggested that AI-based interventions could enhance learning through personalization, improved teacher effectiveness, or optimized educational processes. However, they could also have unintended or unexpected side-effects, such as undermining learning by enabling procrastination, or reducing social interaction by individualizing learning processes. Responsible scientific experiments are required to map both the potential benefits and the side-effects. Current procedures used to screen experiments by research ethics committees do not take into account the specific risks and dilemmas that AI poses. Previous studies identified sixteen conditions that can be used to judge whether trials with experimental technology are responsible. These conditions, however, have not yet been translated into practical procedures, nor do they distinguish between different types of AI applications and risk categories. This paper explores how those conditions could be further specified into procedures that help facilitate and organize responsible experiments with AI, while differentiating between types of AI applications based on their level of automation. The four procedures that we propose are (1) a process of gradual testing, (2) risk and side-effect detection, (3) explainability and severity, and (4) democratic oversight. These procedures can be used by researchers and ethics committees to enable responsible experiments with AI interventions in educational settings. Implementation and compliance will require collaboration between researchers, industry, policy makers, and educational institutions.
In the past few years, the EU has shown a growing commitment to address the rapid transformations brought about by the latest Artificial Intelligence (AI) developments by increasing efforts in AI regulation. Nevertheless, despite the growing body of technical knowledge and progress, the governance of AI-intensive technologies remains dynamic and challenging. A mounting chorus of experts expresses reservations about an overemphasis on regulation in Europe. Among their core arguments is the concern that such an approach might hinder innovation within the AI arena. This concern resonates particularly strongly in comparison with the United States and Asia, where AI-driven innovation appears to be surging ahead, potentially leaving Europe behind. This paper emphasizes the need to balance certification and governance in AI to foster ethical innovation and enhance the reliability and competitiveness of European technology. It explores recent AI regulations and upcoming European laws, underscoring Europe’s role in the global AI landscape. The authors analyze European governance approaches and their impact on SMEs and startups, offering a comparative view of global regulatory efforts. The paper highlights significant global AI developments from the past year, focusing on Europe’s contributions. We address the complexities of creating a comprehensive, human-centred AI master’s programme for higher education. Finally, we discuss how Europe can seize opportunities to promote ethical and reliable AI progress through education, fostering a balanced approach to regulation and enhancing young professionals’ understanding of ethical and legal aspects.
This article explains the importance of properly explaining artificial intelligence. The rights of individuals will have to be built into systems by their designers from the outset. AI is considered a 'key technology' that will change the world as profoundly as the industrial revolution did. Within the field of XAI (Explainable AI), research is conducted into interpreting how AI systems work.
The project aims to improve palliative care in China through the competence development of Chinese teachers, professionals, and students, focusing on the horizontal priority of digital transformation. Palliative care (PC) has been recognised as a public health priority and has seen advances in several aspects in recent years. However, severe inequities in the access to and availability of PC worldwide remain. Annually, approximately 56.8 million people need palliative care, with 25.7% of that need concentrated in the last year of a person’s life (Connor, 2020). China has set aims for reaching the health care standards of developed countries by 2030 through the Healthy China Strategy 2030, in which palliative care is one of the designated improvement areas, continuing previous efforts. The project provides a constructive, holistic, and innovative set of actions aimed at lasting outcomes and continued development of palliative care education and services. Raising the awareness of all stakeholders on palliative care, including the public, is highly relevant and needed. Evidence-based practice guidelines and education are urgently required at both general and specialised palliative care levels, to increase the competencies of health educators, professionals, and students. This is to improve the availability and quality of person-centered palliative care in China. Considering the aging population, the increase in various chronic illnesses, the challenging care environment, and the moderate health care resources, competence development and the utilisation of digitalisation in palliative care are paramount in supporting the transition of experts into the palliative care practice environment. The general objective of the project is to enhance competences in palliative care in China through education and training, to improve the quality of life for citizens.
The project develops the competences of current and future health care professionals in China to transform palliative care theory and practice, with long-term impact on the target groups and society. As recognised by the European Association for Palliative Care (EAPC), palliative care competences need to be developed in collaboration. This includes a shared willingness to learn from each other to improve the sought outcomes in palliative care (EAPC 2019). Since all individuals have a right to health care, the project develops person-centered and culturally sensitive practices, taking ethics and social norms into consideration. As palliative care can address physical, psychological, social, or spiritual aspects of illness (WHO 2020), the project develops innovative pedagogy focusing on evidence-based practice, communication, and competence development utilising digital methods and tools. Concepts of reflection, values, and views are at the forefront of improving palliative care for the future. Important aspects of project development include health promotion, digital competences, and the digital health literacy skills of professionals, patients, and their caregivers. The project objective is tied to the principles of the European Commission’s (EU) Digital Decade, which stresses the importance of placing people and their rights at the forefront of the digital transformation, while enhancing solidarity, inclusion, freedom of choice, and participation. In addition, concepts of safety, security, empowerment, and the promotion of sustainable actions are valued.
(European Commission: Digital targets for 2030). Through existing collaboration, the strategic focus areas of the partners, and the principles of the call, the PalcNet project consortium was formed by the following partners: JAMK University of Applied Sciences (JAMK), Ramon Llull University (URL), Hanze University of Applied Sciences (HUAS), Beijing Union Medical College Hospital (PUMCH), Guangzhou Health Science College (GHSC), Beihua University (BHU), and Harbin Medical University (HMU). As the project develops new knowledge, innovations, and practice through capacity building, the finalisation of the consortium took into account each partner's development strategy regarding health care (especially palliative care) and ability to create long-term impact, including the focus on enhancing higher education according to the horizontal priority. In addition, the partners' expertise and geographical location were considered important for facilitating the long-term impact of the results. Primary target groups of the project include the partner country's (China) staff members, teachers, researchers, health care professionals, and bachelor-level students engaging in project implementation. Secondary target groups include those who will use the outputs and results and continue further development of palliative care beyond the lifetime of the project.
Artificial Intelligence (AI) plays an increasingly important role in media organisations in the automatic creation, personalisation, distribution, and archiving of media content. This is accompanied by questions and concerns, both in society and in the media sector itself, about the responsible use of AI. There are concerns about discrimination against certain groups due to bias in algorithms, about increasing polarisation through the algorithmic spread of radical content and disinformation, and about privacy violations resulting from non-transparent handling of data. Many media organisations struggle with the question of how to handle AI applications responsibly. Media organisations indicate that existing ethical instruments for responsible AI, such as the EU "Ethics Guidelines for Trustworthy AI" (European Commission, 2019) and the "AI Impact Assessment" (ECP, 2018), offer insufficient guidance for the design and deployment of responsible AI, because these instruments are not specifically tailored to the media domain. As a result, these ethical instruments are still rarely applied in the media sector, even though media organisations indicate a need for them. The goal of this project is to support and guide media organisations in embedding responsible AI in their organisations and in designing, developing, and deploying responsible AI applications, by developing domain-specific ethical instruments. This is done on the basis of three practical cases put forward by media organisations: pluralistic recommender systems, inclusive speech-recognition systems for the Dutch language, and collaborative production-support systems. The development of the ethical instruments follows a Research-through-Design approach, with multiple iterations of gathering information, analysing, prototyping, and testing.
The intended results of this practice-oriented research are: 1) new knowledge about designing responsible AI in media applications, 2) ethical instruments tailored to the media sector, and 3) change within the participating media organisations with regard to responsible AI, achieved through close collaboration with practice partners in the research.
The IMPULS-2020 project DIGIREAL (BUas, 2021) aims to significantly strengthen BUas' Research and Development (R&D) on Digital Realities for the benefit of innovation in our sectoral industries. The project will furthermore help BUas position itself in the emerging innovation ecosystems on Human Interaction, AI, and Interactive Technologies. The pandemic has had a tremendous negative impact on BUas' industrial sectors of research: Tourism, Leisure and Events, Hospitality and Facility, Built Environment, and Logistics. Our partner industries are in great need of innovative responses to the crisis. Data and AI, combined with Interactive and Immersive Technologies (Games, VR/AR), can provide a partial solution, in line with the key-enabling technologies of the Smart Industry agenda. DIGIREAL builds upon our well-established expertise and capacity in entertainment and serious games and digital media (VR/AR). It furthermore strengthens our initial plans to venture into Data and Applied AI. Digital Realities offer great opportunities for sectoral industry research and innovation, such as experience measurement in Leisure and Hospitality, data-driven decision-making for (sustainable) tourism, geo-data simulations for Logistics, and Digital Twins for Spatial Planning. Although BUas already has successful R&D projects in these areas, the synergy can and should be significantly improved. We propose a coherent one-year Impuls-funded package to develop (in 2021): 1. a multi-year R&D program on Digital Realities, leading to, 2. strategic R&D proposals, in particular a SPRONG/sleuteltechnologie proposal; 3. partnerships in the regional and national innovation ecosystem, in particular Mind Labs and Data Development Lab (DDL); 4. a shared Digital Realities Lab infrastructure, in particular hardware/software/peopleware for Augmented and Mixed Reality; 5. leadership, support, and operational capacity to achieve and sustain the above.
The proposal presents a work program and management structure, with external partners in an advisory role.