In this short article the author reflects on AI’s role in education by posing three questions about its application: should AI choose a partner, grade assignments, or replace teachers? These questions prompt discussion of AI’s objectivity versus human emotional depth and creativity. The author argues that AI will not replace teachers but will strengthen those who embrace its potential while understanding its limits. True education, the author asserts, is about inspiring renewal and creativity, not merely transmitting knowledge, and the author cautions against letting AI define humanity’s future.
The debate about AI is running high these days: what does AI mean for different professions? Which competencies may soon no longer be relevant, and which all the more so? And what does AI mean for education? High time, then, for education to focus on strengthening AI literacy: the competencies needed to critically evaluate AI technologies and to communicate and collaborate with them effectively, both at home and in the workplace, so that students are ready for a world full of AI. Answers to these and other questions can be found in this publication by the Teaching, Learning & Technology research group, so that in seven minutes you are up to date again on AI literacy. #AI-geletterdheid #teachinglearningandtechnology #inholland
Poster for the EuSoMII Annual Meeting in Pisa, Italy, October 2023.

PURPOSE & LEARNING OBJECTIVE

Artificial Intelligence (AI) technologies are gaining popularity for their ability to autonomously perform tasks and mimic human reasoning [1, 2]. Within the medical industry in particular, the implementation of AI solutions has accelerated [3]. However, the field of radiology has not yet been transformed by the promised value of AI, as knowledge on the effective use and implementation of AI lags behind for a number of reasons:
1) Reactive/passive modes of learning are dominant
2) Existing developments are fragmented
3) Lack of expertise and differing perspectives
4) Lack of an effective learning space

Learning communities can help overcome these problems and address the complexities that come with human-technology configurations [4]. As the impact of a technology depends on its social management and implementation processes [5], our research question becomes: how do we design, configure, and manage a learning community to maximize the impact of AI solutions in medicine?
The rise of ChatGPT shows how AI is intervening in our daily lives and in education. But AI is more than ChatGPT: from search engines to the facial recognition in your phone, data and algorithms are changing the lives of our students and their future professional field. What does this mean for the programmes at the universities of applied sciences where we work? For the inspiration session "The societal impact of AI" at the HU Onderwijsfestival 2023, we invited our colleagues to think along with us about recent AI developments. We looked not only at the technology itself, but above all at its societal impact and at the opportunities and threats AI poses to an open, just and sustainable society. We held this conversation with our colleagues (both lecturers and support staff) on the basis of three cases. The results and insights collected from these conversations were brought together on a poster specially developed for the workshop (see figure 1). We have bundled these insights, and they can be read below.
Design schools in digital media and interaction design face the challenge of integrating recent artificial intelligence (AI) advancements into their curriculum. To address this, curricula must teach students to design both "with" and "for" AI. This paper addresses how designing for AI differs from designing for other novel technologies that have entered interaction design education. Future digital designers must develop new solution repertoires for intelligent systems. The paper discusses preparing students for these challenges, suggesting that design schools must choose between a lightweight and heavyweight approach toward the design of AI. The lightweight approach prioritises designing front-end AI applications, focusing on user interfaces, interactions, and immediate user experience impact. This requires adeptness in designing for evolving mental models and ethical considerations but is disconnected from a deep technological understanding of the inner workings of AI. The heavyweight approach emphasises conceptual AI application design, involving users, altering design processes, and fostering responsible practices. While it requires basic technological understanding, the specific knowledge needed for students remains uncertain. The paper compares these approaches, discussing their complementarity.
The healthcare sector increasingly faces challenges arising from growing demand (due, among other things, to an ageing population and the complexity of care) and a shrinking supply of care providers (due, among other things, to staff shortages). Artificial Intelligence (AI) is seen as a possible solution, but is often approached from a technological perspective. This article takes a human-centred approach and studies how healthcare workers experience working with AI. This matters because they are the ones who ultimately have to work with these applications to meet the challenges in healthcare. Based on 21 semi-structured interviews with healthcare workers who have used AI, we describe their experiences of working with AI. Using the AMO framework, which stands for abilities, motivation and opportunities, we show that AI affects the work of healthcare staff. Using AI requires new competencies and the conviction that AI can improve care, and there is a need for sufficient availability of training and support. Finally, we discuss the implications for theory and give recommendations for HR professionals.
This study provides a comprehensive analysis of the AI-related skills and roles needed to bridge the AI skills gap in Europe. Using a mixed-method research approach, this study investigated the most in-demand AI expertise areas and roles by surveying 409 organizations in Europe, analyzing 2,563 AI-related job advertisements, and conducting 24 focus group sessions with 145 industry and policy experts. The findings underscore the importance of general technical AI skills related to big data, machine learning and deep learning, cyber and data security, and large language models, as well as AI soft skills such as problem-solving and effective communication. This study sets the foundation for future research directions, emphasizing the importance of upskilling initiatives and the evolving nature of AI skills demand, contributing to an EU-wide strategy for future AI skills development.
In the past few years, the EU has shown a growing commitment to addressing the rapid transformations brought about by the latest Artificial Intelligence (AI) developments by increasing its efforts in AI regulation. Nevertheless, despite the growing body of technical knowledge and progress, the governance of AI-intensive technologies remains dynamic and challenging. A mounting chorus of experts expresses reservations about an overemphasis on regulation in Europe. Among their core arguments is the concern that such an approach might hinder innovation within the AI arena. This concern resonates particularly strongly when Europe is compared with the United States and Asia, where AI-driven innovation appears to be surging ahead, potentially leaving Europe behind. This paper emphasizes the need to balance certification and governance in AI to foster ethical innovation and enhance the reliability and competitiveness of European technology. It explores recent AI regulations and upcoming European laws, underscoring Europe’s role in the global AI landscape. The authors analyze European governance approaches and their impact on SMEs and startups, offering a comparative view of global regulatory efforts. The paper highlights significant global AI developments from the past year, focusing on Europe’s contributions. We address the complexities of creating a comprehensive, human-centred AI master’s programme for higher education. Finally, we discuss how Europe can seize opportunities to promote ethical and reliable AI progress through education, fostering a balanced approach to regulation and enhancing young professionals’ understanding of ethical and legal aspects.
The increasing use of AI in industry and society not only expects but demands that we build human-centred competencies into our AI education programmes. The computing education community needs to adapt, and while the adoption of standalone ethics modules into AI programmes and the inclusion of ethical content in traditional applied AI modules is progressing, it is not enough. To foster student competencies to create AI innovations that respect and support the protection of individual rights and society, a novel ground-up approach is needed. This panel presents one such approach, the development of a Human-Centred AI Masters (HCAIM), as well as the insights and lessons learned from the process. In particular, we discuss the design decisions that led to the multi-institutional master’s programme. Moreover, this panel allows for discussion of pedagogical and methodological approaches, content knowledge areas and the delivery of such a novel programme, along with the challenges faced, to inform and learn from other educators who are considering developing such programmes.
Artificial intelligence (AI) is a technology that is increasingly being utilised in society and the economy worldwide, but there is much disquiet over problematic and dangerous implementations of AI, or indeed over AI itself deciding to take dangerous and problematic actions. These developments have led to concerns about whether and how AI systems currently adhere to, and will adhere to, ethical standards, stimulating a global and multistakeholder conversation on AI ethics and the production of AI governance initiatives. Such developments form the basis for this chapter, where we give an insight into what is happening in Australia, China, the European Union, India and the United States. We commence with some background to the AI ethics and regulation debates, before proceeding to give an overview of what is happening in different countries and regions, namely Australia, China, the European Union (including national-level activities in Germany), India and the United States. We provide an analysis of these country profiles, with particular emphasis on the relationship between ethics and law in each location. Overall, we find that AI governance and ethics initiatives are most developed in China and the European Union, but the United States has been catching up in the last eighteen months.