In this short article the author reflects on AI’s role in education by posing three questions about its application: choosing a partner, grading assignments, and replacing teachers. These questions prompt discussions on AI’s objectivity versus human emotional depth and creativity. The author argues that AI won’t replace teachers but will enhance those who embrace its potential while understanding its limits. True education, the author asserts, is about inspiring renewal and creativity, not merely transmitting knowledge, and cautions against letting AI define humanity’s future.
LINK
Artificial intelligence (AI) is increasingly being utilised across society and the economy worldwide, but there is much disquiet over problematic and dangerous implementations of AI, or indeed over AI systems themselves taking dangerous and problematic actions. These developments have led to concerns about whether and how AI systems currently adhere, and will continue to adhere, to ethical standards, stimulating a global, multistakeholder conversation on AI ethics and the production of AI governance initiatives. Such developments form the basis for this chapter, where we give an insight into what is happening in Australia, China, the European Union, India and the United States. We commence with some background to the AI ethics and regulation debates, before proceeding to give an overview of what is happening in each of these countries and regions, namely Australia, China, the European Union (including national-level activities in Germany), India and the United States. We provide an analysis of these country profiles, with particular emphasis on the relationship between ethics and law in each location. Overall, we find that AI governance and ethics initiatives are most developed in China and the European Union, but that the United States has been catching up over the last eighteen months.
DOCUMENT
Concerns have been raised over the increased prominence of generative AI in art. Some fear that generative models could undermine the viability of human-made art, and many oppose developers training generative models on media without the artists' permission. Proponents of AI art point to the potential increase in accessibility. Is there an approach that addresses the concerns artists raise while still utilising the potential these models bring? Current models often aim for fully autonomous music generation. This, however, makes the model a black box that users cannot interact with. By utilising an AI pipeline that combines symbolic music generation with a proposed sample creation system trained on Creative Commons data, a musical looping application has been created to give non-expert music users a way to start making their own music. First results show that it assists users in creating musical loops and shows promise for future research into human-AI interaction in art.
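The two-stage design described above can be illustrated with a minimal sketch: a symbolic stage generates a note loop, and a second stage maps each note to a swappable sample slot. All names, the scale, and the sample bank here are illustrative assumptions, not details from the paper.

```python
import random

# Hypothetical sketch of a two-stage loop pipeline: stage 1 produces a
# symbolic loop (MIDI-style note numbers), stage 2 pairs each note with
# a sample slot that a non-expert user could swap out and audition.

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # C4..C5, an assumed scale

def generate_symbolic_loop(length=8, seed=None):
    """Stage 1: pick notes from a scale to form a repeating loop."""
    rng = random.Random(seed)
    return [rng.choice(C_MAJOR) for _ in range(length)]

def assign_samples(loop, sample_bank):
    """Stage 2: map each note to a (note, sample) pair for playback."""
    return [(note, sample_bank[note % len(sample_bank)]) for note in loop]

loop = generate_symbolic_loop(seed=42)
bank = ["kick.wav", "snare.wav", "pluck.wav", "pad.wav"]
events = assign_samples(loop, bank)
```

Keeping the symbolic stage separate from sample assignment is what opens the black box: users can inspect and edit the note list before any audio is rendered.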
DOCUMENT
The increasing use of AI in industry and society demands that we build human-centred competencies into our AI education programmes. The computing education community needs to adapt, and while the adoption of standalone ethics modules into AI programmes, and the inclusion of ethical content in traditional applied AI modules, is progressing, it is not enough. To foster student competencies to create AI innovations that respect and support the protection of individual rights and society, a novel ground-up approach is needed. This panel presents one such approach, the development of a Human-Centred AI Masters (HCAIM), along with the insights and lessons learned from the process. In particular, we discuss the design decisions that have led to the multi-institutional master's programme. Moreover, the panel allows for discussion of pedagogical and methodological approaches, content knowledge areas and the delivery of such a novel programme, along with the challenges faced, to inform and learn from other educators who are considering developing such programmes.
DOCUMENT
Small and medium-sized businesses (SMBs) face unique challenges in developing AI-enabled products and services, with traditional innovation processes proving too resource-intensive and poorly adapted to AI's complexities. Following design science research methodology, this paper introduces Innovation Process for AI-enabled Products and Services (IPAPS), a framework specifically designed for SMBs developing AI-enabled solutions. Built on a semi-formal ontology that synthesizes literature on innovation processes, technology development frameworks, and AI-specific challenges, IPAPS guides organizations through five structured phases from use case identification to market launch. The framework integrates established innovation principles with AI-specific requirements while emphasizing iterative development through agile, lean startup, and design thinking approaches. Through polar theoretical sampling, we conducted ex-post analysis of two contrasting cases. Analysis revealed that the successful case naturally aligned with IPAPS principles, while the unsuccessful case showed significant deviations, providing preliminary evidence supporting IPAPS as a potentially valid innovation process for resource-constrained organizations.
MULTIFILE
In my previous post on AI engineering I defined the concepts involved in this new discipline and explained that, given the current state of the practice, AI engineers could also be called machine learning (ML) engineers. In this post I would like to 1) define our view on the profession of applied AI engineer and 2) present the toolbox of an AI engineer: the tools, methods and techniques for meeting the challenges AI engineers typically face. I end this post with a short overview of related work and future directions. Attached is an extensive list of references and additional reading material.
LINK
This article explains the importance of properly explaining artificial intelligence. Individuals' rights will have to be built in by system designers from the outset. AI is regarded as a 'key technology' that will change the world as profoundly as the industrial revolution did. Within the field of explainable AI (XAI), research is being conducted into how the workings of AI can be interpreted.
DOCUMENT
Recently, the job market for Artificial Intelligence (AI) engineers has exploded. Since the role of AI engineer is relatively new, limited research has been done on the requirements as set by the industry. Moreover, the definition of an AI engineer is less established than that of a data scientist or a software engineer. In this study we explore, based on job ads, the requirements from the job market for the position of AI engineer in The Netherlands. We retrieved job ad data between April 2018 and April 2021 from a large job ad database, Jobfeed from TextKernel. The job ads were selected with a process similar to the selection of primary studies in a literature review. We characterize the 367 resulting job ads based on meta-data such as publication date, industry/sector, educational background and job titles. To answer our research questions we have further coded 125 job ads manually. The job tasks of AI engineers are concentrated in five categories: business understanding, data engineering, modeling, software development and operations engineering. Companies ask for AI engineers with different profiles: 1) data science engineer with focus on modeling, 2) AI software engineer with focus on software development, 3) generalist AI engineer with focus on both models and software. Furthermore, we present the tools and technologies mentioned in the selected job ads, as well as the soft skills asked for. Our research helps to understand the expectations companies have for professionals building AI-enabled systems. Understanding these expectations is crucial both for prospective AI engineers and for educational institutions in charge of training those prospective engineers. Our research also helps to better define the profession of AI engineering. We do this by proposing an extended AI engineering life-cycle that includes a business understanding phase.
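Coding job ads into task categories like the five named above can be sketched with simple keyword matching. The category names come from the study; the keyword lists and function are illustrative assumptions, not the authors' actual coding scheme (which was done manually).

```python
# The five task categories are from the study; the keywords per category
# are illustrative guesses, not the authors' coding scheme.
CATEGORIES = {
    "business understanding": ["stakeholder", "use case", "business"],
    "data engineering": ["etl", "pipeline", "data warehouse"],
    "modeling": ["machine learning", "model", "training"],
    "software development": ["python", "api", "testing"],
    "operations engineering": ["deployment", "monitoring", "mlops"],
}

def code_job_ad(text):
    """Return every category whose keywords appear in the ad text."""
    text = text.lower()
    return [cat for cat, kws in CATEGORIES.items()
            if any(kw in text for kw in kws)]

code_job_ad("Seeking engineer for ML model training and API deployment")
# matches modeling, software development, operations engineering
```

An ad matching several categories mirrors the study's finding that companies ask for mixed profiles, such as the generalist AI engineer spanning both models and software.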
LINK
Design schools in digital media and interaction design face the challenge of integrating recent artificial intelligence (AI) advancements into their curriculum. To address this, curricula must teach students to design both "with" and "for" AI. This paper addresses how designing for AI differs from designing for other novel technologies that have entered interaction design education. Future digital designers must develop new solution repertoires for intelligent systems. The paper discusses preparing students for these challenges, suggesting that design schools must choose between a lightweight and heavyweight approach toward the design of AI. The lightweight approach prioritises designing front-end AI applications, focusing on user interfaces, interactions, and immediate user experience impact. This requires adeptness in designing for evolving mental models and ethical considerations but is disconnected from a deep technological understanding of the inner workings of AI. The heavyweight approach emphasises conceptual AI application design, involving users, altering design processes, and fostering responsible practices. While it requires basic technological understanding, the specific knowledge needed for students remains uncertain. The paper compares these approaches, discussing their complementarity.
DOCUMENT
In this paper, we report the initial results of an explorative study that investigates the occurrence of cognitive biases when designers use generative AI in the ideation phase of a creative design process. When current AI models are utilised as creative design tools, potential negative impacts on creativity can be identified: deepening already existing cognitive biases, but also introducing new ones that might not have been present before. Within our study, we analysed the emergence of several cognitive biases and the possible appearance of a negative synergy when designers use generative AI tools in a creative ideation process. Additionally, we identified a new potential bias that emerges from interacting with AI tools, namely prompt bias.
DOCUMENT