Small and medium-sized businesses (SMBs) face unique challenges in developing AI-enabled products and services, with traditional innovation processes proving too resource-intensive and poorly adapted to AI's complexities. Following the design science research methodology, this paper introduces the Innovation Process for AI-enabled Products and Services (IPAPS), a framework specifically designed for SMBs developing AI-enabled solutions. Built on a semi-formal ontology that synthesizes literature on innovation processes, technology development frameworks, and AI-specific challenges, IPAPS guides organizations through five structured phases from use case identification to market launch. The framework integrates established innovation principles with AI-specific requirements while emphasizing iterative development through agile, lean startup, and design thinking approaches. Through polar theoretical sampling, we conducted an ex-post analysis of two contrasting cases. The analysis revealed that the successful case naturally aligned with IPAPS principles, while the unsuccessful case showed significant deviations, providing preliminary evidence supporting IPAPS as a potentially valid innovation process for resource-constrained organizations.
Poster for the EuSoMII Annual Meeting in Pisa, Italy, in October 2023. PURPOSE & LEARNING OBJECTIVE Artificial Intelligence (AI) technologies are gaining popularity for their ability to autonomously perform tasks and mimic human reasoning [1, 2]. Within the medical industry in particular, the implementation of AI solutions has accelerated [3]. However, the field of radiology has not yet been transformed by the promised value of AI, as knowledge on the effective use and implementation of AI lags behind due to a number of causes: 1) reactive/passive modes of learning are dominant; 2) existing developments are fragmented; 3) expertise is lacking and perspectives differ; 4) an effective learning space is missing. Learning communities can help overcome these problems and address the complexities that come with human-technology configurations [4]. As the impact of a technology depends on its social management and implementation processes [5], our research question becomes: How do we design, configure, and manage a learning community to maximize the impact of AI solutions in medicine?
This study provides a comprehensive analysis of the AI-related skills and roles needed to bridge the AI skills gap in Europe. Using a mixed-method research approach, this study investigated the most in-demand AI expertise areas and roles by surveying 409 organizations in Europe, analyzing 2,563 AI-related job advertisements, and conducting 24 focus group sessions with 145 industry and policy experts. The findings underscore the importance of both general technical AI skills, related to big data, machine learning and deep learning, cyber and data security, and large language models, and AI soft skills such as problem-solving and effective communication. This study sets the foundation for future research directions, emphasizing the importance of upskilling initiatives and the evolving nature of AI skills demand, contributing to an EU-wide strategy for future AI skills development.
Artificial intelligence (AI) is a technology which is increasingly being utilised in society and the economy worldwide, but there is much disquiet over problematic and dangerous implementations of AI, or indeed AI systems themselves taking dangerous and problematic actions. These developments have led to concerns about whether and how AI systems currently adhere to and will adhere to ethical standards, stimulating a global and multistakeholder conversation on AI ethics and the production of AI governance initiatives. Such developments form the basis for this chapter, where we give an insight into what is happening in Australia, China, the European Union, India and the United States. We commence with some background to the AI ethics and regulation debates, before proceeding to give an overview of what is happening in different countries and regions, namely Australia, China, the European Union (including national-level activities in Germany), India and the United States. We provide an analysis of these country profiles, with particular emphasis on the relationship between ethics and law in each location. Overall, we find that AI governance and ethics initiatives are most developed in China and the European Union, but the United States has been catching up in the last eighteen months.
The increasing use of AI in industry and society not only expects but demands that we build human-centred competencies into our AI education programmes. The computing education community needs to adapt, and while the adoption of standalone ethics modules into AI programmes or the inclusion of ethical content into traditional applied AI modules is progressing, it is not enough. To foster student competencies to create AI innovations that respect and support the protection of individual rights and society, a novel ground-up approach is needed. This panel presents one such approach, the development of a Human-Centred AI Masters (HCAIM), along with the insights and lessons learned from the process. In particular, we discuss the design decisions that have led to the multi-institutional master’s programme. Moreover, this panel allows for discussion on pedagogical and methodological approaches, content knowledge areas and the delivery of such a novel programme, along with the challenges faced, to inform and learn from other educators who are considering developing such programmes.
In this paper, we report on the initial results of an explorative study that aims to investigate the occurrence of cognitive biases when designers use generative AI in the ideation phase of a creative design process. When current AI models are utilised as creative design tools, potential negative impacts on creativity can be identified: they may deepen existing cognitive biases and introduce new ones that were not present before. Within our study, we analysed the emergence of several cognitive biases and the possible appearance of a negative synergy when designers use generative AI tools in a creative ideation process. Additionally, we identified a new potential bias that emerges from interacting with AI tools, namely prompt bias.
In this short article the author reflects on AI’s role in education by posing three questions about its application: choosing a partner, grading assignments, and replacing teachers. These questions prompt discussions on AI’s objectivity versus human emotional depth and creativity. The author argues that AI won’t replace teachers but will enhance those who embrace its potential while understanding its limits. True education, the author asserts, is about inspiring renewal and creativity, not merely transmitting knowledge, and cautions against letting AI define humanity’s future.
As artificial intelligence (AI) reshapes hiring, organizations increasingly rely on AI-enhanced selection methods such as chatbot-led interviews and algorithmic resume screening. While AI offers efficiency and scalability, concerns persist regarding fairness, transparency, and trust. This qualitative study applies the Artificially Intelligent Device Use Acceptance (AIDUA) model to examine how job applicants perceive and respond to AI-driven hiring. Drawing on semi-structured interviews with 15 professionals, the study explores how social influence, anthropomorphism, and performance expectancy shape applicant acceptance, while concerns about transparency and fairness emerge as key barriers. Participants expressed a strong preference for hybrid AI-human hiring models, emphasizing the importance of explainability and human oversight. The study refines the AIDUA model in the recruitment context and offers practical recommendations for organizations seeking to implement AI ethically and effectively in selection processes.
An extensive inventory of 137 Dutch SMEs regarding their most important considerations in using emerging digital technologies shows that the selection process is difficult. Entrepreneurs wonder which AI application suits them best, what its added (innovative) value is, and how they can implement it. This outcome is a clear signal from SMEs to researchers in knowledge institutions and to developers of AI services and applications: Help! Which AI should I choose? With a consortium of students, researchers, and SMEs, we are creating an approach that will help SMEs make the most suitable AI choice. The project develops a data-driven advisory tool that helps SMEs choose, develop, implement, and use AI applications, focusing on four highly ranked topics.
Whitepaper: The use of AI is on the rise in the financial sector. Utilizing machine learning algorithms to make decisions and predictions based on the available data can be highly valuable. AI offers benefits to both financial service providers and their customers by improving service and reducing costs. Examples of AI use cases in the financial sector are: identity verification in client onboarding, transaction data analysis, fraud detection in claims management, anti-money laundering monitoring, price differentiation in car insurance, automated analysis of legal documents, and the processing of loan applications.