Why cities need economic intelligence

The economies of Europe’s cities are changing fast, and it is not easy to predict which segments of the local economy will grow and which ones will decline. Yet cities must make decisions as to where to invest, and face a number of questions that are difficult to answer: Where do we put our bets? Should we go for biotech, ICT, or any other sector that may have growth potential? Do we want to attract large foreign companies, or rather support our local indigenous smaller firms, or must we promote the start-up scene? Or is it better not to go for any particular industry but simply improve the quality of life in the city, hoping that this will help to retain skilled people and attract high-tech firms?
MULTIFILE
In the book, 40 experts explain in plain language what AI is, and what questions, challenges and opportunities the technology brings.
DOCUMENT
In its recent ‘Regional Outlook’, the OECD (2014) convincingly argues that cities can be the drivers of national growth and recovery: in principle, their diversity and density make people and companies more productive and innovative. This is not only a tale of large cities: over the last decade, as recent studies demonstrate (e.g. Dijkstra, 2013), many smaller and medium-sized cities across Europe were important economic engines. But this did not happen automatically: ‘getting cities right’ is the key challenge, and action at the city level matters! As demonstrated by recent OECD data (OECD, 2014), poorly organised cities fail to realise their economic potential.
MULTIFILE
With artificial intelligence (AI) systems entering our working and leisure environments with increasing adaptation and learning capabilities, new opportunities arise for developing hybrid human-AI intelligence (HI) systems that enable new ways of collaboration. However, there is not yet a structured way of specifying design solutions for collaboration in HI systems, and best practices are not shared across application domains. We address this gap by investigating the generalization of specific design solutions into design patterns that can be shared and applied in different contexts. We present a human-centered, bottom-up approach for the specification of design solutions and their abstraction into team design patterns. We apply the proposed approach to four concrete HI use cases and show the successful extraction of generalizable team design patterns, providing reusable design components across various domains. This work advances previous research on team design patterns and the design of HI system applications.
MULTIFILE
People tend to be hesitant toward algorithmic tools, and this aversion potentially affects how effectively innovations in artificial intelligence (AI) are implemented. Explanatory mechanisms for aversion are based on individual or structural issues but often lack reflection on real-world contexts. Our study addresses this gap through a mixed-method approach, analyzing seven cases of AI deployment and their public reception on social media and in news articles. Using the Contextual Integrity framework, we argue that it is most often not the AI technology itself that is perceived as problematic, but that issues of transparency, consent, and individuals’ lack of influence raise aversion. Future research into aversion should acknowledge that technologies cannot be extricated from their contexts when seeking to understand public perceptions of AI innovation.
LINK
The way that innovation is currently done requires a new research methodology that enables co-creation and frequent, iterative evaluation in real-world settings. This paper describes the employment of the living lab methodology, which corresponds to this need. In particular, this paper presents the way that the Amsterdam University of Applied Sciences (HvA) incorporates living labs in its educational program, with a particular focus on ambient intelligence. A number of examples are given to illustrate its place in the university’s curriculum. Drawing on this, problems and solutions are highlighted in a ‘lessons learned’ section.
DOCUMENT
In the book, 40 experts explain in clear language what AI is, and what questions, challenges and opportunities the technology brings.
DOCUMENT
Ambient intelligence technologies are a means to support ageing-in-place by monitoring clients in the home. In this study, monitoring is applied for the purpose of raising an alarm in an emergency situation, thereby providing an increased sense of safety and security. Apart from these technological solutions, there are numerous environmental interventions in the home that can support people to age in place. The aim of this study was to investigate the ageing-in-place needs and motives of respondents receiving ambient intelligence technologies, and to investigate whether, and how, these technologies contributed to aspects of ageing-in-place. This paper presents the results of a qualitative study comprising interviews and observations of technology and environmental interventions in the homes of 18 community-dwelling older adults with a complex demand for care.
DOCUMENT
The healthcare sector has been confronted with rapidly rising healthcare costs and a shortage of medical staff. At the same time, the field of Artificial Intelligence (AI) has emerged as a promising area of research, offering potential benefits for healthcare. Despite this potential, the widespread implementation of AI in healthcare remains limited. One possible contributing factor is the lack of trust in AI algorithms among healthcare professionals. Previous studies have indicated that explainability plays a crucial role in establishing trust in AI systems. This study aims to explore trust in AI and its connection to explainability in a medical setting. A rapid review was conducted to provide an overview of the existing knowledge and research on trust and explainability. Building upon these insights, a dashboard interface was developed to present the output of an AI-based decision-support tool along with explanatory information, with the aim of enhancing the explainability of the AI for healthcare professionals. To investigate the impact of the dashboard and its explanations on healthcare professionals, an exploratory case study was conducted. The study encompassed an assessment of participants’ trust in the AI system, their perception of its explainability, and their evaluations of perceived ease of use and perceived usefulness. The initial findings from the case study indicate a positive correlation between perceived explainability and trust in the AI system. Our preliminary findings suggest that enhancing the explainability of AI systems could increase trust among healthcare professionals, which may contribute to greater acceptance and adoption of AI in healthcare. However, a more elaborate experiment with the dashboard is needed to confirm these findings.
LINK
The field of data science and artificial intelligence (AI) is growing at an unprecedented rate. Manual tasks that for thousands of years could only be performed by humans are increasingly being taken over by intelligent machines. More importantly, tasks that could never be performed manually by humans, such as analysing big data, can now be automated while generating valuable knowledge for humankind.
DOCUMENT