In this paper, we report the initial results of an exploratory study investigating the occurrence of cognitive biases when designers use generative AI in the ideation phase of a creative design process. When current AI models are used as creative design tools, potential negative impacts on creativity can be identified: the deepening of existing cognitive biases and the introduction of new ones. Within our study, we analysed the emergence of several cognitive biases, and a possible negative synergy among them, when designers use generative AI tools in a creative ideation process. Additionally, we identified a new potential bias that emerges from interacting with AI tools, which we term prompt bias.
The concept of biodiversity, which usually serves as a shorthand for the diversity of life on Earth at different levels (ecosystems, species, genes), was coined in the 1980s by conservation biologists worried about the degradation of ecosystems and the loss of species, and seeking to make a case for the protection of nature while avoiding that "politically loaded" term (Takacs, 1996). Since then, the concept has been embedded in the work of the Convention on Biological Diversity (CBD, established in 1992) and of the Intergovernmental science-policy Platform on Biodiversity and Ecosystem Services (IPBES, aka 'the IPCC for biodiversity', established in 2012). While the concept has gained policy traction, it remains unclear to what extent it has captured the public imagination. Globally, biodiversity loss has not triggered the same amount of attention or controversy as climate change (with some exceptions). This project, titled Prompting for biodiversity, investigates how this issue is mediated by generative visual AI, directing attention both to how 'biodiversity' is known and imagined by AI and to how this may shape public ideas around biodiversity loss and living with other species.
Poster for the EuSoMII Annual Meeting in Pisa, Italy, October 2023. PURPOSE & LEARNING OBJECTIVE Artificial Intelligence (AI) technologies are gaining popularity for their ability to autonomously perform tasks and mimic human reasoning [1, 2]. Within the medical industry in particular, the implementation of AI solutions has been accelerating [3]. However, the field of radiology has not yet been transformed by the promised value of AI, as knowledge on the effective use and implementation of AI is falling behind due to a number of causes:
1) Reactive/passive modes of learning are dominant
2) Existing developments are fragmented
3) Lack of expertise and differing perspectives
4) Lack of an effective learning space
Learning communities can help overcome these problems and address the complexities that come with human-technology configurations [4]. As the impact of a technology depends on its social management and implementation processes [5], our research question becomes: How do we design, configure, and manage a learning community to maximize the impact of AI solutions in medicine?
The past two years have seen a significant rise in text-to-video generative AI (Gen-AI) software, with as-yet unexplored implications for media studies and production (Caramiaux et al.). Jones et al. posit Gen-AI as a "cultural technology" whose combination of outreach, access, distribution, user engagement via machine co-creation, and diversity of outputs is significant and distinctive in the context of the history of media technology (19). It is a novel form of technology that juxtaposes cost-saving advantages with artistic/creative innovation and quality. Crucial problems that may arise here are how audiovisual Gen-AI can be (un)intentionally misused, leading to the perpetuation and instilment of racial and social biases through visual representation (Bianchi et al.); the non-consensual use of existing media work, leading to infringement of intellectual property (Appel et al.); and its destructive impact on the environment (Hogan).
Concerns have been raised over the increased prominence of generative AI in art. Some fear that generative models could make it unviable for humans to create art, and oppose developers training generative models on media without the artists' permission. Proponents of AI art point to a potential increase in accessibility. Is there an approach that addresses the concerns artists raise while still utilizing the potential these models bring? Current models often aim for autonomous music generation; this, however, makes the model a black box that users cannot interact with. Using an AI pipeline that combines symbolic music generation with a proposed sample creation system trained on Creative Commons data, a musical looping application has been created to give non-expert music users a way to start making their own music. First results show that it assists users in creating musical loops and shows promise for future research into human-AI interaction in art.
This study provides a comprehensive analysis of the AI-related skills and roles needed to bridge the AI skills gap in Europe. Using a mixed-method research approach, this study investigated the most in-demand AI expertise areas and roles by surveying 409 organizations in Europe, analyzing 2,563 AI-related job advertisements, and conducting 24 focus group sessions with 145 industry and policy experts. The findings underscore the importance of general technical skills in AI related to big data, machine learning and deep learning, cyber and data security, and large language models, as well as AI soft skills such as problem-solving and effective communication. This study sets the foundation for future research directions, emphasizing the importance of upskilling initiatives and the evolving nature of AI skills demand, contributing to an EU-wide strategy for future AI skills development.
I strongly dislike the term 'AI'. It is a catch-all that evokes utopian and dystopian scenarios, which makes it difficult to have serious conversations about the possibilities and risks of automation. Discussions about artificial intelligence (AI) only become interesting when we look at a specific use of it. For example: what does the use of generative AI tools mean for the creative process? I saw that focus reflected at two conferences I attended this month: The Synthetic City Conference and an AI summit in Lancaster that brought together academics and the creative sector.
The article emphasizes that future marketers need to focus on sustainable value creation for shareholders, society, and the planet. They should be adept at using data responsibly to make informed decisions and leverage technological innovations like AI, AR, VR, and robotics. Generative AI will transform content creation, market research, and strategic marketing processes. Marketers must also understand AI's limitations and contextualize its use to maintain human connections and interactions.
Artificial intelligence (AI) is a technology increasingly utilised in society and the economy worldwide, but there is much disquiet over problematic and dangerous implementations of AI, or indeed over AI itself deciding to take dangerous and problematic actions. These developments have led to concerns about whether and how AI systems currently adhere, and will adhere, to ethical standards, stimulating a global and multistakeholder conversation on AI ethics and the production of AI governance initiatives. Such developments form the basis for this chapter. We commence with some background to the AI ethics and regulation debates, before proceeding to give an overview of what is happening in different countries and regions, namely Australia, China, the European Union (including national-level activities in Germany), India and the United States. We provide an analysis of these country profiles, with particular emphasis on the relationship between ethics and law in each location. Overall, we find that AI governance and ethics initiatives are most developed in China and the European Union, but the United States has been catching up in the last eighteen months.
In the modern day and age, cybersecurity faces numerous challenges. Computer systems and networks are becoming ever more sophisticated and interconnected, and the attack surface constantly increases. In addition, cyber-attacks keep growing in complexity and scale. To address these challenges, security professionals have started to employ generative AI (GenAI) to respond quickly to attacks. However, this introduces challenges in terms of how GenAI can be adapted to the security environment and where the legal and ethical responsibilities lie. The Universities of Twente and Groningen and the Hanze University of Applied Sciences have initiated an interdisciplinary research project to investigate the legal and technical aspects of large language models (LLMs) in the cybersecurity domain and to develop an advanced AI-powered tool.