Background: App-based mobile health exercise interventions can motivate individuals to engage in more physical activity (PA). According to the Fogg Behavior Model, it is important that the individual receive prompts at the right time to be successfully persuaded into PA. These are referred to as just-in-time (JIT) interventions. The Playful Active Urban Living (PAUL) app is among the first to include 2 types of JIT prompts: JIT adaptive reminder messages to initiate a run or walk and JIT strength exercise prompts during a walk or run (containing location-based instruction videos). This paper reports on the feasibility of the PAUL app and its JIT prompts.

Objective: The main objective of this study was to examine user experience, app engagement, and users' perceptions and opinions regarding the PAUL app and its JIT prompts and to explore changes in the PA behavior, intrinsic motivation, and the perceived capability of the PA behavior of the participants.

Methods: In total, 2 versions of the closed-beta version of the PAUL app were evaluated: a basic version (Basic PAUL) and a JIT adaptive version (Smart PAUL). Both apps send JIT exercise prompts, but the versions differ in that the Smart PAUL app sends JIT adaptive reminder messages to initiate running or walking behavior, whereas the Basic PAUL app sends reminder messages at randomized times. A total of 23 participants were randomized into 1 of the 2 intervention arms. PA behavior (accelerometer-measured), intrinsic motivation, and the perceived capability of PA behavior were measured before and after the intervention. After the intervention, participants were also asked to complete a questionnaire on user experience, and they were invited for an exit interview to assess user perceptions and opinions of the app in depth.

Results: No differences in PA behavior were observed (Z=-1.433; P=.08), but intrinsic motivation for running and walking and for performing strength exercises significantly increased (Z=-3.342; P<.001 and Z=-1.821; P=.04, respectively). Furthermore, participants increased their perceived capability to perform strength exercises (Z=2.231; P=.01) but not to walk or run (Z=-1.221; P=.12). The interviews indicated that the participants were enthusiastic about the strength exercise prompts. These were perceived as personal, fun, and relevant to their health. The reminders were perceived as important initiators for PA, but participants from both app groups explained that the reminder messages were often not sent at times they could exercise. Although the participants were enthusiastic about the functionalities of the app, technical issues resulted in a low user experience.

Conclusions: The preliminary findings suggest that the PAUL apps are promising and innovative interventions for promoting PA. Users perceived the strength exercise prompts as a valuable addition to exercise apps. However, to be a feasible intervention, the app must be more stable.
Prompt design can be understood much like query design: a prompt that aims to surface cultural dimensions in visual research forces the AI to make sense of ambiguity, and thereby reveals something about its training data and biases (Niederer, S. and Colombo, G., ‘Visual Methods for Digital Research’). This moves away from prompt engineering and its efforts to craft “code-like” prompts that suppress ambiguity and prevent the AI from bringing biases to the surface. Our idea is instead to keep the ambiguity of natural language in the image descriptions and let it flow through successive stages (degrees) of a broken-telephone dynamic. We thus have less control over the result, or over the selection of an ideal result, and more questions about the dynamics implicit in the biases the results reveal.

Unlike textual or mathematical outputs, where prompt chains or asking the AI to explain how it reached a result may suffice, images and AI-assisted visual methods demand new methods of analysis. Exploring and developing such an approach is the main goal of this research project, with particular interest in possible biases and unexplored patterns in AI’s image affordances. How could we detect subtle biases in the way AI describes images and creates images from descriptions? What exactly do the words an AI writes when describing an image stand for? When it detects a ‘human’ or ‘science’, for example, what elements or archetypes remain invisible between the prompt and the image created or described?

Turning an AI’s image description into a new image could give us a glimpse behind the scenes. In the broken telephone game, small misperceptions between telling and hearing, coding and decoding, produce large divergences in the final result, and the cultural factors in between have been studied extensively. To amplify and understand possible biases, we can then ask the AI to describe this new image, starting a broken telephone cycle.
This process could shed light not only on the gap between an AI’s image descriptions and its capacity to reconstruct images from those descriptions used as prompts, but also on biases and patterns in AI image description and description-based image creation. It is in line with previous projects on image clustering and image prompt analysis (see reference links), and with questions such as the identification of AI image biases, cross-model analysis, reverse engineering through prompts, image clustering, and the analysis of large datasets of images from online image- and video-based platforms. The experiment becomes even more relevant in light of recent studies (Shumailov et al., 2024) showing that AI models trained on AI-generated data will eventually collapse.

To frame this analysis, the proposal by Munn, Magee and Arora (2023), titled Unmaking AI Imagemaking, introduces three methodological approaches for investigating AI image models: unmaking the ecosystem, unmaking the data, and unmaking the outputs.

First, these authors use the idea of the ecosystem to describe the socio-technical context that surrounds AI models: where they have been developed; their owners, partners, or supporters; and their interests, goals, and impositions. “Research has already identified how these image models internalize toxic stereotypes (Birnhane 2021) and reproduce forms of gendered and ethnic bias (Luccioni 2023), to name just two issues” (Munn et al., 2023, p. 2). There are also differences between the models that currently dominate the market. Although Stable Diffusion seems to be the most open because of its origin, biases appear even more quickly when working with images in this model than in others.
“In this framing, Stable Diffusion becomes an internet-based tool, which can be used and abused by “the people,” rather than a corporate product, where responsibility is clear, quality must be ensured, and toxicity must be mitigated” (Munn et al., 2023, p. 5).

To unmake the data, it is important to ask about the source of the data used and the interests behind its extraction. According to the description of the project “Creating an Ad Library Political Observatory”: “This project aims to explore diverse approaches to analyze and visualize the data from Meta’s ad library, which includes Instagram, Facebook, and other Meta products, using LLMs. The ultimate goal is to enhance the Ad Library Political Observatory, a tool we are developing to monitor Meta’s ad business.” That is to say, the images were taken from political advertising on Facebook, as part of an observation process that seeks to make visible the investments in advertising around politics. These are carefully prepared images: the background, the position and posture of the figures, the visible objects are all arranged. In general, we could say we are dealing with staged images. This matters because the initial information the AI describes is itself a representation, a visual creation.
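The broken-telephone cycle described above can be sketched in a few lines of code. This is a minimal sketch, not the project's implementation: describe_image and generate_image are hypothetical placeholders for calls to any image-captioning and text-to-image model pair, and the stub bodies here only echo their input so the loop's bookkeeping is visible.

```python
def describe_image(image: str) -> str:
    """Placeholder: ask a vision-language model to describe `image`.
    A real version would call a captioning model; this stub echoes input."""
    return f"description of {image}"

def generate_image(description: str) -> str:
    """Placeholder: ask a text-to-image model to render `description`."""
    return f"image from '{description}'"

def broken_telephone(seed_image: str, degrees: int) -> list[dict]:
    """Run `degrees` describe-then-generate rounds, keeping every
    intermediate description and image so drift can be inspected later."""
    trace = []
    image = seed_image
    for degree in range(degrees):
        description = describe_image(image)   # AI describes current image
        image = generate_image(description)   # new image from that description
        trace.append({"degree": degree,
                      "description": description,
                      "image": image})
    return trace

# Keep the full trace: comparing descriptions across degrees is where
# biases would surface (recurring words, archetypes, omissions).
rounds = broken_telephone("political_ad_photo.png", degrees=3)
for step in rounds:
    print(step["degree"], step["description"])
```

Keeping every intermediate description, rather than only the final image, is the point of the design: the drift between degrees is the object of study, not noise to be minimized.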
Building on the Minds-On project, this study developed the online module “Celestial Bodies” to enhance hands-on and minds-on learning, providing students with individualised feedback prompts to monitor and identify weaknesses in their understanding. The lesson centred on classifying 14 celestial bodies based on three properties, with the guidance of the online module and a map and cards. This study aimed to (1) enhance student engagement with the software, and (2) assess the impact of guided instructions and feedback prompts. We introduce the interactive lesson, present our findings, and discuss its benefits for enhancing student engagement and concept learning in upper primary education classes, emphasising tighter integration of minds-on and hands-on activities.
Artificial intelligence (AI) plays an ever-growing role in our daily lives and affects every field of work, from science and education to services and industry. In the creative sector, too, AI is making itself increasingly felt: algorithms generate music, text, and images from simple descriptions, so-called prompts. While these algorithms enrich the creative toolbox, they also pose an existential risk to makers: they do not respect their copyrights and take over their creative work. In this project, the Amsterdamse Hogeschool voor de Kunsten and the Universiteit van Amsterdam join forces with Sonic Acts and individual artists to investigate how AI can be integrated into the creative process in a meaningful way. Rather than betting on autonomous AI systems that take over artists’ work, we experiment with artistic methods that enable intuitive, physical, and equal collaboration with AI. We draw inspiration from documentary research into the work of early AI artists of the twentieth century to explore how embodied and temporal interaction with AI can lead to new forms of expression for use in live performances. At the same time, this experimentation yields fundamental insights into how AI learns and ‘perceives’ the world. In this way we develop an equal co-creation between human and AI that goes beyond the prompt. The research shows how this technology can be an enrichment, not only for artists and art education, but also for society at large, which thereby gains a powerful example of a creative dialogue between human and machine.
Mycelium biocomposites (MBCs) are a fairly new group of materials. MBCs are non-toxic and carbon-neutral cutting-edge circular materials obtained from agricultural residues and fungal mycelium, the vegetative part of fungi. Growing within days without complex processes, they offer versatile and effective solutions for diverse applications thanks to their customizable textures and characteristics achieved through controlled environmental conditions. This project involves a collaboration between MNEXT and First Circular Insulation (FC-I) to tackle challenges in MBC manufacturing, particularly the extended time and energy-intensive nature of the fungal incubation and drying phases. FC-I proposes an innovative deactivation method involving electrical discharges to expedite these processes, currently awaiting patent approval. However, a critical gap in scientific validation prompts the partnership with MNEXT, leveraging their expertise in mycelium research and MBCs. The research project centers on evaluating the efficacy of the innovative mycelium growth deactivation strategy proposed by FC-I. This one-year endeavor permits a thorough investigation, implementation, and validation of potential solutions, specifically targeting issues related to fungal regrowth and the preservation of sustained material properties. The collaboration synergizes academic and industrial expertise, with the dual purpose of achieving immediate project objectives and establishing a foundation for future advancements in mycelium materials.
Birds disperse seeds, pollinate plants, and clean up nature; they are indispensable for a healthy ecosystem. Protecting endangered animals is of great societal importance; biodiversity contributes to a healthy climate in the Netherlands. To protect birds, nests are detected and registered. Farmers are then informed about the presence of nests on their land so that they do not destroy them during agricultural work. In the Netherlands, farmers are compensated for protecting nests, aligning economic interests with nature conservation. In this project, technological innovation strengthens the collaboration between farmers and nature and bird conservation: drones are combined with artificial intelligence to monitor nests together with volunteers. This helps the Bond Friese VogelWachten (BFVW) to find more nests with the current number of bird wardens, benefits nature because more detection leads to higher breeding success, and allows farmers to obtain more financial compensation through the drone. The consortium consists of BFVW, the NHL Stenden research group (lectoraat) Computer Vision & Data Science, and the drone company Aeroscan, who together want to investigate the technical feasibility in order to support the business case. With this technology, the BFVW can map nests more efficiently and, above all, more effectively. In the future, the results of this project will be deployed more broadly by this consortium. Within nature conservation and biodiversity there are many other challenges to which the knowledge developed in this project can be applied.