Background: App-based mobile health exercise interventions can motivate individuals to engage in more physical activity (PA). According to the Fogg Behavior Model, individuals must receive prompts at the right moment to be successfully persuaded into PA; such prompts are referred to as just-in-time (JIT) interventions. The Playful Active Urban Living (PAUL) app is among the first to include 2 types of JIT prompts: JIT adaptive reminder messages to initiate a run or walk, and JIT strength exercise prompts during a walk or run (containing location-based instruction videos). This paper reports on the feasibility of the PAUL app and its JIT prompts.

Objective: The main objective of this study was to examine user experience, app engagement, and users' perceptions and opinions regarding the PAUL app and its JIT prompts, and to explore changes in participants' PA behavior, intrinsic motivation, and perceived capability to perform PA.

Methods: In total, 2 versions of the closed-beta PAUL app were evaluated: a basic version (Basic PAUL) and a JIT adaptive version (Smart PAUL). Both versions send JIT exercise prompts, but they differ in that the Smart PAUL app sends JIT adaptive reminder messages to initiate running or walking, whereas the Basic PAUL app sends reminder messages at randomized times. A total of 23 participants were randomized into 1 of the 2 intervention arms. PA behavior (accelerometer-measured), intrinsic motivation, and perceived capability to perform PA were measured before and after the intervention. After the intervention, participants also completed a user experience questionnaire and were invited for an exit interview to assess their perceptions and opinions of the app in depth.

Results: No differences in PA behavior were observed (Z=-1.433; P=.08), but intrinsic motivation for running and walking and for performing strength exercises increased significantly (Z=-3.342; P<.001 and Z=-1.821; P=.04, respectively). Furthermore, participants increased their perceived capability to perform strength exercises (Z=2.231; P=.01) but not to walk or run (Z=-1.221; P=.12). The interviews indicated that participants were enthusiastic about the strength exercise prompts, which they perceived as personal, fun, and relevant to their health. The reminders were perceived as important initiators for PA, but participants in both app groups explained that the reminder messages were often not sent at times they could exercise. Although participants were enthusiastic about the app's functionalities, technical issues resulted in a poor user experience.

Conclusions: These preliminary findings suggest that the PAUL apps are promising and innovative interventions for promoting PA. Users perceived the strength exercise prompts as a valuable addition to exercise apps. However, to be a feasible intervention, the app must become more stable.
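The pre/post comparisons above report signed-rank statistics (Z and P values). As a minimal illustration of how such a paired, non-parametric statistic is computed, the sketch below implements the Wilcoxon signed-rank sums W+ and W- in plain Python; the data are invented for illustration and are not the study's.

```python
# Illustrative sketch (not the study's analysis): Wilcoxon signed-rank
# sums W+ and W- for paired pre/post measurements.
def wilcoxon_w(pre, post):
    # Signed differences; zero differences are discarded by convention.
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    # Rank the absolute differences (1-based), averaging ranks over ties.
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j < len(order) and abs(diffs[order[j]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j + 1) / 2  # mean of the 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[order[k]] = avg_rank
        i = j
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return w_plus, w_minus

# Invented example: motivation scores before and after an intervention.
pre = [3.0, 2.5, 3.5, 2.0, 4.0, 3.0]
post = [3.5, 3.0, 3.0, 3.5, 4.5, 3.5]
```

The smaller of W+ and W- is what statistical packages typically report (here via a normal approximation as a Z score once n is large enough).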
DOCUMENT
Aims: Medical case vignettes play a crucial role in medical education, yet they often fail to authentically represent diverse patients. Moreover, these vignettes tend to oversimplify the complex relationship between patient characteristics and medical conditions, leading to biased and potentially harmful perspectives among students. Displaying aspects of patient diversity, such as ethnicity, in written cases proves challenging. Additionally, creating these cases places a significant burden on teachers in terms of labour and time. Our objective is to explore the potential of artificial intelligence (AI)-assisted computer-generated clinical cases to expedite case creation and enhance diversity, along with AI-generated patient photographs for more lifelike portrayal.

Methods: In this study, we employed ChatGPT (OpenAI, GPT 3.5) to develop diverse and inclusive medical case vignettes. We evaluated various approaches and identified a set of eight consecutive prompts that can be readily customized to accommodate local contexts and specific assignments. To enhance visual representation, we utilized Adobe Firefly beta for image generation.

Results: Using the described prompts, we consistently generated cases for various assignments, producing sets of 30 cases at a time. We ensured the inclusion of mandatory checks and formatting, completing the process within approximately 60 min per set.

Conclusions: Our approach significantly accelerated case creation and improved diversity, although prioritizing maximum diversity compromised representativeness to some extent. While the optimized prompts are easily reusable, the process itself demands computer skills not all educators possess. To address this, we aim to share all created patients as open educational resources, empowering educators to create cases independently.
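The eight-prompt workflow described above amounts to a prompt chain in which each prompt sees the full conversation so far. The sketch below is a generic chain runner under stated assumptions: the example prompt texts are invented stand-ins for the paper's actual prompts, and the model call is injected as a `send` function so the chaining logic is independent of any particular API or key.

```python
# Hedged sketch: run a fixed sequence of prompts against a chat model,
# carrying the full conversation history forward at every step.
# `send(messages) -> str` is an injected stand-in for a real model call.
def run_prompt_chain(prompts, send):
    messages = []   # alternating user/assistant turns
    replies = []
    for prompt in prompts:
        messages.append({"role": "user", "content": prompt})
        reply = send(messages)
        messages.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return replies

# Invented example prompts, loosely echoing the workflow in the abstract
# (the real study used eight customized, consecutive prompts).
case_prompts = [
    "Generate 30 short clinical case vignettes for this assignment.",
    "Vary patient names, ages, and backgrounds to maximize diversity.",
    "Check every case for stereotyping and revise where needed.",
    "Format the final cases as a numbered list.",
]
```

Because later prompts build on earlier answers, sending the accumulated history (rather than each prompt in isolation) is what makes the checks and formatting steps apply to the previously generated cases.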
DOCUMENT
Prompt design can be understood similarly to query design: a prompt aiming to understand cultural dimensions in visual research forces the AI to make sense of ambiguity, as a way of probing its training dataset and biases (Niederer, S. and Colombo, G., 'Visual Methods for Digital Research'). This moves away from prompt engineering and from efforts to craft "code-like" prompts that suppress ambiguity and prevent the AI from bringing biases to the surface. Our idea is to keep the ambiguity of natural language present in the image descriptions and let it flow through different stages (degrees) of the broken telephone dynamic. This way, we have less control over the result, or over the selection of an ideal result, and more questions about the dynamics implicit in the biases present in the results obtained.

Unlike textual or mathematical results, where prompt chains or asking the AI to explain how it reached a result might be enough, images and AI-assisted visual methods demand new methods. Exploring and developing such an approach is the main goal of this research project, which is particularly interested in possible biases and unexplored patterns in AI's image affordances.

How can we detect small biases in how an AI describes images and creates images from descriptions? What exactly do the words the AI writes when describing an image stand for? When it detects a 'human' or 'science', for example, what elements or archetypes remain invisible between the prompt and the image created or described?

Turning an AI's description of an image into a new image could give us a glimpse behind the scenes. In the broken telephone game, small misperceptions between telling and hearing, coding and decoding, produce large divergences in the final result, and the cultural factors in between have been studied extensively. To amplify and understand possible biases, we can check how this new image is in turn described by the AI, starting a broken telephone cycle.
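The broken telephone cycle sketched above can be expressed as a simple alternation of describe and generate steps. In the sketch below, `describe` (image to text) and `generate` (text to image) are injected stand-ins for real vision- and image-model calls, so the loop itself stays runnable without any API; the trace it returns is what one would inspect for drift and bias at each degree.

```python
# Hedged sketch of the broken-telephone cycle: describe an image, generate
# a new image from that description, and repeat, keeping every intermediate
# artifact so drift between degrees can be inspected afterwards.
def broken_telephone(image, describe, generate, degrees=3):
    trace = [("image", image)]
    current = image
    for _ in range(degrees):
        description = describe(current)   # image -> text (vision model)
        trace.append(("description", description))
        current = generate(description)   # text -> image (image model)
        trace.append(("image", current))
    return trace
```

Keeping the full trace, rather than only the final image, is the point of the method: the divergence between degree n and degree n+1 is where the model's descriptive choices, and thus its biases, become visible.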
This process could shed light not only on the gap between an AI's image descriptions and its capacity to reconstruct images from those descriptions used as prompts, but also illuminate biases and patterns in AI image description and in image creation based on description. It is in line with previous projects on image clustering and image prompt analysis (see reference links) and with questions such as the identification of AI image biases, cross-model analysis, reverse engineering through prompts, image clustering, and the analysis of large datasets of images from online image- and video-based platforms. The experiment becomes even more relevant in light of recent studies (Shumailov et al., 2024) showing that AI models trained on AI-generated data will eventually collapse.

To frame this analysis, the proposal by Munn, Magee and Arora (2023), titled Unmaking AI Imagemaking, introduces three methodological approaches for investigating AI image models: unmaking the ecosystem, unmaking the data, and unmaking the outputs.

First, these authors use the idea of the ecosystem to describe the socio-technical implications that surround AI models: the place where they were developed; their owners, partners, or supporters; and their interests, goals, and impositions. "Research has already identified how these image models internalize toxic stereotypes (Birnhane 2021) and reproduce forms of gendered and ethnic bias (Luccioni 2023), to name just two issues" (Munn et al., 2023, p. 2).

There are also differences between the models that currently dominate the market. Although Stable Diffusion seems to be the most open, given its origin, biases surface even more quickly when working with images in this model than in others.
“In this framing, Stable Diffusion becomes an internet-based tool, which can be used and abused by “the people,” rather than a corporate product, where responsibility is clear, quality must be ensured, and toxicity must be mitigated” (Munn et al., 2023, p. 5).

To unmake the data, it is important to ask about the source of the data used and the interests behind its extraction. According to the description of the project “Creating an Ad Library Political Observatory”: “This project aims to explore diverse approaches to analyze and visualize the data from Meta’s ad library, which includes Instagram, Facebook, and other Meta products, using LLMs. The ultimate goal is to enhance the Ad Library Political Observatory, a tool we are developing to monitor Meta’s ad business.” That is to say, the images were taken from political advertising on the social network Facebook, as part of an observation process that seeks to make evident the investments in advertising around politics. These are prepared images in terms of what is visible in the background, the position and posture of the figures, and the visible objects. In general, we could say that we are dealing with staged images. This matters because the initial information the AI describes is itself a representation, a visual creation.
LINK
Mycelium biocomposites (MBCs) are a fairly new group of materials. MBCs are non-toxic and carbon-neutral cutting-edge circular materials obtained from agricultural residues and fungal mycelium, the vegetative part of fungi. Growing within days without complex processes, they offer versatile and effective solutions for diverse applications thanks to their customizable textures and characteristics achieved through controlled environmental conditions. This project involves a collaboration between MNEXT and First Circular Insulation (FC-I) to tackle challenges in MBC manufacturing, particularly the extended time and energy-intensive nature of the fungal incubation and drying phases. FC-I proposes an innovative deactivation method involving electrical discharges to expedite these processes, currently awaiting patent approval. However, a critical gap in scientific validation prompts the partnership with MNEXT, leveraging their expertise in mycelium research and MBCs. The research project centers on evaluating the efficacy of the innovative mycelium growth deactivation strategy proposed by FC-I. This one-year endeavor permits a thorough investigation, implementation, and validation of potential solutions, specifically targeting issues related to fungal regrowth and the preservation of sustained material properties. The collaboration synergizes academic and industrial expertise, with the dual purpose of achieving immediate project objectives and establishing a foundation for future advancements in mycelium materials.
Birds disperse seeds, pollinate plants, and clean up nature; they are indispensable for a healthy ecosystem. Protecting endangered animals is of great societal importance; biodiversity supports a healthy climate in the Netherlands. To protect birds, nests are detected and registered. Farmers are then informed about the presence of nests on their land so that they do not destroy them during their agricultural work. In the Netherlands, farmers are compensated for protecting nests, so economic interests align with nature conservation. In this project, technological innovation strengthens the collaboration between farmers and nature and bird conservation: drones are combined with artificial intelligence to carry out nest monitoring together with volunteers. This helps the Bond Friese VogelWachten (BFVW) to locate more nests with its current number of bird wardens, benefits nature because more detection leads to higher breeding success for birds, and lets farmers obtain more financial compensation through the drone. The consortium consists of the BFVW, the NHL Stenden research group (lectoraat) Computer Vision & Data Science, and the drone company Aeroscan, who together want to investigate the technical feasibility in order to support the business case. With this technology, the BFVW can map nests more efficiently and, above all, more effectively. In the future, the results of this project will be applied more broadly by this consortium: within nature conservation and biodiversity there are many other challenges to which the knowledge developed in this project can be applied.
Background: Research shows that the reading proficiency of both pre-vocational secondary (vmbo) pupils and university of applied sciences (hbo) students leaves much to be desired; reading to learn does not come easily to them. Both hbo and vmbo students often prove insufficiently able to acquire knowledge from study texts in a satisfactory way. In summary, the questions from educational practice are: 1) How can vmbo pupils and hbo students learn, within school, to use information from texts for (study tasks aimed at) knowledge acquisition? 2) To what extent can ICT be used when carrying out reading-to-learn tasks in small groups?

Objective: The goal of the project is to develop a new learning environment for reading to learn in vmbo and hbo, and to determine its effectiveness. The research consists of 2 parts. 1) Design research. The research team optimizes the learning environment in collaboration with teacher educators and vmbo teachers, and the ICT support is tuned to the practice of content-area teaching. 2) Effect research. Two randomized controlled trials will be conducted (one in vmbo and one in hbo) to test the effects of the new learning environment on pupils' and students' reading-to-learn proficiency. Before the experiments, the pupils/students take pretests to map their vocabulary and reading-to-learn skills; the results are used as covariates.

Intended results: The project will result in: 1) more attention to support for reading texts in subject teaching; 2) the development of a new learning environment for reading to learn; 3) improved reading skills of vmbo pupils and hbo students. The consortium will disseminate the generated knowledge about didactics for reading to learn, and the role of the ICT-supported learning environment in it, through scientific articles and a dissertation, presentations at educational conferences, and publications in national professional journals for vmbo and hbo. The didactics developed will be integrated by Hogeschool Rotterdam into the curriculum of its teacher training programme and disseminated within the university of applied sciences, the consortium schools, and the schools of the Rotterdam school board BOOR. Stichting Lezen, an intermediary between science and professional practice, will also disseminate knowledge about the didactics. Publisher ThiemeMeulenhoff will help make the didactics and learning environment available to the education field.