Video games clearly have great educational potential, both for formal and informal learning, and this avenue is being thoroughly investigated in the psychology and education literature. However, there appears to be a disconnect between social science academic research and the game development sector, in that research and development practices rarely inform each other. This paper presents a two-part analysis of this communicative disconnect based on investigations carried out within the H2020 Gaming Horizons project. The first part is a literature review that identified the main topics of focus in the social sciences literature on games, as well as the chief recommendations authors express. The second part examines 73 interviews with 30 developers, 14 researchers, 13 players, 12 educators, and 4 policy makers, investigating how they perceived games and gaming. The study highlights several factors contributing to the disconnect: different priorities and dissemination practices; the lag between innovation in the games market and research advancements; low accessibility of academic research; and a disproportionate academic focus on serious games compared to entertainment games. The authors suggest that closer contact between researchers and developers might be sought by diversifying academic dissemination channels, promoting conferences involving both groups, and developing research partnerships with entertainment game companies.
Prompt design can be understood similarly to query design: a prompt aimed at understanding cultural dimensions in visual research forces the AI to make sense of ambiguity, as a way of probing its training dataset and biases (Niederer, S. and Colombo, G., ‘Visual Methods for Digital Research’). This moves away from prompt engineering and from efforts to craft “code-like” prompts that suppress ambiguity and prevent the AI from bringing biases to the surface. Our idea is to keep the ambiguity present in the image descriptions, as in natural language, and let it flow through different stages (degrees) of the broken-telephone dynamics. This way we have less control over the result, or over the selection of an ideal result, and more questions about the dynamics implicit in the biases present in the results obtained.

Unlike textual or mathematical results, where prompt chains or asking the AI to explain how it reached a result might be enough, images and visual methods assisted by AI demand new methods. Exploring and developing such an approach is the main goal of this research project, which is particularly interested in possible biases and unexplored patterns in AI’s image affordances. How could we detect subtle biases in the way AI describes images and creates images based on descriptions? What exactly do the words written by AI when describing an image stand for? When it detects a ‘human’ or ‘science’, for example, what elements or archetypes remain invisible between the prompt and the image created or described?

Turning an AI’s image description into a new image could give us a glimpse behind the scenes. In the broken telephone game, small misperceptions between telling and hearing, coding and decoding, produce large divergences in the final result, and the cultural factors in between have been widely studied. To amplify and understand possible biases, we can check how this new image is in turn described by the AI, starting a broken-telephone cycle.
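The broken-telephone cycle described above alternates two operations, description and generation, while keeping every intermediate artifact for later comparison. A minimal sketch of that control flow follows; `describe_image` and `generate_image` are hypothetical stand-ins for calls to a vision-language model and a text-to-image model, implemented here as stubs so the loop itself can run:

```python
# Sketch of the broken-telephone loop between AI image description and
# image generation. The two model calls are hypothetical stubs; a real
# study would replace them with actual multimodal and image-model APIs.

def describe_image(image):
    # Stub: a real implementation would call a vision-language model.
    return f"description of {image}"

def generate_image(prompt):
    # Stub: a real implementation would call a text-to-image model.
    return f"image from '{prompt}'"

def broken_telephone(seed_image, degrees=3):
    """Alternate description and generation for `degrees` rounds,
    keeping the full chain so divergences between stages can be
    compared afterwards."""
    chain = [seed_image]
    current = seed_image
    for _ in range(degrees):
        description = describe_image(current)
        current = generate_image(description)
        chain.extend([description, current])
    return chain

chain = broken_telephone("seed.png", degrees=2)
# chain holds: seed image, description 1, image 1, description 2, image 2
```

Keeping the whole chain, rather than only the final output, is what allows the divergences (and the biases driving them) to be located at a specific stage of the cycle.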
This process could shed light not just on the gap between AI image description and the AI’s capacity to reconstruct images using that description as part of a prompt, but also illuminate biases and patterns in AI image description and in image creation based on description. It is in line with previous projects on image clustering and image prompt analysis (see reference links), and with questions such as the identification of AI image biases, cross-model analysis, reverse engineering through prompts, image clustering, and the analysis of large datasets of images from online image- and video-based platforms. The experiment becomes even more relevant in light of recent studies (Shumailov et al., 2024) showing that AI models trained on AI-generated data will eventually collapse.

To frame this analysis, the proposal by Munn, Magee and Arora (2023), titled Unmaking AI Imagemaking, introduces three methodological approaches for investigating AI image models: unmaking the ecosystem, unmaking the data, and unmaking the outputs. First, these authors use the idea of the ecosystem to describe the socio-technical implications that surround AI models: the place where they have been developed; the owners, partners, or supporters; and their interests, goals, and impositions. “Research has already identified how these image models internalize toxic stereotypes (Birnhane 2021) and reproduce forms of gendered and ethnic bias (Luccioni 2023), to name just two issues” (Munn et al., 2023, p. 2). There are also differences between the models that currently dominate the market. Although Stable Diffusion seems to be the most open due to its origin, when working with images in this model, biases appear even more quickly than in other models.
“In this framing, Stable Diffusion becomes an internet-based tool, which can be used and abused by “the people,” rather than a corporate product, where responsibility is clear, quality must be ensured, and toxicity must be mitigated” (Munn et al., 2023, p. 5). To unmake the data, it is important to ask about the source of the data used and the interests behind its extraction. According to the description of the project “Creating an Ad Library Political Observatory”: “This project aims to explore diverse approaches to analyze and visualize the data from Meta’s ad library, which includes Instagram, Facebook, and other Meta products, using LLMs. The ultimate goal is to enhance the Ad Library Political Observatory, a tool we are developing to monitor Meta’s ad business.” That is to say, the images were taken from political advertising on the social network Facebook, as part of an observation process that seeks to make evident the investments in advertising around politics. These are prepared images in terms of what is seen in the background, the position and posture of the characters, and the visible objects. In general, we could say that we are dealing with staged images. This is important since the initial information that the AI describes is itself a representation, a visual creation.
UNLABELLED: Public library makerspaces intend to contribute to the development of children from marginalized communities through education in digital technology and creativity, and by stimulating young people to experience new social roles and develop their identity. Learning in these informal settings puts demands on the organization of the makerspace, the activities, and the support of the children. The present study investigates how children evaluate their activities and experiences in a public library makerspace, both in after-school programs and during school visits. Furthermore, it examines the effectiveness of the training program for the makerspace coaches. The study covers self-evaluations by children (n = 307), and interviews with children (n = 27) and makerspace coaches (n = 11). Children report many experiences concerning creating (maker skills, creativity) and the maker mindset (motivation, persistence, confidence). Experiences with collaboration (helping each other) were mentioned to a lesser extent. Critical features of the training program for makerspace coaches were (i) adaptation to the prior knowledge, skills, and needs of makerspace coaches, (ii) input from expert maker educators, (iii) emphasis on learning by doing, (iv) room for self-directed learning, and (v) collaboration with colleagues. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s41979-022-00070-w.
Physical rehabilitation programs revolve around the repetitive execution of exercises, since this has been proven to lead to better rehabilitation results. Although beginning the motor (re)learning process early is paramount to obtaining good recovery outcomes, patients do not normally see or experience any short-term improvement, which takes a toll on their motivation. Therefore, patients find it difficult to stay engaged in seemingly mundane exercises, not only in terms of adhering to the rehabilitation program, but also in terms of proper execution of the movements. One way in which this motivation problem has been tackled is to employ games in the rehabilitation process. These games are designed to reward patients for performing the exercises correctly or regularly. The rewards can take many forms, for instance providing an experience that is engaging (fun), one that is aesthetically pleasing (appealing visual and aural feedback), or one that employs gamification elements such as points, badges, or achievements. However, even though some of these serious game systems are designed together with physiotherapists and with the patients’ needs in mind, many of them end up not being used consistently during physical rehabilitation past the first few sessions (i.e., the novelty effect). Thus, in this project, we aim to 1) identify, by means of literature reviews, focus groups, and interviews with the involved stakeholders, why this is happening, 2) develop a set of guidelines for the successful deployment of serious games for rehabilitation, and 3) develop an initial implementation process and ideas for potential serious games. In a follow-up application, we intend to build on this knowledge and apply it in the design of a (set of) serious game(s) for rehabilitation to be deployed at one of the partner centers, and to conduct a longitudinal evaluation to measure the success of the application of the deployment guidelines.
Public debate about the influence of AI on our lives is flourishing. The recurring question is whether AI applications, and recommender systems in particular, are a threat or a salvation. The impact of choosing a film for tonight with the help of Netflix's recommender system is still limited. The impact of dating sites, navigation systems, and social media (all systems that use algorithms to filter information or recommend choices) is already greater. The impact of recommender systems in, for example, healthcare, recruitment and selection, fraud detection, and the assessment of mortgage applications is enormous, at both the individual and the societal level. It is therefore urgent that recommender systems in particular be designed according to the values of Responsible AI: safe, fair, reliable, inclusive, transparent, and accountable. Designing Responsible AI properly requires solving technical, contextual, and interaction questions. Much progress has already been made at the technical and societal levels, respectively through research into algorithms that incorporate values such as inclusivity into their calculations, and through the development of legal frameworks. About implementation at the interaction level, however, little concrete knowledge exists. It is known that users who have interaction options to steer or supplement an algorithm experience more transparency and trustworthiness. However, poorly designed interaction options, or a mismatch between interaction and context, cost time and cause mental overload, frustration, and a sense of incompetence. They obscure rather than lead to transparency. Interface designers (UX/UI designers) lack systematic, concrete knowledge about these interaction options, their applicability, and their ethical limits. This restricts their ability to contribute to Responsible AI at the interaction level.
They would therefore welcome a pattern library of interaction options, annotated with research on how they work and where they can be deployed. No such library currently exists, and with this project we want to make a substantial contribution to its development.
Electrohydrodynamic Atomization (EHDA), also known as Electrospray (ES), is a technology that uses strong electric fields to manipulate liquid atomization. Among many other areas, electrospray is used as an important tool in biomedical applications (droplet encapsulation), water technology (thermal desalination and metal recovery), and materials science (fabrication of nanofibers and nanospheres, metal recovery, selective membranes, and batteries). A complete review of the particularities of this tool and its applications was recently published (2018) as a special edition of the Journal of Aerosol Sciences. One of the main known bottlenecks of this technique is that the necessary strong electric fields create a risk of electric discharges. Such discharges destabilize the process and can also pose an explosion risk, depending on the application. The goal of this project is to develop a reliable tool to prevent discharges in electrospray applications.