This paper introduces and contextualises Climate Futures, an experiment in which AI was repurposed as a ‘co-author’ of climate stories and a co-designer of climate-related images that facilitate reflection on the present and future(s) of living with climate change. It converses with histories of writing and computation, including surrealist ‘algorithmic writing’, recombinatory poems and ‘electronic literature’. At its core lies a reflection on how machine learning’s associative, predictive and regenerative capacities can be employed toward playful, critical and contemplative ends. Our goal is not to automate writing (as in product-oriented applications of AI). Instead, as poet Charles Hartman argues, ‘the question isn’t exactly whether a poet or a computer writes the poem, but what kinds of collaboration might be interesting’ (1996, p. 5). STS scholars critique labs as future-making sites and machine learning modelling practices, describing the latter, for example, as forms of fiction. Building on these critiques and in line with ‘critical technical practice’ (Agre, 1997), we embed our critique of ‘making the future’ in how we employ machine learning to design a tool for looking ahead and telling stories about life with climate change. This has involved engaging with climate narratives and machine learning from the critical and practical perspectives of artistic research. We trained machine learning algorithms (i.e. GPT-2 and AttnGAN) on climate fiction novels (as a dataset of cultural imaginaries of the future). We prompted them to produce new climate fiction stories and images, which we edited to create a tarot-like deck and a story-book, thus also playfully engaging with machine learning’s predictive associations. The tarot deck is designed to facilitate conversations about climate change. How might we imagine the future beyond scenarios of resilience and dystopia? How might we aid our transition into different ways of caring for the planet and each other?
Over the past two years I have conducted an extensive literature and tool review to answer the question: “What should software engineers learn about building production-ready machine learning systems?”. During my research I noted that because the discipline of building production-ready machine learning systems is so new, it is not easy to get the terminology straight. People write about it from different perspectives and backgrounds and have not yet found each other to join forces. At the same time, the field is moving fast and is far from mature. My focus on material that is ready to be used with our bachelor-level students (applied software engineers, in profession-oriented education) helped me consolidate everything I found into a body of knowledge for building production-ready machine learning (ML) systems. In this post I will first define the discipline and introduce the terminology for AI engineering and MLOps.
In sports, inertial measurement units are often used to measure the orientation of human body segments. A Madgwick (MW) filter can be used to obtain accurate inertial measurement unit (IMU) orientation estimates. This filter combines two different orientation estimates by applying a correction of the (1) gyroscope-based estimate in the direction of the (2) earth frame-based estimate. However, in sports situations characterized by relatively large linear accelerations and/or nearby magnetic sources, such as wheelchair sports, obtaining accurate IMU orientation estimates is challenging. In these situations, applying the MW filter in the regular way, i.e., with the same magnitude of correction at all time frames, may lead to estimation errors. Therefore, in this study, the MW filter was extended with machine learning to distinguish instances in which a small correction magnitude is beneficial from instances in which a large correction magnitude is beneficial, to eventually arrive at accurate body segment orientations in IMU-challenging sports situations. A machine learning algorithm was trained to make this distinction based on raw IMU data. Experiments on wheelchair sports were performed to assess the validity of the extended MW filter, and to compare the extended MW filter with the original MW filter against a motion capture-based reference system. Results indicate that the extended MW filter performs better than the original MW filter in assessing instantaneous trunk inclination (7.6° vs. 11.7° root-mean-squared error, RMSE), especially during dynamic, IMU-challenging situations with a moving athlete and wheelchair. RMSE improvements of up to 45% were obtained for the extended MW filter compared with the original MW filter. To conclude, the machine learning-based extended MW filter has acceptable accuracy and performs better than the original MW filter for the assessment of body segment orientation in IMU-challenging sports situations.
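The adaptive-correction idea in the abstract above can be sketched as a simplified one-axis complementary filter whose correction gain is switched by a classifier. This is a minimal illustration, not the authors' implementation: the actual extended MW filter operates on full 3D orientation, and the trained model is replaced here by a hypothetical threshold rule (`predict_correction_gain`) on accelerometer magnitude; the function names, thresholds, and gain values are all illustrative assumptions.

```python
import numpy as np

def predict_correction_gain(raw_imu_window):
    """Hypothetical stand-in for the trained classifier: return a small
    correction gain when the accelerometer magnitude deviates strongly from
    gravity (an IMU-challenging, high-linear-acceleration instance), and a
    large gain otherwise. A real system would use a model trained on raw
    IMU data, as described in the abstract."""
    accel_magnitude = np.linalg.norm(raw_imu_window["accel"], axis=1).mean()
    gravity = 9.81  # m/s^2
    return 0.01 if abs(accel_magnitude - gravity) > 1.0 else 0.1

def update_inclination(theta_deg, gyro_rate_dps, accel, dt, gain):
    """One complementary-filter step for a single inclination angle:
    integrate the gyroscope rate, then pull the result toward the
    accelerometer-based (earth-frame) estimate by `gain`."""
    theta_gyro = theta_deg + gyro_rate_dps * dt          # gyro integration
    theta_accel = np.degrees(np.arctan2(accel[0], accel[2]))  # earth-frame estimate
    return (1.0 - gain) * theta_gyro + gain * theta_accel
```

In a quasi-static instance the classifier selects the large gain, trusting the earth-frame estimate; during large linear accelerations it selects the small gain, trusting gyro integration instead, which mirrors the small-versus-large correction-magnitude distinction the extended filter learns.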