The pervasive use of media at present-day festivals thoroughly shapes how these live events are experienced, anticipated, and remembered. This empirical study examined eventgoers’ live media practices – taking photos, making videos, and in-the-moment sharing of content on social media platforms – at three large cultural events in the Netherlands. Taking a practice approach (Ahva 2017; Couldry 2004), the author studied online and offline event environments through extensive ethnographic fieldwork: online and offline observations, and interviews with 379 eventgoers. Analysis of this research material shows that, through their live media practices, eventgoers are continuously involved in mediated memory work (Lohmeier and Pentzold 2014; Van Dijck 2007), a form of live storytelling that revolves around how they want to remember the event. The article focuses on the impact of mediated memory work on the live experience in the present. It distinguishes two types of mediatised experience of live events: live as future memory and the experiential live. The author argues that memory is increasingly incorporated into the live experience in the present, so much so that, for many eventgoers, mediated memory-making is crucial to having a full live event experience. The article shows how empirical research in media studies can shed new light on key questions within memory studies.
As the Dutch population is aging, the field of music-in-healthcare keeps expanding. Healthcare, in institutions and at home, is multiprofessional and demands interprofessional collaboration. Musicians are sought-after collaborators in the social and healthcare fields, yet they remain lesser-known agents within this multiprofessional group. Although live music supports social-emotional wellbeing and vitality, and nurtures compassionate care delivery, interprofessional collaboration between musicians, social work, and healthcare professionals remains marginal. This limits the optimisation and integration of music-making in care. A significant part of this problem is the lack of collaborative transdisciplinary education for music, social work, and healthcare students that deep-dives into the development of interprofessional skills. To meet the growing demand for musical collaborations, particularly from elderly care organisations, and to innovate musical contributions to the quality of social and healthcare in the Northern Netherlands, a transdisciplinary education for music, physiotherapy, and social work studies is needed. This project aims to equip multiprofessional student groups at Hanze with interprofessional skills through co-creative transdisciplinary learning aimed at innovating and improving musical collaborative approaches for working with vulnerable, often older people. The education builds upon experiential learning in Learning LABs and collaborative project work in real-life care settings, supported by transdisciplinary community forming. The expected outcomes include a new concept for transdisciplinary education in HBO curricula, concrete building blocks for a transdisciplinary arts-in-health minor, innovative student-led approaches for supporting the care and wellbeing of (older) vulnerable people, enhanced integration of musicians in interprofessional care teams, and new interprofessional structures for educational collaboration between music, social work, and healthcare faculties.
The increasing amount of electronic waste (e-waste) urgently calls for innovative solutions within circular economy models for this industry. Proper sorting of e-waste is essential for recovering valuable materials and minimizing environmental harm. Conventional e-waste sorting is a time-consuming process that involves laborious manual classification of complex and diverse electronic components. Moreover, the sector lacks skilled labor, making automation of sorting procedures an urgent necessity. The project “AdapSort: Adaptive AI for Sorting E-Waste” aims to develop an adaptable AI-based system for optimal and efficient e-waste sorting. The project combines deep learning object detection algorithms with open-world vision-language models to enable adaptive AI models that incorporate operator feedback as part of a continuous learning process. The project starts with a problem analysis, including use case definition, requirement specification, and collection of labeled image data. AI models will be trained and deployed on edge devices for real-time sorting and scalability. Next, the feasibility of developing adaptive AI models that leverage state-of-the-art open-world vision-language models will be investigated. Human-in-the-loop learning is an important feature of this phase, in which the user can provide ongoing feedback to further refine the model. An interface will be built to enable human intervention, facilitating real-time improvement of classification accuracy and the sorting of different items. Finally, the project will deliver a proof of concept for the AI-based sorter, validated through selected use cases in collaboration with industrial partners. By integrating AI with human feedback, this project aims to facilitate e-waste management and serve as a foundation for larger projects.
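As a concrete illustration of the open-world classification step described above, the following is a minimal sketch in Python, assuming a publicly available CLIP model accessed through the Hugging Face transformers library; the label set, model choice, and operator-feedback hook are illustrative assumptions, not the project's actual implementation.

    from PIL import Image
    import torch
    from transformers import CLIPModel, CLIPProcessor

    # Hypothetical e-waste label set; the project's real categories would be defined during the use case analysis.
    LABELS = ["printed circuit board", "battery", "cable", "plastic housing", "metal casing"]

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def classify(image_path: str) -> str:
        """Zero-shot classification of an e-waste item against the current text label set."""
        image = Image.open(image_path).convert("RGB")
        inputs = processor(text=LABELS, images=image, return_tensors="pt", padding=True)
        with torch.no_grad():
            outputs = model(**inputs)
        probs = outputs.logits_per_image.softmax(dim=-1)[0]
        return LABELS[int(probs.argmax())]

    def add_operator_label(new_label: str) -> None:
        """Human-in-the-loop hook (assumed interface): an operator extends the open label set at runtime."""
        if new_label not in LABELS:
            LABELS.append(new_label)

In a setup like this, an operator could correct a misclassified item and the new label would immediately become part of the open vocabulary, without retraining a closed-set detector.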
Drones have been hailed as the camera of 2024, owing to enormous growth in the relevant technologies and applications such as smart agriculture, transportation, inspection, logistics, surveillance and interaction. Commercial solutions for deploying drones in different workplaces have therefore become a crucial demand for companies. Warehouses are one of the most promising industrial domains for using drones to automate operations such as inventory scanning, transporting goods to the delivery lines, and on-demand area monitoring. On the other hand, deploying drones (or even mobile robots) in such challenging environments requires accurate state estimation of position and orientation to allow autonomous navigation. GPS signals are not available in warehouses, since the closed roof blocks them and structures deflect the signal. Vision-based positioning systems are the most promising technique for achieving reliable position estimation in indoor environments, because they rely on low-cost sensors (cameras), exploit dense environmental features, and can operate both indoors and outdoors. This proposal therefore aims to address a crucial question for industrial applications together with our industrial partners: to explore the limitations of, and develop solutions towards, robust state estimation of drones in challenging environments such as warehouses and greenhouses. The results of this project will serve as a baseline for developing further navigation technologies, such as mapping, localization, docking and maneuvering, towards the safe, fully autonomous deployment of drones in GPS-denied areas.
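As an illustration of the vision-based positioning mentioned above, the following is a minimal monocular visual-odometry sketch in Python using OpenCV feature matching; the camera intrinsics are placeholders and the code is an assumed baseline, not the project's actual positioning system.

    import cv2
    import numpy as np

    # Placeholder camera intrinsics; a real deployment would use the drone camera's calibration.
    K = np.array([[700.0, 0.0, 320.0],
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])

    orb = cv2.ORB_create(nfeatures=2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def relative_pose(frame_prev, frame_curr):
        """Estimate rotation R and (scale-ambiguous) translation t between two grayscale frames."""
        kp1, des1 = orb.detectAndCompute(frame_prev, None)
        kp2, des2 = orb.detectAndCompute(frame_curr, None)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t

Monocular estimates like this recover translation only up to an unknown scale, which is one of the limitations a project of this kind would need to address, for example with stereo cameras, depth sensors, or known markers in the warehouse.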
Research group (lectoraat), part of NHL Stenden Hogeschool