Introduction: A trauma resuscitation is a dynamic and complex process in which failures can lead to serious adverse events. In several trauma centers, evaluation of trauma resuscitation is part of the hospital's quality assessment program. While video analysis is commonly used, some hospitals rely on live observations, mainly due to ethical and medicolegal concerns. The aim of this study was to compare the validity and reliability of video analysis and live observation for evaluating trauma resuscitations. Methods: In this prospective observational study, validity was assessed by comparing the adherence to 28 advanced trauma life support (ATLS) guideline-related tasks observed by video analysis with that observed live. Interobserver reliability was assessed by calculating the intraclass correlation coefficient (ICC) of observed ATLS-related tasks for live observation and video analysis. Results: Eleven simulated and thirteen real-life resuscitations were assessed. Overall, the percentage of observed ATLS-related tasks performed during simulated resuscitations was 10.4% (p < 0.001) higher when the same resuscitations were analysed by video rather than by live observation. During real-life resuscitations, 8.7% (p < 0.001) more ATLS-related tasks were observed by video review than by live observation. In absolute terms, a mean of 2.9 (simulated) and 2.5 (real-life) ATLS-related tasks per resuscitation that were detected by video analysis were missed by live observers. Interobserver reliability for observed ATLS-related tasks was significantly higher with video analysis than with live observation for both simulated (video analysis: ICC 0.97, 95% CI 0.97-0.98 vs. live observation: ICC 0.69, 95% CI 0.57-0.78) and real-life resuscitations (video analysis: ICC 0.99, 95% CI 0.99-1.00 vs. live observation: ICC 0.86, 95% CI 0.83-0.89).
Conclusion: Video analysis of trauma resuscitations may be more valid and reliable than evaluation by live observers. These outcomes may inform the debate on justifying video review instead of live observation.
The pervasive use of media at current-day festivals thoroughly impacts how these live events are experienced, anticipated, and remembered. This empirical study examined eventgoers’ live media practices – taking photos, making videos, and in-the-moment sharing of content on social media platforms – at three large cultural events in the Netherlands. Taking a practice approach (Ahva 2017; Couldry 2004), the author studied online and offline event environments through extensive ethnographic fieldwork: online and offline observations, and interviews with 379 eventgoers. Analysis of this research material shows that through their live media practices eventgoers are continuously involved in mediated memory work (Lohmeier and Pentzold 2014; Van Dijck 2007), a form of live storytelling that revolves around how they want to remember the event. The article focuses on the impact of mediated memory work on the live experience in the present. It distinguishes two types of mediatised experience of live events: live as future memory and the experiential live. The author argues that memory is increasingly incorporated into the live experience in the present, so much so that, for many eventgoers, mediated memory-making is crucial to having a full live event experience. The article shows how empirical research in media studies can shed new light on key questions within memory studies.
Artificial Intelligence (AI) is becoming reality. Smart ICT products that deliver tailored services are accelerating the digitalisation of society. The major innovations of the coming years – self-driving cars, voice-controlled virtual assistants, car diagnostic systems, robots that autonomously carry out complex tasks – are data-driven and have an AI component. This will affect the role of professionals in all domains: healthcare, construction, financial services, manufacturing, journalism, the judiciary, etc. ICT is no longer following and supporting (an 'enabling' technology), but the engine that sets the transformation of society in motion. Large companies, government agencies, SMEs, and the many start-ups in the Brainport region are actively exploring innovative data-driven scenarios. This is further reinforced by the democratisation of AI: machine learning and deep learning algorithms are available both in open-source software and in cloud solutions, and are thereby accessible to everyone. Data science is becoming 'applied' and is shifting from a PhD specialism to a professional (HBO-level) skill. The stage many companies now find themselves in can be described as: "Help, my AI pilot is successful. What now?" This proposal focuses on the successful implementation of AI within the context of software development. The research question of this proposal is: "How can we apply state-of-the-art data science methods and techniques in a valuable and responsible way for these smart, learning ICT products?" The postdoc will act as a linking pin between all research projects and assignments in which students develop ICT products with AI (machine learning, deep learning) for clients from professional practice. By observing and thinking along with the students, the postdoc can build overview and insight across all cases.
Once that overview exists, the cases to be carried out can be steered so that various sub-aspects are investigated together with the students. Deliverables are reports, guidelines, and frameworks for practice and education, peer-reviewed articles, and knowledge-sharing events.
The AR in Staged Entertainment project focuses on using immersive technologies to strengthen performances and create resilience in live events. In this project, the Experiencelab at BUas explores this by comparing live and pre-recorded events that use Augmented Reality technology to add an extra layer to the user's experience. Experiences will be measured, among other ways, through observational measurements using biometrics. This project runs in the Experience lab of BUas with partners The Effenaar and 4DR Studio and is connected to the networks and goals related to Chronosphere, Digireal and Makerspace. The project is powered by Fieldlab Events (PPS / ClickNL).
Within the food industry there is a need to react rapidly to changing regulatory requirements and consumer preferences by adjusting recipes, processes, and products. Good knowledge of the properties of food ingredients is crucial in this process. Currently this knowledge is scattered across heterogeneous resources such as scientific peer-reviewed articles, databases, recipes, and food blogs, as well as in the experience of food experts. In practice, this prevents the efficient integration and use of this knowledge, leading to inefficiency and missed opportunities. In this project we will build a structured database of properties of food ingredients, focusing in particular on taste and texture properties. Through large-scale collection and text mining of a large number of textual resources, a comprehensive data set on ingredient properties will be created, along with knowledge of the relationships between these ingredients. This database will then be used to find new potential applications for healthy and taste-enhancing ingredient combinations, using network-based discovery methods and artificial intelligence algorithms. A concrete focus will be on application questions formulated by the industrial partners. The resulting hypotheses will be validated in a real-life setting at the premises of the industrial partners. The deliverables of this project will be:
- A reusable open-access ingredient database that is accessible via a user-friendly web portal
- A set of state-of-the-art mining algorithms that can address a wide variety of industry-driven use cases
- Novel product formulations that can be further developed for the consumer and business-to-business market