Introduction
Mechanical power of ventilation, a summary parameter reflecting the energy transferred from the ventilator to the respiratory system, is associated with outcomes. INTELLiVENT–Adaptive Support Ventilation is an automated ventilation mode that adjusts ventilator settings according to algorithms that target a low work and force of breathing. This study aimed to compare mechanical power between automated ventilation by means of INTELLiVENT–Adaptive Support Ventilation and conventional ventilation in critically ill patients.

Materials and methods
International, multicenter, randomized crossover clinical trial in patients who were expected to need invasive ventilation for more than 24 hours. Patients were randomly assigned to start with a 3-hour period of either automated or conventional ventilation, after which the alternate ventilation mode was selected. The primary outcome was mechanical power in passive and active patients; secondary outcomes included key ventilator settings and ventilatory parameters that affect mechanical power.

Results
A total of 96 patients were randomized. Median mechanical power did not differ between automated and conventional ventilation (15.8 [11.5–21.0] versus 16.1 [10.9–22.6] J/min; mean difference –0.44 (95% CI –1.17 to 0.29) J/min; P = 0.24). Subgroup analyses showed that mechanical power was lower with automated ventilation in passive patients (16.9 [12.5–22.1] versus 19.0 [14.1–25.0] J/min; mean difference –1.76 (95% CI –2.47 to –10.34) J/min; P < 0.01), but not in active patients (14.6 [11.0–20.3] versus 14.1 [10.1–21.3] J/min; mean difference 0.81 (95% CI –2.13 to 0.49) J/min; P = 0.23).

Conclusions
In this cohort of unselected critically ill invasively ventilated patients, automated ventilation by means of INTELLiVENT–Adaptive Support Ventilation did not reduce mechanical power. A reduction in mechanical power was seen only in passive patients.

Study registration
Clinicaltrials.gov (study identifier NCT04827927), April 1, 2021

URL of trial registry record
https://clinicaltrials.gov/study/NCT04827927?term=intellipower&rank=1
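The mechanical power values above are reported in J/min. As background, a widely used simplified equation (Gattinoni and colleagues) estimates mechanical power during volume-controlled ventilation from routine bedside settings. The sketch below illustrates that equation only; the function name and the example settings are illustrative assumptions, not values or methods from this trial:

```python
def mechanical_power(rr, vt_l, p_peak, p_plat, peep):
    """Simplified mechanical power estimate for volume-controlled ventilation:
    MP [J/min] = 0.098 * RR * VT * (Ppeak - 0.5 * (Pplat - PEEP)),
    with RR in breaths/min, VT in litres, and pressures in cmH2O."""
    return 0.098 * rr * vt_l * (p_peak - 0.5 * (p_plat - peep))

# Hypothetical settings: 20 breaths/min, VT 0.45 L, Ppeak 25, Pplat 20, PEEP 8 cmH2O
print(round(mechanical_power(20, 0.45, 25, 20, 8), 1))  # ~16.8 J/min
```

Values in this range are of the same order as the medians reported above, which is why mechanical power of roughly 15–20 J/min is typical for adult invasive ventilation.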
The report from Inholland University is dedicated to the impacts of data-driven practices on non-journalistic media production and the creative industries. It explores trends, showcases advancements, and highlights opportunities and threats in this dynamic landscape. By examining various stakeholders' perspectives, it provides actionable insights for navigating challenges and leveraging opportunities. Through curated showcases and analyses, the report underscores the transformative potential of data-driven work while addressing concerns such as copyright issues and AI's role in replacing human artists. The findings culminate in a comprehensive overview that guides informed decision-making in the creative industry.
Pauses in speech may be categorized on the basis of their length. Some authors claim that there are two categories (short and long pauses) (Baken & Orlikoff, 2000), others that there are three (Campione & Véronis, 2002), or even more. Pause lengths may be affected in speakers with aphasia. Individuals with dementia probably caused by Alzheimer's disease (AD) or Parkinson's disease (PD) interrupt their speech with longer and more frequent pauses. One infrequent form of dementia, non-fluent primary progressive aphasia (PPA-NF), is even defined by an unusual interruption pattern ("hesitant and labored speech"). Although human listeners can often easily distinguish pathological speech from healthy speech, it is not yet clear how software can detect the relevant patterns. The research question in this study is: how can software measure the statistical parameters that characterize the disfluent speech of PPA-NF/AD/PD patients in connected conversational speech?
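To illustrate the kind of measurement this research question concerns, the sketch below detects pauses with a simple amplitude threshold and returns their durations, from which statistics such as mean pause length or a short/long split could then be computed. The threshold and minimum pause length are arbitrary assumptions, and the signal is synthetic; analysing real pathological speech would require a more robust voice activity detector:

```python
import numpy as np

def pause_lengths(signal, sr, threshold=0.01, min_pause=0.15):
    """Return durations (s) of stretches whose amplitude stays below a threshold."""
    silent = np.abs(signal) < threshold
    edges = np.diff(silent.astype(int))
    starts = np.where(edges == 1)[0] + 1   # first sample of each silent run
    ends = np.where(edges == -1)[0] + 1    # one past the last sample of each run
    if silent[0]:
        starts = np.r_[0, starts]
    if silent[-1]:
        ends = np.r_[ends, len(silent)]
    durations = (ends - starts) / sr
    # discard sub-perceptual gaps (e.g. zero crossings within voiced speech)
    return durations[durations >= min_pause]

# Synthetic example: 1 s tone, 0.5 s silence, 1 s tone at 16 kHz
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 220 * t)
signal = np.concatenate([tone, np.zeros(sr // 2), tone])
print(pause_lengths(signal, sr))  # detects the single ~0.5 s pause
```

Given such per-pause durations over a recording, the distribution could then be fitted or binned to test the two- or three-category claims mentioned above.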
The PhD research by Joris Weijdom studies the impact of collective embodied design techniques in collaborative mixed-reality environments (CMREs) in art and engineering design practice and education. He aims to stimulate invention and innovation from an early stage of the collective design process. Joris combines theory and practice from the performing arts, human-computer interaction, and engineering to develop CMRE configurations, strategies for their creative implementation, and an embodied immersive learning pedagogy for students and professionals.

This lecture was given at the Transmedia Arts seminar of the Mahindra Humanities Center of Harvard University. In this lecture, Joris Weijdom discusses critical concepts, such as embodiment, presence, and immersion, that concern mixed-reality design in the performing arts. He introduces examples from his own practice and from interdisciplinary projects of other artists.

About the research
Multiple research areas now support the idea that embodiment is an underpinning of cognition, suggesting new discovery and learning approaches through full-body engagement with the virtual environment. Furthermore, improvisation and immediate reflection on the experience itself, common creative strategies in artist training and practice, are central when inventing something new. In this research, a new embodied design method, entitled Performative prototyping, has been developed to enable interdisciplinary collective design processes in CMREs and to offer a vocabulary of multiple perspectives for reflecting on their outcomes.

Studies also find that engineering education values creativity in design processes but often disregards the potential of full-body improvisation in generating and refining ideas. Conversely, artists lack the technical know-how to utilize mixed-reality technologies in their design process.
This know-how from multiple disciplines is thus combined and explored in this research, connecting concepts and discourse from human-computer interaction and media and performance studies. This research is a collaboration of the University of Twente, Utrecht University, and HKU University of the Arts Utrecht, and is partly financed by the Dutch Research Council (NWO).

Mixed-reality experiences merge real and virtual environments in which physical and digital spaces, objects, and actors co-exist and interact in real time. Collaborative Mixed-Reality Environments, or CMREs, enable creative design and learning processes through full-body interaction with spatial manifestations of mediated ideas and concepts, such as live-puppeteered or automated real-time computer-generated content. They employ large-scale projection mapping techniques, motion capture, augmented- and virtual-reality technologies, and networked real-time 3D environments in various interconnected configurations.

This keynote was given at the IETM Plenary meeting in Amsterdam for more than 500 theatre and performing arts professionals. It addresses the following questions in a roller coaster ride of thought-provoking ideas and examples from the world of technology, media, and theatre: What do current developments like Mixed Reality, Transmedia, and the Internet of Things mean for telling stories and creating theatrical experiences? How do we design performances on multiple "stages" and relate to our audiences when they become co-creators?

Contact: joris.weijdom@hku.nl / LinkedIn profile

This research is part of the professorship Performative Processes.