BACKGROUND: Endotracheal suctioning causes discomfort, is associated with adverse effects, and is resource-demanding. An artificial secretion removal method, known as an automated cough, has been developed, which applies rapid, automated deflation and inflation of the endotracheal tube cuff during the inspiratory phase of mechanical ventilation. This method has been evaluated in the hands of researchers but not when used by attending nurses. The aim of this study was to explore the efficacy of the method over the course of patient management as part of routine care. METHODS: This prospective, longitudinal, interventional study recruited 28 subjects who were intubated and mechanically ventilated. For a maximum of 7 d and on clinical need for endotracheal suctioning, the automated cough procedure was applied. The subjects were placed in a pressure-regulated ventilation mode with elevated inspiratory pressure, and automated cuff deflation and inflation were performed 3 times, with this repeated if deemed necessary. Success was determined by resolution of the clinical need for suctioning as judged by the attending nurse. Adverse effects were recorded. RESULTS: A total of 84 procedures were performed. In 54% of the subjects, the automated cough procedure was successful on > 70% of occasions, with 56% of all procedures considered successful. Ninety percent of all the procedures were performed in subjects who were spontaneously breathing and on pressure-support ventilation with peak inspiratory pressures of 20 cm H2O. Rates of adverse events were similar to those seen with endotracheal suctioning. CONCLUSIONS: This study solely evaluated the efficacy of an automated cough procedure, which illustrated its potential for reducing the need for endotracheal suctioning when applied by attending nurses in routine care.
With ageing, there comes a point when people are no longer able to live independently in their own homes. With an ever-increasing elderly population, this constitutes a significant and growing burden on health care expenses. The need for more cost-effective solutions is evident. Research by H. van der Kloet (Hanze UAS) suggests that there is one main reason why people consider moving to an elderly home earlier than they actually need to: safety. Safety has many aspects: self-reliance, self-confidence, indoor security, and social security. With the elderly population becoming more technically aware, there is an opportunity to use technology to enable a longer independent life while maintaining or even enhancing quality of life, and thus to curb rising health care expenses. With this in mind, a Home Automated Living Platform (H.A.L.P.) was developed.
We developed an application which allows learners to construct qualitative representations of dynamic systems to aid them in learning subject content knowledge and system thinking skills simultaneously. Within this application, we implemented a lightweight support function which automatically generates help from a norm-representation to aid learners as they construct these qualitative representations. This support can be expected to improve learning. With this function, it is not necessary to define in advance the possible errors that learners may make and the corresponding feedback. Also, no data from (previous) learners is required. Such a lightweight support function is ideal for situations where lessons are designed for a wide variety of topics for small groups of learners. Here, we report on the use and impact of this support function in two lessons: Star Formation and Neolithic Age. A total of 63 ninth-grade learners from secondary school participated. The study used a pre-test/intervention/post-test design with two conditions (no support vs. support) for both lessons. Learners with access to the support created better representations, learned more subject content knowledge, and improved their system thinking skills. Learners used the support throughout the lessons, more often than they used support from the teacher. We also found no evidence for misuse, i.e., 'gaming the system', of the support function.
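The support function described above can be illustrated as a comparison between the learner's representation and the norm-representation. Below is a minimal sketch, assuming representations are encoded as sets of (source, relation, target) triples; the function name, the triple encoding, and the hint wording are illustrative assumptions, not the application's actual data model.

```python
# Sketch of norm-based support: the learner's qualitative representation
# and the norm-representation are modelled as sets of
# (source, relation, target) triples. All names here are illustrative
# assumptions, not the application's actual data model.

def generate_hint(learner_model, norm_model):
    """Return one hint based on the first discrepancy found, or None.

    No catalogue of anticipated errors is needed: any deviation from
    the norm-representation yields a hint automatically.
    """
    spurious = learner_model - norm_model   # learner elements not in the norm
    missing = norm_model - learner_model    # norm elements the learner lacks
    if spurious:
        src, rel, tgt = sorted(spurious)[0]
        return f"Reconsider the '{rel}' relation between {src} and {tgt}."
    if missing:
        src, _, _ = sorted(missing)[0]
        # Hint at the missing ingredient without giving the full answer
        return f"Something is still missing involving {src}."
    return None

# Toy example loosely inspired by the Star Formation lesson
norm = {("cloud", "I+", "density"), ("density", "I+", "gravity")}
learner = {("cloud", "I+", "density"), ("gravity", "I+", "density")}
print(generate_hint(learner, norm))
```

Because hints are derived on the fly from the norm-representation, the same mechanism works unchanged for any topic a teacher models, which is what makes this style of support lightweight.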
Developing a framework that integrates Advanced Language Models into the qualitative research process. Qualitative research, vital for understanding complex phenomena, is often limited by labour-intensive data collection, transcription, and analysis processes. This hinders scalability, accessibility, and efficiency in both academic and industry contexts. As a result, insights are often delayed or incomplete, impacting decision-making, policy development, and innovation. The lack of tools to enhance accuracy and reduce human error exacerbates these challenges, particularly for projects requiring large datasets or quick iterations. Addressing these inefficiencies through AI-driven solutions like AIDA can empower researchers, enhance outcomes, and make qualitative research more inclusive, impactful, and efficient. The AIDA project enhances qualitative research by integrating AI technologies to streamline transcription, coding, and analysis processes. This innovation enables researchers to analyse larger datasets with greater efficiency and accuracy, providing faster and more comprehensive insights. By reducing manual effort and human error, AIDA empowers organisations to make informed decisions and implement evidence-based policies more effectively. Its scalability supports diverse societal and industry applications, from healthcare to market research, fostering innovation and addressing complex challenges. Ultimately, AIDA contributes to improving research quality, accessibility, and societal relevance, driving advancements across multiple sectors.
Various companies in diagnostic testing struggle with the same "valley of death" challenge. To further develop their sensing applications, they rely on the technological readiness of easy and reproducible read-out systems. Photonic chips can be very sensitive sensors and can be made application-specific when coated with a properly chosen bio-functionalized layer. Here the challenge lies in the optical coupling of the active components (light source and detector) to the (disposable) photonic sensor chip. For the technology to be commercially viable, the price of the disposable photonic sensor chip should be as low as possible. The coupling of light from the source to the photonic sensor chip and back to the detectors requires a positioning accuracy of less than 1 micrometer, which is a tremendous challenge. In this research proposal, we want to investigate which of the six degrees of freedom (three translational and three rotational) are the most crucial when aligning photonic sensor chips with the external active components. Knowing these degrees of freedom and their respective ranges, we can develop and test an automated alignment tool which can realize photonic sensor chip alignment reproducibly and fully autonomously. The consortium, with expertise and contributions across the value chain of photonics interfacing, system engineering, and mechanical engineering, will investigate a two-step solution. This solution comprises a passive pre-alignment step (a mechanical stop determines the position), followed by an active alignment step (an algorithm moves the source to the optimal position with respect to the chip). The results will be integrated into a demonstrator that performs an automated procedure that aligns a passive photonic chip with a terminal that contains the active components.
The demonstrator is successful if adequate optical coupling of the passive photonic chip with the external active components is realized fully automatically, without the need for operator intervention.
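The active alignment step, in which an algorithm moves the source to the optimal position with respect to the chip, can be sketched as an iterative search that maximizes the detected optical power. The sketch below is an illustration under stated assumptions only: the Gaussian coupling profile, peak position, step sizes, and tolerance are made up for demonstration, restricted to two translational degrees of freedom, and do not represent the consortium's actual algorithm or hardware interface.

```python
import math

# Illustrative sketch of an active alignment step: a greedy coordinate
# search that moves the source to maximize detected optical power.
# The Gaussian coupling model below stands in for a real detector reading.

def coupled_power(x, y):
    """Simulated detector reading (a.u.): Gaussian coupling peak at
    (0.3, -0.2) micrometers, an arbitrary offset left by pre-alignment."""
    return math.exp(-((x - 0.3) ** 2 + (y + 0.2) ** 2) / 0.5)

def align(x, y, step=1.0, tol=1e-3):
    """Move along x and y in shrinking steps until no neighbouring
    position improves the coupled power at sub-tolerance resolution."""
    best = coupled_power(x, y)
    while step > tol:
        improved = False
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            p = coupled_power(x + dx, y + dy)
            if p > best:
                x, y, best = x + dx, y + dy, p
                improved = True
        if not improved:
            step /= 2  # refine the grid once no neighbour improves
    return x, y, best

# Passive pre-alignment (mechanical stop) leaves us a few micrometers off;
# the active step then homes in on the coupling optimum.
x, y, p = align(-2.0, 2.0)
```

In the two-step solution, the mechanical stop only needs to place the chip within the capture range of such a search; the active step then recovers the sub-micrometer accuracy that the coupling requires.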
The maximum capacity of the road infrastructure is being reached due to the number of vehicles that are being introduced on Dutch roads each day. One of the plausible solutions to tackle congestion could be efficient and effective use of road infrastructure using modern technologies such as cooperative mobility. Cooperative mobility relies heavily on big data that is generated potentially by millions of vehicles travelling on the road. But how can this data be generated? Modern vehicles already contain a host of sensors that are required for their operation. This data is typically circulated within an automobile via the CAN bus and can, in principle, be shared with the outside world, taking into account the privacy aspects of data sharing. The main problem, however, is the difficulty of interpreting this data. This is mainly because the configuration of this data varies between manufacturers and vehicle models and has not been standardized by the manufacturers. Signals from the CAN bus could be manually reverse engineered, but this process is extremely labour-intensive and time-consuming. In this project we investigate whether an intelligent tool or specific test procedures could be developed to extract CAN messages and their composition efficiently, irrespective of vehicle brand and type. This would lay the foundations that are required to generate big data sets from in-vehicle data efficiently.
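The interpretation problem can be made concrete with a small sketch: a physical quantity sits at a manufacturer-specific bit position in the raw frame payload, with a proprietary scale and offset, so the same 8 bytes are meaningless without that layout. Everything below, including the bit position, scale, and example frame, is invented for illustration and does not come from any real vehicle.

```python
# Illustrative sketch of why raw CAN data needs per-model interpretation:
# a signal is buried in the frame bytes at a manufacturer-specific bit
# position, scale, and offset. The layout used here is hypothetical.

def extract_signal(frame, start_bit, length, scale, offset):
    """Extract an unsigned little-endian signal from an 8-byte CAN frame
    and convert it to a physical value via scale and offset."""
    raw = int.from_bytes(frame, "little")          # whole payload as one integer
    field = (raw >> start_bit) & ((1 << length) - 1)
    return field * scale + offset

# Hypothetical layout: vehicle speed in bits 8..23, 0.01 km/h per bit.
frame = bytes([0x00, 0x10, 0x27, 0x00, 0x00, 0x00, 0x00, 0x00])
speed = extract_signal(frame, start_bit=8, length=16, scale=0.01, offset=0.0)
# Under the assumed layout, the 16-bit field is 0x2710 = 10000 -> 100.0 km/h
```

Reverse engineering amounts to recovering these (start_bit, length, scale, offset) parameters for every signal of interest, per brand and model; an intelligent tool or test procedure as investigated in this project would automate that recovery instead of doing it by hand.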