Background: A key factor in successfully preventing falls is early identification of elderly people at high risk of falling. However, there is currently no easy-to-use pre-screening tool available; existing tools are either insufficiently discriminative, time-consuming, or costly. This pilot investigates the feasibility of developing an automatic gait-screening method using a low-cost optical sensor and machine-learning algorithms to automatically detect features and classify gait patterns. Method: Participants (n = 204, age 27 ± 7 yrs) performed a gait test under two conditions: control and with distorted depth perception (induced by wearing special goggles). Each test consisted of 4 × 3 m of walking at a comfortable speed. Full-body 3D kinematics were captured using an optical sensor (Microsoft Xbox One Kinect). Tests were conducted in a public space to establish relatively 'natural' conditions. Data were processed in MATLAB and common spatiotemporal variables were calculated per gait section. The 3D time-series data of the centre of mass for each section were used as input for a neural network that was trained to discriminate between the two conditions. Results: Wearing the goggles affected the gait pattern significantly: gait velocity and step length decreased, and lateral sway increased compared with the control condition. A 2-layer neural network could correctly classify 79% of the gait segments (i.e. with or without distorted vision). Conclusions: The results show that gait patterns of healthy people with distorted vision could be classified automatically with the proposed approach. Future work will focus on adapting this model for the identification of specific physical risk factors in elderly people.
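The classification step described above can be sketched as follows. This is a minimal, illustrative 2-layer network trained on synthetic stand-ins for centre-of-mass segments; the dimensions, data, and training settings are assumptions for demonstration, not the study's actual MATLAB pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for centre-of-mass trajectories: n segments per
# condition, t time steps x 3 axes, flattened into feature vectors.
n, t = 200, 30
X = np.vstack([rng.normal(0.0, 0.3, (n, t * 3)),   # "control" gait
               rng.normal(0.5, 0.3, (n, t * 3))])  # "distorted vision" gait
y = np.r_[np.zeros(n), np.ones(n)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2-layer network: tanh hidden layer, sigmoid output
W1 = rng.normal(0, 0.1, (t * 3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 1));     b2 = np.zeros(1)

lr = 0.1
for _ in range(300):
    h = np.tanh(X @ W1 + b1)                 # forward pass, hidden layer
    p = sigmoid(h @ W2 + b2).ravel()         # predicted P(distorted)
    g = (p - y)[:, None] / len(y)            # grad of binary cross-entropy
    dW2, db2 = h.T @ g, g.sum(0)
    d1 = (g @ W2.T) * (1 - h ** 2)           # backprop through tanh
    dW1, db1 = X.T @ d1, d1.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

accuracy = ((p > 0.5) == y).mean()           # training accuracy per segment
```

On real data the reported figure (79% of segments) was of course obtained on the actual Kinect recordings, not on synthetic input like this.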
In practice, faults in building installations are seldom noticed because automated systems to diagnose such faults are not in common use, despite the many methods that have been proposed: they are cumbersome to apply and do not match the way of thinking of HVAC engineers. Additionally, fault diagnosis and energy performance diagnosis are seldom combined, even though energy wastage is mostly a consequence of component, sensor, or control faults. In this paper, new advances in the 4S3F diagnosis framework for automated diagnosis of energy waste in HVAC systems are presented. The architecture of an HVAC system can be derived from a process and instrumentation diagram (P&ID), usually set up by HVAC designers. The paper demonstrates how all possible faults and symptoms can be extracted in a structured way from the P&ID and classified into 4 types of symptoms (deviations from balance equations, operational states, energy performances, or additional information) and 3 types of faults (component, control, and model faults). Symptoms and faults are related to each other through Diagnostic Bayesian Networks (DBNs), which work as an expert system. During operation of the HVAC system, data from the BMS are converted to symptoms, which are fed to the DBN. The DBN analyses the symptoms and determines the probability of faults. Generic indicators are proposed for the 4 types of symptoms. Standard DBN models for common components, controls, and models are developed, and it is demonstrated how to combine them to represent the complete HVAC system. Both the symptom and the fault identification parts are tested on historical BMS data of an ATES system including a heat pump, boiler, solar panels, and hydronic systems. The energy savings resulting from fault corrections are estimated and amount to 25%. Finally, the 4S3F method is extended to hard and soft sensor faults. Sensors are the core of any FDD system and any control system, so automated diagnosis of sensor faults is essential.
By considering hard sensors as components and soft sensors as models, they can be integrated into the 4S3F method.
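To make the symptom-to-fault inference concrete, here is a deliberately tiny, hypothetical example in the spirit of a diagnostic Bayesian network: a single component-fault node with two symptom nodes (a balance-equation deviation and an energy-performance deviation). The prior and likelihood values are invented for illustration; the paper's actual DBN models cover complete HVAC systems with many interrelated faults.

```python
# Prior probability that the component is faulty (illustrative assumption)
p_fault = 0.05

# P(symptom observed | fault absent), P(symptom observed | fault present)
p_s_given_f = {
    "balance_dev": (0.02, 0.90),   # deviation from a balance equation
    "energy_dev":  (0.10, 0.70),   # deviation from expected energy performance
}

def fault_posterior(symptoms):
    """Posterior P(fault | symptoms) by Bayes' rule, assuming the symptoms
    are conditionally independent given the fault state.
    symptoms: dict mapping symptom name -> observed value (0 or 1)."""
    like_fault = p_fault
    like_ok = 1.0 - p_fault
    for name, obs in symptoms.items():
        p_ok, p_faulty = p_s_given_f[name]
        like_fault *= p_faulty if obs else 1.0 - p_faulty
        like_ok *= p_ok if obs else 1.0 - p_ok
    return like_fault / (like_fault + like_ok)

# Both symptoms present: fault becomes highly probable
print(fault_posterior({"balance_dev": 1, "energy_dev": 1}))
# No symptoms: posterior drops below the prior
print(fault_posterior({"balance_dev": 0, "energy_dev": 0}))
```

This mirrors the operational flow described above: BMS data are reduced to binary symptoms, and the network turns observed symptoms into fault probabilities.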
BACKGROUND: Endotracheal suctioning causes discomfort, is associated with adverse effects, and is resource-demanding. An artificial secretion removal method, known as an automated cough, has been developed, which applies rapid, automated deflation and inflation of the endotracheal tube cuff during the inspiratory phase of mechanical ventilation. This method has been evaluated in the hands of researchers but not when used by attending nurses. The aim of this study was to explore the efficacy of the method over the course of patient management as part of routine care. METHODS: This prospective, longitudinal, interventional study recruited 28 subjects who were intubated and mechanically ventilated. For a maximum of 7 d, and on clinical need for endotracheal suctioning, the automated cough procedure was applied. The subjects were placed in a pressure-regulated ventilation mode with elevated inspiratory pressure, and automated cuff deflation and inflation were performed 3 times, repeated if deemed necessary. Success was determined by resolution of the clinical need for suctioning as judged by the attending nurse. Adverse effects were recorded. RESULTS: A total of 84 procedures were performed. In 54% of the subjects, the artificial cough procedure was successful on > 70% of occasions, and 56% of all procedures were considered successful. Ninety percent of all procedures were performed in subjects who were breathing spontaneously on pressure-support ventilation with peak inspiratory pressures of 20 cm H2O. Rates of adverse events were similar to those seen with endotracheal suctioning. CONCLUSIONS: This study evaluated the efficacy of an automated artificial cough procedure and illustrated its potential for reducing the need for endotracheal suctioning when applied by attending nurses in routine care.
Developing a framework that integrates advanced language models into the qualitative research process. Qualitative research, vital for understanding complex phenomena, is often limited by labour-intensive data collection, transcription, and analysis processes. This hinders scalability, accessibility, and efficiency in both academic and industry contexts. As a result, insights are often delayed or incomplete, impacting decision-making, policy development, and innovation. The lack of tools to enhance accuracy and reduce human error exacerbates these challenges, particularly for projects requiring large datasets or quick iterations. Addressing these inefficiencies through AI-driven solutions like AIDA can empower researchers, enhance outcomes, and make qualitative research more inclusive, impactful, and efficient. The AIDA project enhances qualitative research by integrating AI technologies to streamline transcription, coding, and analysis. This enables researchers to analyse larger datasets with greater efficiency and accuracy, providing faster and more comprehensive insights. By reducing manual effort and human error, AIDA empowers organisations to make informed decisions and implement evidence-based policies more effectively. Its scalability supports diverse societal and industry applications, from healthcare to market research, fostering innovation and addressing complex challenges. Ultimately, AIDA contributes to improving research quality, accessibility, and societal relevance, driving advancements across multiple sectors.
Various companies in diagnostic testing struggle with the same “valley of death” challenge: to further develop their sensing applications, they rely on the technological readiness of easy and reproducible read-out systems. Photonic chips can be very sensitive sensors and can be made application-specific when coated with a properly chosen bio-functionalized layer. The challenge lies in the optical coupling of the active components (light source and detector) to the (disposable) photonic sensor chip. For the technology to be commercially viable, the price of the disposable photonic sensor chip should be as low as possible. Coupling light from the source to the photonic sensor chip and back to the detectors requires a positioning accuracy of less than 1 micrometer, which is a tremendous challenge. In this research proposal, we investigate which of the six degrees of freedom (three translational and three rotational) are the most crucial when aligning photonic sensor chips with the external active components. Knowing these degrees of freedom and their respective ranges, we can develop and test an automated alignment tool that can realize photonic sensor chip alignment reproducibly and fully autonomously. The consortium, with expertise and contributions across the value chain of photonics interfacing, system engineering, and mechanical engineering, will investigate a two-step solution: a passive pre-alignment step (a mechanical stop determines the position), followed by an active alignment step (an algorithm moves the source to the optimal position with respect to the chip). The results will be integrated into a demonstrator that performs an automated procedure aligning a passive photonic chip with a terminal that contains the active components.
The demonstrator is successful if adequate optical coupling of the passive photonic chip with the external active components is realized fully automatically, without the need for operator intervention.
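The two-step idea can be illustrated with a toy simulation: passive pre-alignment leaves the source a few micrometres from the coupling optimum, and the active step searches for the maximum-coupling position. The Gaussian coupling model, step sizes, and starting offset below are assumptions for demonstration only, not measured values or the consortium's actual algorithm.

```python
import math

def coupling(x, y):
    """Toy coupling-efficiency model: a Gaussian centred on the optimum
    at (0, 0), with a ~1 um mode-field radius (hence the sub-micrometre
    positioning requirement). Units: micrometres."""
    return math.exp(-(x ** 2 + y ** 2) / 1.0 ** 2)

def active_align(x, y, step=0.5, tol=1e-3):
    """Greedy coordinate search: try a move along each axis, keep any
    improvement, and halve the step when no move helps, until the step
    falls below tol."""
    best = coupling(x, y)
    while step > tol:
        improved = False
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            c = coupling(x + dx, y + dy)
            if c > best:
                x, y, best = x + dx, y + dy, c
                improved = True
        if not improved:
            step /= 2  # refine the search around the current best position
    return x, y, best

# Passive pre-alignment: mechanical stop leaves a residual offset (assumed)
x, y, eff = active_align(2.0, -1.5)
```

A real implementation would optimize over up to six degrees of freedom and use measured detector power as the objective, but the feedback structure is the same.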
The maximum capacity of the road infrastructure is being reached due to the number of vehicles introduced on Dutch roads each day. One plausible solution to tackle congestion is the efficient and effective use of road infrastructure through modern technologies such as cooperative mobility. Cooperative mobility relies largely on big data potentially generated by the millions of vehicles travelling on the road. But how can this data be generated? Modern vehicles already contain a host of sensors required for their operation. These data are typically circulated within a vehicle via the CAN bus and can in principle be shared with the outside world, taking the privacy aspects of data sharing into account. The main problem, however, is the difficulty of interpreting this data: its configuration varies between manufacturers and vehicle models and has not been standardized by the manufacturers. Signals from the CAN bus can be reverse-engineered manually, but this process is extremely labour-intensive and time-consuming. In this project we investigate whether an intelligent tool or specific test procedures can be developed to extract CAN messages and their composition efficiently, irrespective of vehicle brand and type. This would lay the foundation required to generate big datasets from in-vehicle data efficiently.
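What "interpreting" a CAN message involves can be sketched briefly: reverse engineering amounts to inferring, per CAN ID, where each signal sits in the 8-byte payload and which linear scaling applies; once a candidate layout is found, decoding is mechanical. The CAN ID, byte positions, and scale factor below are invented examples, not a real manufacturer mapping.

```python
def decode_signal(payload, start, length, scale=1.0, offset=0.0,
                  byteorder="big"):
    """Extract an unsigned integer field from a CAN payload and apply the
    linear scaling (physical = raw * scale + offset) that a signal
    database such as a DBC file would specify."""
    raw = int.from_bytes(payload[start:start + length], byteorder)
    return raw * scale + offset

# Hypothetical inferred layout: CAN ID 0x1F4 carries vehicle speed in
# bytes 2-3, big-endian, at 0.01 km/h per bit.
payload = bytes([0x00, 0x00, 0x2E, 0xE0, 0x00, 0x00, 0x00, 0x00])
speed = decode_signal(payload, start=2, length=2, scale=0.01)  # km/h
```

The hard, labour-intensive part the project targets is discovering the `start`, `length`, `scale`, and `offset` values automatically for each vehicle brand and model; the decoding itself is trivial once they are known.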