The aim of the present study was to investigate whether anterior cruciate ligament (ACL) injury risk factors identified in the laboratory would be reflected in at-risk patterns in football-specific field data. Twenty-four female footballers (14.9 ± 0.9 years) performed unanticipated cutting maneuvers in a laboratory setting and on the football pitch during football-specific exercises (F-EX) and games (F-GAME). Knee joint moments were collected in the laboratory and grouped using hierarchical agglomerative clustering. The clusters were then used to investigate the kinematics collected on the field through wearable sensors. Three clusters emerged: Cluster 1 presented the lowest knee moments; Cluster 2 presented high knee extension but low knee abduction and rotation moments; Cluster 3 presented the highest knee abduction, extension, and external rotation moments. In F-EX, greater knee abduction angles were found in Clusters 2 and 3 compared to Cluster 1 (p = 0.007). Cluster 2 showed the lowest knee and hip flexion angles (p < 0.013). Cluster 3 showed the greatest hip external rotation angles (p = 0.006). In F-GAME, Cluster 3 presented the greatest knee external rotation and lowest knee flexion angles (p = 0.003). Clinically relevant differences associated with ACL injury identified in the laboratory were only partly reflected in at-risk patterns when cutting on the field: on the field, low-risk players exhibited kinematic patterns similar to those of high-risk players. Therefore, in-lab injury risk screening may lack ecological validity.
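The clustering step described above can be illustrated with a minimal sketch, assuming players are described by peak knee extension, abduction, and external rotation moments; the actual features, scaling, and linkage criterion are not specified in the abstract and are assumptions made here for illustration only.

```python
# Minimal sketch: hierarchical agglomerative clustering of knee joint moments.
# Feature choice (peak moments), z-scoring, Ward linkage, and k=3 are assumptions;
# the study's actual preprocessing and linkage may differ.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

# Hypothetical matrix: one row per player, columns = peak knee extension,
# abduction, and external rotation moments (Nm/kg).
knee_moments = np.array([
    [2.1, 0.4, 0.2],
    [3.0, 0.5, 0.3],
    [2.8, 1.1, 0.6],
    [1.5, 0.3, 0.1],
])

Z = linkage(zscore(knee_moments, axis=0), method="ward")
clusters = fcluster(Z, t=3, criterion="maxclust")  # three clusters, as in the study
print(clusters)
```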
We present a method for measuring gait velocity using data from an existing ambient sensor network. Gait velocity is an important predictor of fall risk and functional health. In contrast to other approaches that use specific sensors or sensor configurations, our method imposes no constraints on the elderly. We studied different probabilistic models for describing the sensor patterns. Experiments were carried out on 15 months of data, including repeated assessments by an occupational therapist. We show that the measured gait velocities correlate with these assessments.
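As a minimal sketch of the underlying idea, gait velocity can be derived from the firing times of two ambient motion sensors mounted a known distance apart along a walking path. The sensor layout, spacing, and the probabilistic models mentioned above are not detailed in the abstract; the values below are hypothetical and illustrate only the basic velocity computation.

```python
# Illustrative sketch: estimate gait velocity from the firing times of two
# ambient motion sensors a known distance apart. Spacing and timestamps are
# hypothetical placeholders.
from datetime import datetime

SENSOR_SPACING_M = 3.0  # assumed distance between the two sensors

t_first = datetime(2023, 5, 1, 9, 14, 2)   # first sensor fires
t_second = datetime(2023, 5, 1, 9, 14, 5)  # second sensor fires

elapsed_s = (t_second - t_first).total_seconds()
gait_velocity = SENSOR_SPACING_M / elapsed_s  # metres per second
print(f"Estimated gait velocity: {gait_velocity:.2f} m/s")
```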
Drones have seen enormous growth in recent years in terms of the relevant technologies and applications, such as smart agriculture, transportation, inspection, logistics, surveillance, and interaction. Therefore, commercial solutions for deploying drones in different working places have become a crucial demand for companies. Warehouses are one of the most promising industrial domains in which to utilize drones to automate different operations, such as inventory scanning, goods transportation to the delivery lines, and area monitoring on demand. On the other hand, deploying drones (or even mobile robots) in such a challenging environment requires accurate state estimation in terms of position and orientation to allow autonomous navigation. This is because GPS signals are not available in warehouses, due to obstruction in these closed-sky areas and signal deflection by structures. Vision-based positioning systems are the most promising techniques for achieving reliable position estimation in indoor environments, because they use low-cost sensors (cameras), exploit dense environmental features, and can operate both indoors and outdoors. Therefore, this proposal aims to address a crucial question for industrial applications, together with our industrial partners, by exploring limitations and developing solutions towards robust state estimation of drones in challenging environments such as warehouses and greenhouses. The results of this project will be used as the baseline for developing other navigation technologies towards fully autonomous deployment of drones, such as mapping, localization, docking, and maneuvering, to safely deploy drones in GPS-denied areas.
Various companies in diagnostic testing struggle with the same “valley of death” challenge. To further develop their sensing applications, they rely on the technological readiness of easy and reproducible read-out systems. Photonic chips can be very sensitive sensors and can be made application-specific when coated with a properly chosen bio-functionalized layer. Here the challenge lies in the optical coupling of the active components (light source and detector) to the (disposable) photonic sensor chip. For the technology to be commercially viable, the price of the disposable photonic sensor chip should be as low as possible. Coupling light from the source to the photonic sensor chip and back to the detectors requires a positioning accuracy of less than 1 micrometer, which is a tremendous challenge. In this research proposal, we want to investigate which of the six degrees of freedom (three translational and three rotational) are the most crucial when aligning photonic sensor chips with the external active components. Knowing these degrees of freedom and their respective ranges, we can develop and test an automated alignment tool that can realize photonic sensor chip alignment reproducibly and fully autonomously. The consortium, with expertise and contributions across the value chain of photonics interfacing and system and mechanical engineering, will investigate a two-step solution. This solution comprises a passive pre-alignment step (a mechanical stop determines the position), followed by an active alignment step (an algorithm moves the source to the optimal position with respect to the chip). The results will be integrated into a demonstrator that performs an automated procedure to align a passive photonic chip with a terminal containing the active components. The demonstrator is successful if adequate optical coupling of the passive photonic chip with the external active components is realized fully automatically, without the need for operator intervention.
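The active alignment step could, for example, take the form of a simple coordinate search that nudges the source laterally and keeps moves that increase the detected optical power. The sketch below is only an assumption of what such an algorithm might look like; the actual algorithm, axes, step sizes, and the read_power()/move_stage() interfaces are hypothetical and not specified in the proposal.

```python
# Simplified sketch of an active alignment loop: a coordinate hill-climb over two
# lateral axes that keeps a move only if the detector signal improves, then halves
# the step size once no axis improves. All names and parameters are illustrative.
def active_align(read_power, move_stage, step_um=0.5, min_step_um=0.05):
    position = [0.0, 0.0]           # lateral offsets (x, y) in micrometres
    best = read_power()
    while step_um >= min_step_um:
        improved = False
        for axis in (0, 1):
            for direction in (+1, -1):
                position[axis] += direction * step_um
                move_stage(position)
                power = read_power()
                if power > best:    # keep the move
                    best = power
                    improved = True
                else:               # undo the move
                    position[axis] -= direction * step_um
                    move_stage(position)
        if not improved:
            step_um /= 2            # refine the search around the current optimum
    return position, best
```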
The maximum capacity of the road infrastructure is being reached due to the number of vehicles introduced on Dutch roads each day. One of the plausible solutions to tackle congestion is the efficient and effective use of the road infrastructure using modern technologies such as cooperative mobility. Cooperative mobility relies heavily on big data, potentially generated by millions of vehicles travelling on the road. But how can this data be generated? Modern vehicles already contain a host of sensors that are required for their operation. This data is typically circulated within an automobile via the CAN bus and can, in principle, be shared with the outside world, provided the privacy aspects of data sharing are taken into account. The main problem, however, is the difficulty of interpreting this data, mainly because the configuration of this data varies between manufacturers and vehicle models and has not been standardized by the manufacturers. Signals from the CAN bus could be manually reverse engineered, but this process is extremely labour-intensive and time-consuming. In this project, we investigate whether an intelligent tool or specific test procedures could be developed to extract CAN messages and their composition efficiently, irrespective of vehicle brand and type. This would lay the foundations required to generate big datasets from in-vehicle data efficiently.
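To make the interpretation problem concrete, the sketch below decodes a single physical signal from a raw CAN frame given its bit position, length, scale, and offset. These parameters are hypothetical and differ per manufacturer and vehicle model, which is exactly the variability the proposed tool or test procedures would have to resolve automatically.

```python
# Illustrative sketch: decoding one signal from a raw 8-byte CAN frame.
# The bit position, length, scale, and offset are hypothetical examples; in
# practice they are proprietary and vary between brands and models.
def decode_signal(data: bytes, start_bit: int, length: int,
                  scale: float, offset: float) -> float:
    raw = int.from_bytes(data, byteorder="little")
    value = (raw >> start_bit) & ((1 << length) - 1)
    return value * scale + offset

# Hypothetical frame carrying vehicle speed in bits 16..31 at 0.01 km/h per bit.
frame = bytes([0x00, 0x00, 0x10, 0x27, 0x00, 0x00, 0x00, 0x00])
speed_kmh = decode_signal(frame, start_bit=16, length=16, scale=0.01, offset=0.0)
print(speed_kmh)  # 100.0
```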