The estimation of the pose of a differential-drive mobile robot from noisy odometer, compass and beacon-distance measurements is studied. The estimation problem, a state estimation problem with unknown input, is reformulated into a state estimation problem with known input and a process-noise term. A heuristic sensor fusion algorithm solving this state estimation problem is proposed and compared with the extended Kalman filter and particle filter solutions in a simulation experiment. https://doi.org/10.4018/IJAIML.2020010101
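The reformulation described above can be illustrated with a minimal sketch: a scalar Kalman filter in which the unknown input is not modelled explicitly but absorbed into the process-noise variance. This is an assumption-laden toy, not the paper's algorithm; all names and values are hypothetical.

```python
import numpy as np

def kalman_step(x, P, z, Q, R):
    """One predict/update cycle for a scalar random-walk state.

    The unknown input (e.g. wheel slip) is not modelled explicitly;
    its effect is absorbed into the process-noise variance Q, mirroring
    the reformulation described in the abstract.
    """
    # Predict: the state is assumed constant, uncertainty grows by Q.
    x_pred = x
    P_pred = P + Q
    # Update with a direct (noisy) position measurement z.
    K = P_pred / (P_pred + R)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Toy run: noisy measurements of a stationary pose component.
rng = np.random.default_rng(0)
true_x = 2.0
x, P = 0.0, 1.0
for _ in range(50):
    z = true_x + rng.normal(0.0, 0.3)
    x, P = kalman_step(x, P, z, Q=1e-4, R=0.09)
print(round(x, 2))
```

With repeated measurements the estimate settles near the true value while the variance shrinks toward a steady state determined by Q and R.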
DOCUMENT
To better control the growing process of horticultural plants, greenhouse growers need an automated way to efficiently and effectively find where diseases are spreading. The HiPerGreen project has researched the use of an autonomous quadcopter for this scouting. For the quadcopter to scout autonomously, accurate location data are needed. Several methods of obtaining location data have been investigated in prior research. In this research, a relative sensor based on optical flow is investigated as a means of stabilizing an absolute measurement based on trilateration. For the optical flow sensor, a novel block-matching algorithm was developed. Simulated testing showed that Kalman filter-based sensor fusion of both measurements reduced the standard deviation of the absolute measurement from 30 cm to less than 1 cm, while drift due to dead reckoning was reduced from over 36 cm to a maximum of 11 cm.
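The fusion idea can be sketched in one dimension: the low-noise relative optical-flow displacement drives the prediction (and would drift on its own), while the noisy absolute trilateration fix corrects it. A minimal sketch under assumed noise levels; all names and values are hypothetical, not the project's implementation.

```python
import numpy as np

def fuse(x, P, flow_dx, z_abs, Q, R):
    """Predict with a relative optical-flow displacement, then correct
    with an absolute trilateration fix (1-D Kalman filter sketch)."""
    x_pred = x + flow_dx           # dead-reckoning step
    P_pred = P + Q                 # flow noise accumulates as drift
    K = P_pred / (P_pred + R)      # weight given to the absolute fix
    x_new = x_pred + K * (z_abs - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

rng = np.random.default_rng(1)
true_x, x, P = 0.0, 0.0, 1.0
for _ in range(200):
    step = 0.05                              # true motion per frame (m)
    true_x += step
    flow = step + rng.normal(0.0, 0.002)     # low-noise relative sensor
    z = true_x + rng.normal(0.0, 0.30)       # 30 cm absolute noise
    x, P = fuse(x, P, flow, z, Q=4e-6, R=0.09)
print(round(abs(x - true_x), 3))
```

The fused estimate tracks the moving target with centimetre-level error, far below the 30 cm standard deviation of the absolute fixes alone.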
DOCUMENT
Twirre V2 is the evolution of an architecture for mini-UAV platforms that allows automated operation in both GPS-enabled and GPS-deprived applications. This second version separates mission logic, sensor data processing and high-level control, which results in reusable software components for multiple applications. The concept of a Local Positioning System (LPS) is introduced which, using sensor fusion, would aid or automate the flying process in the way GPS currently does. To this end, new sensors have been added to the architecture, and a generic sensor interface together with missions for landing and for following a line has been implemented. V2 introduces a modular software design, and new hardware has been coupled, showing the architecture's extensibility and adaptability.
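The benefit of a generic sensor interface is that high-level control can consume any sensor through one uniform call. A hypothetical sketch of that idea (class and field names are illustrative, not Twirre's actual API):

```python
from abc import ABC, abstractmethod

class Sensor(ABC):
    """Sketch of a generic sensor interface: every sensor, regardless
    of hardware, exposes a uniform read() returning named values."""

    @abstractmethod
    def read(self) -> dict:
        """Return the latest measurement as named values."""

class AltitudeSensor(Sensor):
    def __init__(self, altitude_m: float):
        self._altitude_m = altitude_m  # stub value standing in for hardware

    def read(self) -> dict:
        return {"altitude_m": self._altitude_m}

def poll(sensors):
    """High-level control consumes any sensor through the same call."""
    return {name: s.read() for name, s in sensors.items()}

readings = poll({"altimeter": AltitudeSensor(1.5)})
print(readings)  # → {'altimeter': {'altitude_m': 1.5}}
```

New sensors are added by subclassing, leaving mission logic and control code unchanged.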
DOCUMENT
Poster presentation on conference Alice and Eve 2020.
MULTIFILE
In wheelchair sports, the use of inertial measurement units (IMUs) has proven to be one of the most accessible ways of ambulatory measurement of wheelchair kinematics. A three-IMU configuration, with one IMU attached to the wheelchair frame and one on each wheel axle, has previously shown accurate results and is considered optimal for accuracy. Configurations with fewer sensors reduce costs and could enhance usability, but may be less accurate. The aim of this study was to quantify the decline in accuracy for measuring wheelchair kinematics with a stepwise sensor reduction. Ten differently skilled participants performed a series of wheelchair-sport-specific tests while their performance was simultaneously measured with IMUs and an optical motion capture system which served as reference. Subsequently, both a one-IMU and a two-IMU configuration were validated, and the accuracy of the two approaches was compared for linear and angular wheelchair velocity. Results revealed that the one-IMU approach showed a mean absolute error (MAE) of 0.10 m/s for absolute linear velocity and an MAE of 8.1°/s for wheelchair angular velocity when compared with the reference system. The two-IMU approach showed similar differences for absolute linear wheelchair velocity (MAE 0.10 m/s), and smaller differences for angular velocity (MAE 3.0°/s). Overall, a lower number of IMUs in the configuration resulted in a lower accuracy of wheelchair kinematics. Based on the results of this study, choices regarding the number of IMUs can be made depending on the aim, required accuracy and resources available.
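When wheel-axle IMUs are available, linear and angular chair velocity follow from standard differential-drive relations. A minimal sketch with assumed (hypothetical) wheel radius and track width; this is the textbook kinematics, not the study's processing pipeline:

```python
import math

WHEEL_RADIUS = 0.30   # m, hypothetical wheel radius
TRACK_WIDTH = 0.60    # m, assumed distance between the wheels

def wheelchair_kinematics(omega_left, omega_right):
    """Differential-drive relations: wheel angular rates (rad/s)
    measured at the axles give linear and angular chair velocity."""
    v_left = WHEEL_RADIUS * omega_left
    v_right = WHEEL_RADIUS * omega_right
    v_linear = 0.5 * (v_left + v_right)              # m/s
    omega_chair = (v_right - v_left) / TRACK_WIDTH   # rad/s
    return v_linear, omega_chair

v, w = wheelchair_kinematics(3.0, 4.0)   # rad/s per wheel
print(round(v, 2), round(math.degrees(w), 1))  # → 1.05 28.6
```

With fewer IMUs these quantities must instead be inferred from the frame sensor alone, which is where the accuracy loss quantified in the study arises.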
DOCUMENT
What are the key success factors for bringing research, education and entrepreneurship together in such a way that 'it clicks'? The challenge for the future of companies in the smart factory lies in data science: converting raw (sensor) data into meaningful information and knowledge with which products and services can be improved. Also includes the programme of the symposium held on the occasion of the inaugural lecture, 3 December 2015.
MULTIFILE
Several studies have suggested that precision livestock farming (PLF) is a useful tool for animal welfare management and assessment. Location, posture and movement of an individual are key elements in identifying the animal and recording its behaviour. Currently, multiple technologies are available for automated monitoring of the location of individual animals, ranging from Global Navigation Satellite Systems (GNSS) to ultra-wideband (UWB), RFID, wireless sensor networks (WSN) and even computer vision. These techniques and developments all have the potential to manage and assess animal welfare, but also have their constraints, such as range and accuracy. Combining sensors such as accelerometers with any location-determining technique into a sensor fusion system can give more detailed information on the individual cow, achieving an even more reliable and accurate indication of animal welfare. We conclude that location systems are a promising approach to determining animal welfare, especially when applied in conjunction with additional sensors, but additional research focused on the use of technology in animal welfare monitoring is needed.
MULTIFILE
In sports, inertial measurement units are often used to measure the orientation of human body segments. A Madgwick (MW) filter can be used to obtain accurate inertial measurement unit (IMU) orientation estimates. This filter combines two different orientation estimates by applying a correction of the (1) gyroscope-based estimate in the direction of the (2) earth frame-based estimate. However, in sports situations that are characterized by relatively large linear accelerations and/or close magnetic sources, such as wheelchair sports, obtaining accurate IMU orientation estimates is challenging. In these situations, applying the MW filter in the regular way, i.e., with the same magnitude of correction at all time frames, may lead to estimation errors. Therefore, in this study, the MW filter was extended with machine learning to distinguish instances in which a small correction magnitude is beneficial from instances in which a large correction magnitude is beneficial, to eventually arrive at accurate body segment orientations in IMU-challenging sports situations. A machine learning algorithm was trained to make this distinction based on raw IMU data. Experiments on wheelchair sports were performed to assess the validity of the extended MW filter, and to compare the extended MW filter with the original MW filter based on comparisons with a motion capture-based reference system. Results indicate that the extended MW filter performs better than the original MW filter in assessing instantaneous trunk inclination (7.6° vs. 11.7° root-mean-squared error, RMSE), especially during the dynamic, IMU-challenging situations with a moving athlete and wheelchair. Improvements of up to 45% RMSE were obtained for the extended MW filter compared with the original MW filter. To conclude, the machine learning-based extended MW filter has an acceptable accuracy and performs better than the original MW filter for the assessment of body segment orientation in IMU-challenging sports situations.
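The adaptive-gain idea can be sketched with a simpler complementary filter for inclination: the gyro propagates the angle, the accelerometer-based estimate corrects it, and a classifier chooses a small or large correction gain per sample. The gains, the classifier rule and all names are hypothetical stand-ins for the trained model, not the paper's quaternion-based implementation.

```python
import math

def adaptive_tilt_filter(gyro_rates, accel_samples, dt, classify):
    """Complementary-filter sketch of the adaptive-correction idea:
    gyro integration propagates inclination, the gravity-based
    estimate corrects it, and `classify` (a stand-in for the trained
    model) selects a small or large correction gain each sample."""
    theta = 0.0
    for g, (ax, az) in zip(gyro_rates, accel_samples):
        theta += g * dt                      # gyro-based propagation
        theta_acc = math.atan2(ax, az)       # gravity-based estimate
        beta = 0.2 if classify(g, ax, az) else 0.02
        theta += beta * (theta_acc - theta)  # weighted correction
    return theta

# Static case: no rotation, gravity along z -> inclination stays at 0.
est = adaptive_tilt_filter(
    gyro_rates=[0.0] * 100,
    accel_samples=[(0.0, 9.81)] * 100,
    dt=0.01,
    classify=lambda g, ax, az: abs(ax) < 0.5,  # "static" -> trust accel
)
print(round(est, 3))  # → 0.0
```

During high linear accelerations the gravity-based estimate is unreliable, so the classifier would select the small gain; during quasi-static phases the large gain re-anchors the gyro integration.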
DOCUMENT
In a multi-sensory environment supported by embedded computer technology, the system can capture and interpret what the users are doing and assist or collaborate with them in real time. Such an environment should be aware of the users' intentions, tasks and feelings, and allow people to interact with it in a natural way: by moving, pointing and gesturing. In this paper we propose an architecture for such a smart environment consisting of three modules.
DOCUMENT
This study presents an automated method for detecting and measuring the apex head thickness of tomato plants, a critical phenotypic trait associated with plant health, fruit development, and yield forecasting. Due to the apex's sensitivity to physical contact, non-invasive monitoring is essential. This paper addresses the demand for automated, contactless systems among Dutch growers. Our approach integrates deep learning models (YOLO and Faster R-CNN) with RGB-D camera imaging to enable accurate, scalable, and non-invasive measurement in greenhouse environments. A dataset of 600 RGB-D images captured in a controlled greenhouse was fully preprocessed, annotated, and augmented for optimal training. Experimental results show that YOLOv8n achieved superior performance with a precision of 91.2 %, recall of 86.7 %, and an Intersection over Union (IoU) score of 89.4 %. Other models, such as YOLOv9t, YOLOv10n, YOLOv11n, and Faster R-CNN, demonstrated lower precision scores of 83.6 %, 74.6 %, 75.4 %, and 78 %, respectively. Their IoU scores were also lower, indicating less reliable detection. This research establishes a robust, real-time method for precision agriculture through automated apex head thickness measurement.
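The IoU metric reported for these detectors is the standard overlap ratio between a predicted and a ground-truth bounding box. A minimal sketch for axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes (x1, y1, x2, y2):
    the overlap metric reported for the detection models."""
    # Coordinates of the intersection rectangle (empty if boxes miss).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # half-overlapping boxes → 0.3333333333333333
```

An IoU of 89.4 % thus means the predicted apex boxes almost entirely coincide with the annotated ones.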
DOCUMENT