Channel State Information (CSI) analysis for Predictive Maintenance using a Convolutional Neural Network (CNN).
Accurate localization enables autonomous robots to make effective decisions within their operating environment. Various methods have been developed to address this challenge, including traditional techniques, fiducial markers, and machine learning approaches. This work proposes a deep-learning solution employing Convolutional Neural Networks (CNNs) to tackle the localization problem, specifically in the context of the RobotAtFactory 4.0 competition. The proposed approach leverages transfer learning from the pre-trained VGG16 model to capitalize on its existing knowledge. To validate the effectiveness of the approach, a simulated scenario was employed. The experimental results demonstrated errors on the millimeter scale and response times on the order of milliseconds. Notably, the presented approach offers several advantages, including a model size that remains constant regardless of the number of training images and the elimination of the need to know the absolute positions of the fiducial markers.
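As an illustration of the transfer-learning setup described above, the sketch below frames localization as regressing a pose from camera images with a frozen VGG16 backbone. The output dimensionality, layer sizes, and training call are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: VGG16 transfer learning for pose regression (illustrative only).
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Load the VGG16 convolutional base pre-trained on ImageNet, without the classifier head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained features; fine-tuning could follow later

# Regression head; the abstract does not specify the output, so (x, y, heading) is assumed.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(3),  # assumed pose output: x, y, heading
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Training call (train_images and train_poses are hypothetical placeholders):
# model.fit(train_images, train_poses, validation_split=0.1, epochs=20)
```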
Agriculture and horticulture are essential for ensuring a safe food supply for the growing global population, but they also contribute significantly to climate change and biodiversity loss due to the extensive use of chemicals. Integrated pest management is currently employed to monitor and control pest populations, but it relies on labor-intensive methods with low accuracy. Automating crop monitoring using aerial robotics, such as flapping-wing drones, presents a viable solution. This study explores the application of the deep learning algorithms You Only Look Once (YOLO) and Faster Region-based Convolutional Neural Network (Faster R-CNN) for pest and disease detection in greenhouse environments. The research involved collecting and annotating a diverse dataset of images and videos of common pests and diseases affecting tomatoes, bell peppers, and cucumbers cultivated in Dutch greenhouses. Data augmentation and image resizing techniques were applied to enhance the dataset. The study compared the performance of YOLO and Faster R-CNN, with YOLO demonstrating superior performance. Testing on data acquired by flapping-wing drones showed that YOLO could detect powdery mildew with accuracy ranging from 0.29 to 0.61, despite the shaking movement induced by the actuation system of the drone's flapping wings.
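A minimal sketch of how such a detector could be run over drone footage, assuming the ultralytics YOLO API; the weight file, video name, and confidence threshold are hypothetical placeholders, not the study's actual configuration.

```python
# Illustrative sketch: running a YOLO detector over flapping-wing drone video frames.
from ultralytics import YOLO
import cv2

model = YOLO("pest_disease_yolo.pt")  # hypothetical weights fine-tuned on the greenhouse dataset

cap = cv2.VideoCapture("flapping_wing_flight.mp4")  # hypothetical drone footage
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # The confidence threshold is an assumption; the paper does not report the operating point.
    results = model.predict(frame, conf=0.25, verbose=False)
    for box in results[0].boxes:
        cls_name = model.names[int(box.cls)]
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, cls_name, (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
cap.release()
```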
The number of applications in which industrial robots share their working environment with people is increasing. Robots appropriate for such applications are equipped with safety systems according to ISO/TS 15066:2016 and are often referred to as collaborative robots (cobots). Due to the nature of human-robot collaboration, the working environment of cobots is subjected to unforeseeable modifications caused by people. Vision systems are often used to increase the adaptability of cobots, but they usually require knowledge of the objects to be manipulated. The application of machine learning techniques can increase flexibility by enabling the control system of a cobot to continuously learn and adapt to unexpected changes in the working environment. In this paper, we address this issue by investigating the use of Reinforcement Learning (RL) to control a cobot to perform pick-and-place tasks. We present the implementation of a control system that can adapt to changes in position and enables a cobot to grasp objects which were not part of the training. Our proposed system uses deep Q-learning to process color and depth images and generates an ε-greedy policy to define robot actions. The Q-values are estimated using Convolutional Neural Networks (CNNs) based on pre-trained models for feature extraction. To reduce training time, we implement a simulation environment to first train the RL agent, then we apply the resulting system on a real cobot. System performance is compared when using the pre-trained CNN models ResNeXt, DenseNet, MobileNet, and MNASNet. Simulation and experimental results validate the proposed approach and show that our system reaches a grasping success rate of 89.9% when manipulating a previously unseen object using the pre-trained CNN model MobileNet.
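The sketch below illustrates the core mechanism the abstract describes: a frozen pre-trained CNN (MobileNet here) estimating Q-values, with an ε-greedy rule selecting actions. The action-space size and the omission of depth fusion are simplifying assumptions, not the authors' implementation.

```python
# Sketch: pre-trained MobileNet features feeding a Q-value head, with epsilon-greedy action
# selection. Action-space size and the handling of depth images are assumptions.
import random
import torch
import torch.nn as nn
from torchvision import models

NUM_ACTIONS = 16  # assumed discretisation of grasp poses

class QNetwork(nn.Module):
    def __init__(self, num_actions=NUM_ACTIONS):
        super().__init__()
        backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
        self.features = backbone.features           # frozen pre-trained feature extractor
        for p in self.features.parameters():
            p.requires_grad = False
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(1280, 256), nn.ReLU(),
            nn.Linear(256, num_actions),             # one Q-value per discrete action
        )

    def forward(self, rgb):                          # depth fusion omitted for brevity
        return self.head(self.features(rgb))

def select_action(q_net, state, epsilon):
    """Epsilon-greedy policy over the estimated Q-values."""
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        return int(q_net(state.unsqueeze(0)).argmax(dim=1))
```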
Terms like ‘big data’, ‘data science’, and ‘data visualisation’ have become buzzwords in recent years and are increasingly intertwined with journalism. Data visualisation may further blur the lines between science communication and graphic design. Our study is situated in these overlaps to compare the design of data visualisations in science news stories across four online news media platforms in South Africa and the United States. Our study contributes to an understanding of how well-considered data visualisations are tools for effective storytelling, and offers practical recommendations for using data visualisation in science communication efforts.
This study presents an automated method for detecting and measuring the apex head thickness of tomato plants, a critical phenotypic trait associated with plant health, fruit development, and yield forecasting. Because the apex is sensitive to physical contact, non-invasive monitoring is essential. This paper addresses the demand among Dutch growers for automated, contactless systems. Our approach integrates deep learning models (YOLO and Faster R-CNN) with RGB-D camera imaging to enable accurate, scalable, and non-invasive measurement in greenhouse environments. A dataset of 600 RGB-D images captured in a controlled greenhouse was fully preprocessed, annotated, and augmented for optimal training. Experimental results show that YOLOv8n achieved superior performance, with a precision of 91.2%, a recall of 86.7%, and an Intersection over Union (IoU) score of 89.4%. Other models, such as YOLOv9t, YOLOv10n, YOLOv11n, and Faster R-CNN, demonstrated lower precision scores of 83.6%, 74.6%, 75.4%, and 78%, respectively. Their IoU scores were also lower, indicating less reliable detection. This research establishes a robust, real-time method for precision agriculture through automated apex head thickness measurement.
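A conceptual sketch of one way such a contactless measurement could be assembled: YOLOv8 detects the apex head and the aligned depth image converts the box width to millimetres via the pinhole camera model. The weight file, focal length, and use of the median depth are assumptions; the paper's exact measurement procedure is not reproduced here.

```python
# Illustrative sketch: detect the apex head with YOLOv8, then convert the box width to
# millimetres using the aligned depth image and the camera's focal length (assumed pipeline).
from ultralytics import YOLO
import numpy as np

FX = 615.0  # assumed horizontal focal length of the RGB-D camera, in pixels

model = YOLO("apex_yolov8n.pt")  # hypothetical fine-tuned weights

def apex_thickness_mm(rgb_image, depth_image_mm):
    results = model.predict(rgb_image, conf=0.5, verbose=False)
    thicknesses = []
    for box in results[0].boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        # The median depth inside the box is more robust to dropouts than a single pixel.
        z = float(np.median(depth_image_mm[y1:y2, x1:x2]))
        width_px = x2 - x1
        thicknesses.append(width_px * z / FX)  # pinhole model: size = pixels * depth / f
    return thicknesses
```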
This paper introduces an automatic score detection model using object detection techniques. The performance of seven models belonging to two different architectural setups was compared. YOLOv8n, YOLOv8s, YOLOv8m, RetinaNet-50, and RetinaNet-101 are single-shot detectors, while Faster R-CNN-50 and Faster R-CNN-101 belong to the two-shot detector category. The dataset was manually captured at the shooting range and expanded by generating more versatile data using Python code. Before training, the images were resized to 640x640 and augmented using the Roboflow API. The trained models were then assessed on the test dataset, and their performance was compared using metrics such as mAP50, mAP50-90, precision, and recall. The results showed that the YOLOv8 models can detect multiple objects with good confidence scores.
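For orientation, the sketch below shows how one of the single-shot models could be trained and evaluated with the ultralytics API, which reports precision, recall, and mAP averaged over IoU thresholds 0.5-0.95. The dataset file, epoch count, and batch size are illustrative assumptions, not the paper's settings.

```python
# Sketch: training and evaluating one of the single-shot detectors with the ultralytics API.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                      # pre-trained checkpoint as starting point
model.train(data="shooting_targets.yaml",       # hypothetical Roboflow-exported dataset config
            imgsz=640, epochs=100, batch=16)

metrics = model.val()                           # evaluate on the held-out split
print("mAP50:    ", metrics.box.map50)
print("mAP50-95: ", metrics.box.map)            # COCO-style mAP averaged over IoU thresholds
print("precision:", metrics.box.mp, "recall:", metrics.box.mr)
```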
Article published in NVVR/MemoRad: Missed fractures can lead to unnecessary problems for patients, with various complications and additional costs as possible consequences. What is the value of artificial intelligence (AI) applied to radiographs for detecting fractures? The application of AI systems in the field can support radiologists. Patricia Dinkgreve's literature review shows that AI can detect fractures with high accuracy.1 AI even performs better than medical specialists working with or without the aid of AI.
Chest imaging plays a pivotal role in screening and monitoring patients, and various predictive artificial intelligence (AI) models have been developed in support of this. However, little is known about the effect of decreasing the radiation dose, and thus image quality, on AI performance. This study aims to design a low-dose simulation and evaluate the effect of this simulation on the performance of CNNs in plain chest radiography. Seven pathology labels and corresponding images from the Medical Information Mart for Intensive Care (MIMIC) datasets were used to train AI models at two spatial resolutions. These 14 models were tested using the original images and 50% and 75% low-dose simulations. We compared the area under the receiver operating characteristic curve (AUROC) of the original images and both simulations using DeLong testing. The average absolute change in AUROC related to simulated dose reduction for both resolutions was <0.005, and none exceeded a change of 0.014. Of the 28 test sets, 6 were significantly different. An assessment of predictions, performed by splitting the data by gender and patient positioning, showed a similar trend. The effect of simulated dose reductions on CNN performance, although significant in 6 of 28 cases, has minimal clinical impact. The effect of patient positioning exceeds that of dose reduction.
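A conceptual sketch of one common way to simulate dose reduction, namely resampling pixel intensities with Poisson (quantum) noise, followed by an AUROC comparison. The photon-count scaling and the exact simulation used in the study are assumptions, and the DeLong test itself is only indicated in a comment.

```python
# Conceptual sketch: simulating a reduced-dose radiograph by injecting Poisson noise,
# then comparing classifier AUROC on original vs simulated images.
import numpy as np
from sklearn.metrics import roc_auc_score

def simulate_low_dose(image, dose_fraction=0.5, photons_full_dose=1e4):
    """Rescale pixel values to photon counts, resample with Poisson noise at the lower dose."""
    img = np.clip(image.astype(np.float64), 0.0, 1.0)
    counts = img * photons_full_dose * dose_fraction
    noisy = np.random.poisson(counts) / (photons_full_dose * dose_fraction)
    return np.clip(noisy, 0.0, 1.0).astype(np.float32)

# y_true: binary pathology labels; p_orig / p_low: model probabilities on the two image sets.
# auroc_orig = roc_auc_score(y_true, p_orig)
# auroc_low  = roc_auc_score(y_true, p_low)   # a DeLong test would then compare the two AUROCs
```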
In recent years, drones have increasingly supported First Responders (FRs) in monitoring incidents and providing additional information. However, analysing drone footage is time-intensive and cognitively demanding. In this research, we investigate the use of AI models for detecting humans in drone footage to aid FRs in tasks such as locating victims. Detecting small-scale objects, particularly humans viewed from high altitudes, poses a challenge for AI systems. We present the first steps in introducing and evaluating a series of YOLOv8 Convolutional Neural Networks (CNNs) for human detection in drone images. The models were fine-tuned on a drone image dataset created with the Dutch Fire Services and achieved a 53.1% F1-score, identifying 439 out of 825 humans in the test dataset. These preliminary findings, validated by an incident commander, highlight the promising utility of these models. Ongoing efforts aim to further refine the models and explore additional technologies.
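As a sketch of how such detections might be scored, the snippet below matches predicted boxes to ground-truth humans by IoU and computes the F1-score from the resulting counts; the IoU threshold and greedy matching scheme are assumptions rather than the authors' exact evaluation protocol.

```python
# Sketch: scoring human detections against ground truth by IoU matching and reporting F1.
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter + 1e-9)

def f1_score(detections, ground_truth, iou_thr=0.5):
    """Greedy one-to-one matching of detections to ground-truth boxes, then F1 from counts."""
    matched, tp = set(), 0
    for det in detections:
        best = max(range(len(ground_truth)),
                   key=lambda i: iou(det, ground_truth[i]), default=None)
        if best is not None and best not in matched and iou(det, ground_truth[best]) >= iou_thr:
            matched.add(best)
            tp += 1
    fp = len(detections) - tp          # detections with no matching human
    fn = len(ground_truth) - tp        # humans that were missed
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    return 2 * precision * recall / (precision + recall + 1e-9)
```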