This paper describes work done by a group of I3 students at Philips CFT in Eindhoven, the Netherlands. I3 is an initiative of Fontys University of Professional Education, also located in Eindhoven. The work focuses on the use of computer vision in motion control. Experiments were conducted with several techniques for object recognition and tracking, and with guiding robot movement by means of computer vision. These experiments involve detection of coloured objects, object detection based on specific features, template matching with automatically generated templates, and interaction of a robot with a physical object viewed by a camera mounted on the robot.
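As a hedged illustration of the first of these techniques, the sketch below detects a coloured object by thresholding in HSV space with OpenCV. The abstract does not specify which algorithms or parameters were used; the HSV range, minimum contour area, and input image path here are illustrative assumptions.

```python
# Minimal colour-based object detection via HSV thresholding (OpenCV).
# Threshold values, area cutoff, and image path are illustrative assumptions.
import cv2
import numpy as np

def detect_coloured_object(image_bgr, lower_hsv, upper_hsv, min_area=100):
    """Return bounding boxes of regions whose colour falls in the given HSV range."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    # Morphological opening removes small speckles before contour extraction.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]

if __name__ == "__main__":
    frame = cv2.imread("frame.png")  # hypothetical input image
    # Example range for a red object; real values depend on lighting and camera.
    print(detect_coloured_object(frame, (0, 120, 70), (10, 255, 255)))
```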
In this paper we propose a head detection method using range data from a stereo camera. The method is based on a technique originally introduced for voxel data. For application to stereo cameras, the technique is extended (1) to operate on stereo range data, and (2) to be robust to noise and variation in environmental settings. The method consists of foreground selection, head detection, and blob separation, and, to improve results in case of misdetections, incorporates people tracking. It is tested in experiments with actual stereo data gathered from three distinct real-life scenarios. Experimental results show that the proposed method performs well in terms of both precision and recall. In addition, the method was shown to perform well in highly crowded situations. From our results, we may conclude that the proposed method provides a strong basis for head detection in applications that utilise stereo cameras.
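The abstract does not give implementation details for the pipeline, but a minimal sketch of the first and third stages, depth-based foreground selection followed by blob separation via connected-component labelling, could look as follows. The thresholds, image dimensions, and use of SciPy are assumptions for illustration; the head detection and tracking stages are not reproduced.

```python
# Sketch of depth-based foreground selection and blob separation on a
# stereo range image. Thresholds and the depth map source are assumptions;
# the paper's actual head detection and tracking stages are not shown.
import numpy as np
from scipy import ndimage

def foreground_blobs(depth_m, background_m, min_diff_m=0.3, min_pixels=200):
    """Label connected foreground regions that stand out from the background.

    depth_m      -- current depth map in metres (2-D array, 0 where invalid)
    background_m -- static background depth map in metres
    min_diff_m   -- pixel is foreground if it is this much closer than background
    min_pixels   -- discard blobs smaller than this (noise suppression)
    """
    valid = depth_m > 0
    foreground = valid & (background_m - depth_m > min_diff_m)
    labels, n = ndimage.label(foreground)
    sizes = ndimage.sum(foreground, labels, index=range(1, n + 1))
    # Zero out small blobs that are likely stereo-matching noise.
    for i, size in enumerate(sizes, start=1):
        if size < min_pixels:
            labels[labels == i] = 0
    return labels

# Usage with synthetic data standing in for real stereo depth maps:
bg = np.full((240, 320), 5.0)     # empty scene, 5 m to the back wall
frame = bg.copy()
frame[80:160, 100:140] = 1.8      # a person-sized region 1.8 m away
print(np.unique(foreground_blobs(frame, bg)))
```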
This study presents an automated method for detecting and measuring the apex head thickness of tomato plants, a critical phenotypic trait associated with plant health, fruit development, and yield forecasting. Because the apex is sensitive to physical contact, non-invasive monitoring is essential. This paper addresses the demand among Dutch growers for automated, contactless systems. Our approach integrates deep learning models (YOLO and Faster R-CNN) with RGB-D camera imaging to enable accurate, scalable, and non-invasive measurement in greenhouse environments. A dataset of 600 RGB-D images, captured in a controlled greenhouse, was fully preprocessed, annotated, and augmented for optimal training. Experimental results show that YOLOv8n achieved superior performance, with a precision of 91.2%, recall of 86.7%, and an Intersection over Union (IoU) score of 89.4%. Other models, namely YOLOv9t, YOLOv10n, YOLOv11n, and Faster R-CNN, demonstrated lower precision scores of 83.6%, 74.6%, 75.4%, and 78.0%, respectively. Their IoU scores were also lower, indicating less reliable detection. This research establishes a robust, real-time method for precision agriculture through automated apex head thickness measurement.
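To make the measurement step concrete, the following is a minimal sketch, not the authors' released code, of how a trained YOLOv8 detector (via the ultralytics package) could be combined with an aligned depth image to convert a detected apex head's pixel width into millimetres using the pinhole camera model. The weights file name, focal length, and depth source are hypothetical assumptions.

```python
# Hedged sketch: detect the apex head with a YOLO model and convert the
# detected box width to millimetres via the pinhole camera model and an
# aligned depth image. Weights file, focal length, and depth source are
# illustrative assumptions, not artefacts released by the study.
import numpy as np
from ultralytics import YOLO

FX_PIXELS = 615.0  # assumed RGB-D camera focal length in pixels

def apex_thickness_mm(rgb_image, depth_mm, weights="apex_yolov8n.pt"):
    """Return one thickness estimate (mm) per detected apex head."""
    model = YOLO(weights)                      # hypothetical trained weights
    result = model(rgb_image, verbose=False)[0]
    measurements = []
    for x1, y1, x2, y2 in result.boxes.xyxy.cpu().numpy():
        cx, cy = int((x1 + x2) / 2), int((y1 + y2) / 2)
        z = float(depth_mm[cy, cx])            # depth at the box centre
        if z <= 0:
            continue                           # skip invalid depth readings
        # Pinhole model: real width = pixel width * depth / focal length.
        measurements.append((x2 - x1) * z / FX_PIXELS)
    return measurements
```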
Receiving the first “Rijbewijs” (driving licence) is always an exciting moment for any teenager, but it also comes with considerable risks. In the Netherlands, the fatality rate of young novice drivers is five times higher than that of drivers between the ages of 30 and 59. These risks stem mainly from age-related factors and a lack of experience, which manifests itself in inadequate higher-order skills required for hazard perception and for successful interventions in response to risks on the road. Although risk assessment and driving attitude are included in driver training and the examination process, accident statistics show that these have only a limited influence on developmental factors such as attitudes, motivations, lifestyles, self-assessment and risk acceptance, which play a significant role in post-licensing driving. This negatively impacts traffic safety. “How could novice drivers receive critical feedback on their driving behaviour and traffic safety?” is, therefore, an important question. Due to major advancements in domains such as ICT, sensors, big data, and Artificial Intelligence (AI), in-vehicle data is being used extensively for monitoring driver behaviour, driving style identification and driver modelling. However, the use of such techniques in pre-license driver training and assessment has not been extensively explored. EIDETIC aims to develop a novel approach by fusing multiple data sources, such as in-vehicle sensors/data (to trace the vehicle trajectory), eye-tracking glasses (to monitor viewing behaviour) and cameras (to monitor the surroundings), to provide quantifiable and understandable feedback to novice drivers. Furthermore, this new knowledge could also support driving instructors and examiners in ensuring safe drivers. The project will also generate knowledge that can serve as a foundation for the transition to training and assessment of drivers of automated vehicles.
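As a hedged sketch of the kind of data fusion the project describes, the snippet below time-aligns vehicle telemetry with eye-tracking samples using a nearest-timestamp join in pandas and flags candidate risk events. The file names, column names, and thresholds are hypothetical; EIDETIC's actual data formats and fusion method are not specified in this abstract.

```python
# Hypothetical fusion step: align vehicle telemetry with eye-tracking
# samples by timestamp. File and column names are illustrative only.
import pandas as pd

vehicle = pd.read_csv("vehicle_log.csv", parse_dates=["timestamp"])  # speed, steering, GPS
gaze = pd.read_csv("gaze_log.csv", parse_dates=["timestamp"])        # gaze point, fixation flag

# Nearest-timestamp join, tolerating up to 50 ms of clock skew between sensors.
fused = pd.merge_asof(
    vehicle.sort_values("timestamp"),
    gaze.sort_values("timestamp"),
    on="timestamp",
    direction="nearest",
    tolerance=pd.Timedelta("50ms"),
)

# Flag moments where the driver looked away from the road while steering
# sharply: one example of a quantifiable event that could feed back to a
# novice driver or a driving instructor.
events = fused[(fused["steering_angle"].abs() > 30) & (fused["fixation_on_road"] == 0)]
print(events[["timestamp", "speed_kmh", "steering_angle"]].head())
```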