Camera trap technology has galvanized the study of predator-prey ecology in wild animal communities by expanding the scale and diversity of predator-prey interactions that can be analyzed. While observational data from systematic camera arrays have informed inferences on the spatiotemporal outcomes of predator-prey interactions, the capacity for observational studies to identify mechanistic drivers of species interactions is limited. Experimental study designs that utilize camera traps uniquely allow for testing hypothesized mechanisms that drive predator and prey behavior, incorporating environmental realism not possible in the lab while benefiting from the distinct capacity of camera traps to generate large data sets from multiple species with minimal observer interference. However, such pairings of camera traps with experimental methods remain underutilized. We review recent advances in the experimental application of camera traps to investigate fundamental mechanisms underlying predator-prey ecology and present a conceptual guide for designing experimental camera trap studies. Only 9% of camera trap studies on predator-prey ecology in our review mention experimental methods, but the application of experimental approaches is increasing. To illustrate the utility of camera trap-based experiments using a case study, we propose a study design that integrates observational and experimental techniques to test a perennial question in predator-prey ecology: how prey balance foraging and safety, as formalized by the risk allocation hypothesis. We discuss applications of camera trap-based experiments to evaluate the diversity of anthropogenic influences on wildlife communities globally. Finally, we review challenges to conducting experimental camera trap studies. 
Experimental camera trap studies have already begun to play an important role in understanding the predator-prey ecology of free-living animals, and such methods will become increasingly critical to quantifying drivers of community interactions in a rapidly changing world. We recommend increased application of experimental methods in the study of predator and prey responses to humans, synanthropic and invasive species, and other anthropogenic disturbances.
This paper describes work done by a group of I3 students at Philips CFT in Eindhoven, the Netherlands. I3 is an initiative of Fontys University of Professional Education, also located in Eindhoven. The work focuses on the use of computer vision in motion control. Experiments are conducted with several techniques for object recognition and tracking, and with guiding robot movement by means of computer vision. These experiments involve detection of coloured objects, object detection based on specific features, template matching with automatically generated templates, and interaction of a robot with a physical object viewed by a camera mounted on the robot.
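The coloured-object detection mentioned above can be illustrated with simple per-pixel colour thresholding. This is only a minimal sketch of the general idea, not the students' actual implementation; the synthetic image, target colour, and tolerance value are invented for the example:

```python
import numpy as np

def detect_coloured_object(image, target, tol=30):
    """Return the bounding box (rmin, rmax, cmin, cmax) of pixels whose
    RGB values lie within `tol` of `target`, or None if nothing matches."""
    mask = np.all(np.abs(image.astype(int) - np.asarray(target)) <= tol, axis=-1)
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return (int(rows.min()), int(rows.max()), int(cols.min()), int(cols.max()))

# Synthetic 100x100 image: grey background with a red square at rows/cols 40..59.
img = np.full((100, 100, 3), 128, dtype=np.uint8)
img[40:60, 40:60] = (220, 30, 30)

box = detect_coloured_object(img, target=(220, 30, 30))
print(box)  # (40, 59, 40, 59)
```

In practice a real system would work in a colour space less sensitive to lighting (e.g. HSV) and clean the mask with morphological filtering, but the thresholding principle is the same.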
In this paper we propose a head detection method using range data from a stereo camera. The method is based on a technique that was introduced in the domain of voxel data. For application to stereo cameras, the technique is extended (1) to be applicable to stereo data, and (2) to be robust to noise and variation in environmental settings. The method consists of foreground selection, head detection, and blob separation, and, to recover from misdetections, incorporates people tracking. It is tested in experiments with actual stereo data gathered from three distinct real-life scenarios. Experimental results show that the proposed method performs well in terms of both precision and recall, including in highly crowded situations. From our results, we may conclude that the proposed method provides a strong basis for head detection in applications that utilise stereo cameras.
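The foreground-selection and blob-separation steps can be illustrated on synthetic range data. The sketch below assumes a static background depth model and a simple blob-size filter for noise; it is a rough stand-in for the idea, not the paper's actual method:

```python
import numpy as np
from collections import deque

def foreground_mask(depth, background, tol=0.2):
    """Pixels more than `tol` metres closer to the camera than the
    background model are treated as foreground."""
    return (background - depth) > tol

def label_blobs(mask, min_size=20):
    """4-connected components of the foreground mask; blobs smaller than
    `min_size` pixels are dropped as noise. Returns (row, col) centroids."""
    labels = np.zeros(mask.shape, dtype=int)
    centroids = []
    next_label = 0
    for r, c in zip(*np.nonzero(mask)):
        if labels[r, c]:
            continue
        next_label += 1
        labels[r, c] = next_label
        queue, pixels = deque([(r, c)]), []
        while queue:
            y, x = queue.popleft()
            pixels.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = next_label
                    queue.append((ny, nx))
        if len(pixels) >= min_size:
            centroids.append(tuple(np.mean(pixels, axis=0)))
    return centroids

# Synthetic scene: flat background at 5 m with two "people" at 3 m.
background = np.full((60, 80), 5.0)
depth = background.copy()
depth[10:25, 10:20] = 3.0   # person 1
depth[30:50, 50:65] = 3.0   # person 2
depth[5, 70] = 3.0          # single-pixel noise, filtered out by min_size

heads = label_blobs(foreground_mask(depth, background))
print(len(heads))  # 2
```

Separating the head region from each person blob, and tracking blobs over time, would build on top of this kind of component analysis.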
Receiving the first “Rijbewijs” (driving licence) is always an exciting moment for any teenager, but it also comes with considerable risks. In the Netherlands, the fatality rate of young novice drivers is five times higher than that of drivers between the ages of 30 and 59 years. These risks stem mainly from age-related factors and a lack of experience, which manifest in inadequate higher-order skills required for hazard perception and successful intervention in response to risks on the road. Although risk assessment and driving attitude are included in the drivers’ training and examination process, accident statistics show that this has only limited influence on the development of factors such as attitudes, motivations, lifestyles, self-assessment and risk acceptance that play a significant role in post-licensing driving. This negatively impacts traffic safety. “How could novice drivers receive critical feedback on their driving behaviour and traffic safety?” is, therefore, an important question. Due to major advancements in domains such as ICT, sensors, big data, and Artificial Intelligence (AI), in-vehicle data is being extensively used for monitoring driver behaviour, driving style identification and driver modelling. However, the use of such techniques in pre-licence driver training and assessment has not been extensively explored. EIDETIC aims at developing a novel approach by fusing multiple data sources such as in-vehicle sensors/data (to trace the vehicle trajectory), eye-tracking glasses (to monitor viewing behaviour) and cameras (to monitor the surroundings) for providing quantifiable and understandable feedback to novice drivers. Furthermore, this new knowledge could also support driving instructors and examiners in ensuring safe drivers. This project will also generate the knowledge needed to serve as a foundation for the transition to training and assessment for drivers of automated vehicles.
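Fusing in-vehicle data, eye-tracking glasses and cameras requires aligning streams recorded at different rates onto a common timeline. The sketch below shows one simple approach, nearest-timestamp matching; the 10 Hz and 30 Hz rates and the gaze signal are purely hypothetical and not taken from the EIDETIC project:

```python
import numpy as np

def align_nearest(t_ref, t_other, values_other):
    """For each reference timestamp, pick the sample from the other stream
    whose timestamp is closest (simple nearest-neighbour alignment).
    Both timestamp arrays must be sorted in ascending order."""
    idx = np.searchsorted(t_other, t_ref)
    idx = np.clip(idx, 1, len(t_other) - 1)
    left = idx - 1
    choose_left = (t_ref - t_other[left]) <= (t_other[idx] - t_ref)
    nearest = np.where(choose_left, left, idx)
    return values_other[nearest]

# Vehicle speed logged at 10 Hz, gaze samples at 30 Hz (hypothetical rates).
t_speed = np.arange(0.0, 1.0, 0.1)        # 10 reference timestamps
t_gaze = np.arange(0.0, 1.0, 1 / 30)      # 30 gaze timestamps
gaze_x = np.linspace(-1.0, 1.0, len(t_gaze))

gaze_at_speed_ticks = align_nearest(t_speed, t_gaze, gaze_x)
print(gaze_at_speed_ticks.shape)  # (10,)
```

A production pipeline would additionally need clock synchronisation between devices and interpolation for gaps, but nearest-timestamp matching is a common first step when merging multi-rate sensor logs.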