The last decade has seen an increasing industrial demand for computerized visual inspection. Applications are rapidly becoming more complex, often with more demanding real-time constraints. However, from 2004 onwards the clock frequency of CPUs has not increased significantly. Computer vision applications need ever more processing power but are limited by the performance of sequential processor architectures. The only way to obtain more performance from commodity hardware, such as multi-core processors and graphics cards, is parallel programming. This article focuses on a practical question: how can the processing time of vision algorithms be reduced through parallelization in an economical way, while keeping them executable on multiple platforms?
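As a minimal sketch of the kind of parallelization the article refers to, the example below splits a simple image filter over horizontal tiles of a frame using Python's multiprocessing module; the filter, tile count, and frame size are illustrative assumptions, not the article's actual implementation.

```python
# Minimal sketch: data-parallel image filtering on a multi-core CPU.
# The specific filter and tiling strategy are illustrative assumptions.
import numpy as np
from multiprocessing import Pool

def blur_rows(tile: np.ndarray) -> np.ndarray:
    """Apply a simple 1-D box blur along each row of a tile."""
    kernel = np.ones(5) / 5.0
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, tile)

def parallel_filter(image: np.ndarray, workers: int = 4) -> np.ndarray:
    """Split the image into horizontal tiles and filter them in parallel."""
    tiles = np.array_split(image, workers, axis=0)
    with Pool(workers) as pool:
        return np.vstack(pool.map(blur_rows, tiles))

if __name__ == "__main__":
    frame = np.random.rand(1080, 1920).astype(np.float32)  # stand-in camera frame
    print(parallel_filter(frame).shape)
```

The same decomposition carries over to GPUs or other platforms, since each tile is processed independently of the others.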
DOCUMENT
This paper describes work done by a group of I3 students at Philips CFT in Eindhoven, the Netherlands. I3 is an initiative of Fontys University of Professional Education, also located in Eindhoven. The work focuses on the use of computer vision in motion control. Experiments were carried out with several techniques for object recognition and tracking, and with guiding robot movement by means of computer vision. These experiments involve detection of coloured objects, object detection based on specific features, template matching with automatically generated templates, and interaction of a robot with a physical object viewed by a camera mounted on the robot.
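As a rough illustration of the coloured-object detection mentioned above, the sketch below thresholds a camera frame in HSV space with OpenCV and returns the centroid of the largest matching blob; the colour range and camera index are assumptions, not the students' actual setup.

```python
# Minimal sketch of coloured-object detection with OpenCV.
# The HSV range (roughly red) and camera index are illustrative assumptions.
import cv2
import numpy as np

def find_coloured_object(frame_bgr: np.ndarray,
                         lower_hsv=(0, 120, 70),
                         upper_hsv=(10, 255, 255)):
    """Return the centroid (x, y) of the largest blob within the HSV range, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

cap = cv2.VideoCapture(0)  # assumed camera index
ok, frame = cap.read()
if ok:
    print(find_coloured_object(frame))
cap.release()
```

A centroid like this can serve directly as the setpoint for the visually guided robot movement described in the experiments.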
DOCUMENT
Purpose: To establish age-related, normal limits of monocular and binocular spatial vision under photopic and mesopic conditions. Methods: Photopic and mesopic visual acuity (VA) and contrast thresholds (CTs) were measured with both positive and negative contrast optotypes under binocular and monocular viewing conditions using the Acuity-Plus (AP) test. The experiments were carried out on participants (age range from 10 to 86 years), who met pre-established, normal sight criteria. Mean and ± 2.5σ limits were calculated within each 5-year subgroup. A biologically meaningful model was then fitted to predict mean values and upper and lower threshold limits for VA and CT as a function of age. The best-fit model parameters describe normal aging of spatial vision for each of the 16 experimental conditions investigated. Results: Out of the 382 participants recruited for this study, 285 participants passed the selection criteria for normal aging. Log transforms were applied to ensure approximate normal distributions. Outliers were also removed for each of the 16 stimulus conditions investigated based on the ±2.5σ limit criterion. VA, CTs and the overall variability were found to be age-invariant up to ~50 years in the photopic condition. A lower, age-invariant limit of ~30 years was more appropriate for the mesopic range with a gradual, but accelerating increase in both mean thresholds and intersubject variability above this age. Binocular thresholds were smaller and much less variable when compared to the thresholds measured in either eye. Results with negative contrast optotypes were significantly better than the corresponding results measured with positive contrast (p < 0.004). Conclusions: This project has established the expected age limits of spatial vision for monocular and binocular viewing under photopic and high mesopic lighting with both positive and negative contrast optotypes using a single test, which can be implemented either in the clinic or in an occupational setting.
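The abstract does not specify the fitted model, so the sketch below only illustrates the general procedure under stated assumptions: log-transformed thresholds, removal of outliers beyond ±2.5σ, and a hinge-type curve (age-invariant up to a breakpoint, then accelerating) standing in for the paper's "biologically meaningful" model. The data are synthetic.

```python
# Minimal sketch of the analysis pipeline described above, under assumptions:
# log-transformed thresholds, +/-2.5 sigma outlier removal, and a hinge-type
# model (flat up to a breakpoint, then accelerating) as a stand-in for the
# paper's unspecified model.
import numpy as np
from scipy.optimize import curve_fit

def hinge_model(age, baseline, breakpoint, accel):
    """Constant threshold up to the breakpoint, accelerating increase beyond it."""
    excess = np.clip(age - breakpoint, 0.0, None)
    return baseline + accel * excess ** 2

def fit_normal_aging(age, log_threshold):
    # Remove outliers outside the +/-2.5 sigma band.
    mu, sigma = log_threshold.mean(), log_threshold.std()
    keep = np.abs(log_threshold - mu) <= 2.5 * sigma
    popt, _ = curve_fit(hinge_model, age[keep], log_threshold[keep],
                        p0=(mu, 50.0, 1e-4))
    return popt  # baseline, breakpoint (years), acceleration

# Synthetic illustration only; no real participant data.
rng = np.random.default_rng(0)
age = rng.uniform(10, 86, 285)
log_ct = hinge_model(age, -1.7, 50, 3e-4) + rng.normal(0, 0.05, age.size)
print(fit_normal_aging(age, log_ct))
```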
DOCUMENT
Published in Mikroniek, no. 6, 2018. In manufacturing environments where collaborative robots are employed, conventional computer vision algorithms struggle with robust localisation and detection of products, due to changing illumination conditions and shadows caused by a human sharing the workspace with the robotic system. To enhance the robustness of vision applications, machine learning with neural networks is explored. The performance of machine-learning algorithms versus conventional computer vision algorithms is studied using a generic user scenario from the manufacturing process: the assembly of a product through localisation, identification and manipulation of building blocks.
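The article's own networks and data are not reproduced here; as a toy illustration of the learned-versus-handcrafted comparison, the sketch below trains a small neural network (scikit-learn's MLPClassifier) on colour histograms of block crops and compares it with a fixed colour threshold, using synthetic data with varying illumination as a stand-in for the real images.

```python
# Minimal sketch: a small neural network classifying building-block crops from
# colour histograms versus a fixed-threshold baseline. Synthetic data only;
# the article's actual networks and images are not reproduced.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def colour_histogram(crop: np.ndarray, bins: int = 8) -> np.ndarray:
    """Concatenated per-channel histogram of an RGB crop."""
    return np.concatenate([np.histogram(crop[..., c], bins=bins, range=(0, 256),
                                        density=True)[0] for c in range(3)])

def synth_crop(colour: str, gain: float) -> np.ndarray:
    """Synthetic red or blue block under a global illumination gain."""
    base = np.zeros((32, 32, 3))
    base[..., 0 if colour == "red" else 2] = 200
    return np.clip(base * gain + rng.normal(0, 10, base.shape), 0, 255)

crops = [synth_crop(c, g) for c in ("red", "blue") for g in rng.uniform(0.3, 1.2, 200)]
labels = np.array([0] * 200 + [1] * 200)
X = np.array([colour_histogram(c) for c in crops])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
print("learned classifier accuracy:", clf.score(X_te, y_te))

# Conventional baseline: fixed threshold on the mean red channel of the crop.
baseline = np.array([1 if c[..., 0].mean() < 100 else 0 for c in crops])
print("fixed-threshold accuracy:", (baseline == labels).mean())
```

Under dim illumination the fixed threshold starts misclassifying red blocks, while the learned classifier, having seen varying gains during training, remains robust; this mirrors the shadow and lighting problem described in the article.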
MULTIFILE
The objective of this study was to determine whether a 3-dimensional computer vision automatic locomotion scoring (3D-ALS) method could outperform human observers in classifying cows as lame or nonlame and in detecting cows affected or nonaffected by specific types of hoof lesion. Data collection was carried out in 2 experimental sessions (5 months apart).
MULTIFILE
What are the key success factors for bringing research, education and entrepreneurship together in such a way that 'it clicks'? For companies in the smart factory, the challenge for the future lies in data science: turning raw (sensor) data into (meaningful) information and knowledge with which products and services can be improved. Also includes the programme of the symposium held on the occasion of the inauguration, 3 December 2015.
MULTIFILE
Shared Vision Planning (SVP) is a collaborative approach to water (resource) management that combines three practices: (1) traditional water resources planning; (2) structured participation of stakeholders; (3) (collaborative) computer modeling and simulation. The authors argue that there are ample opportunities for learning and innovation in SVP when we look at it as a form of Policy Analysis (PA) in a multi-actor context. SVP faces three classic PA dilemmas: (1) the role of experts and scientific knowledge in policymaking; (2) the design and management of participatory and interactive planning processes; and (3) the (ab)use of computer models and simulations in (multi-actor) policymaking. In dealing with these dilemmas, SVP can benefit from looking at the richness of PA methodology, such as for stakeholder analysis and process management. And it can innovate by incorporating some of the rapid developments now taking place in the field of (serious) gaming and simulation (S&G) for policy analysis. In return, the principles, methods, and case studies of SVP can significantly enhance how we perform PA for multi-actor water (resource) management.
DOCUMENT
Computer security incident response teams (CSIRTs) respond to a computer security incident when the need arises. Failure of these teams can have far-reaching effects for the economy and national security. CSIRTs often have to work on an ad hoc basis, in close cooperation with other teams, and in time-constrained environments. It could be argued that under these working conditions CSIRTs would be likely to encounter problems. A needs assessment was carried out to see to what extent this argument holds true. We constructed an incident response needs model to assist in identifying areas that require improvement. We envisioned a model consisting of four assessment categories: Organization, Team, Individual and Instrumental. Central to this is the idea that both problems and needs can have an organizational, team, individual, or technical origin, or a combination of these levels. To gather data we conducted a literature review. This resulted in a comprehensive list of challenges that could hinder, and needs that could improve, the performance of CSIRTs. Then, semi-structured, in-depth interviews were held with team coordinators and team members of five public- and private-sector Dutch CSIRTs to ground these findings in practice and to identify gaps between current and desired incident handling practices. This paper presents the findings of our needs assessment and ends with a discussion of potential solutions to problems with performance in incident response. https://doi.org/10.3389/fpsyg.2017.02179 LinkedIn: https://www.linkedin.com/in/rickvanderkleij1/
MULTIFILE
Cozmo is a real-life robot designed to interact with people by playing games, making sounds, expressing emotions on an LCD screen, and through many other pre-programmable functions. We present the development and implementation of an educational platform for the Cozmo mobile robot with several features, including a web server for the user interface, computer vision, voice recognition and robot trajectory tracking control. Functions for educational purposes were implemented, including mathematical operations, spelling, directions, and question functions, which give teachers more flexibility to create their own scripts. In this system, a cloud voice recognition tool was implemented to improve the interaction between Cozmo and the users. A cloud computer vision system was also used to perform object recognition with Cozmo's camera, to be applied in educational games. Other functions were created for controlling Cozmo's emotions and motors in order to create more sophisticated scripts. To run the functions on the Cozmo robot, an interpreter algorithm was developed that translates them into Cozmo's programming language. To validate this work, the proposed framework was presented to several elementary school teachers (classes with students between 4 and 12 years old). The impressions of students and teachers are reported in this text and indicate that the proposed system can be a useful educational tool.
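The platform's own function names and interpreter output are not given in the abstract; as a rough sketch of the kind of script it could translate to, the example below uses the publicly available Anki `cozmo` Python SDK to have the robot speak a mathematics question and drive in response. The question text and movements are purely illustrative.

```python
# Minimal sketch of a Cozmo script of the kind the platform's interpreter could
# generate; the question text and movements are illustrative assumptions, not
# the platform's actual output.
import cozmo
from cozmo.util import degrees, distance_mm, speed_mmps

def maths_question(robot: cozmo.robot.Robot):
    """Ask a simple sum, then celebrate with a short drive-and-turn."""
    robot.say_text("What is three plus four?").wait_for_completed()
    # In the real platform the answer would come from cloud voice recognition;
    # here we simply assume it was answered correctly.
    robot.say_text("Seven! Well done.").wait_for_completed()
    robot.drive_straight(distance_mm(100), speed_mmps(50)).wait_for_completed()
    robot.turn_in_place(degrees(360)).wait_for_completed()

cozmo.run_program(maths_question)
```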
DOCUMENT
Brochure from the inauguration of Klaas Dijkstra, professor of Computer Vision and Data Science
DOCUMENT