The last decade has seen increasing industrial demand for computerized visual inspection. Applications are rapidly becoming more complex, often with more demanding real-time constraints. However, since 2004 the clock frequency of CPUs has not increased significantly. Computer vision applications demand ever more processing power but are limited by the performance of sequential processor architectures. The only way to get more performance from commodity hardware, such as multi-core processors and graphics cards, is parallel programming. This article focuses on a practical question: how can the processing time of vision algorithms be reduced through parallelization, in an economical way, while keeping them executable on multiple platforms?
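The data-parallel approach the question points at can be sketched in a few lines. This is a minimal, hypothetical illustration using only Python's standard library; the row-wise split and function names are assumptions for the example, not the article's implementation:

```python
# Illustrative sketch: many vision operations are embarrassingly parallel,
# because each row (or tile) of the image can be processed independently.
from multiprocessing import Pool

def threshold_row(row, cut=128):
    """Binarize one row of grayscale pixels - independent of all other rows."""
    return [255 if p >= cut else 0 for p in row]

def threshold_image_parallel(image, workers=4):
    """Distribute rows over worker processes; scales with available cores."""
    with Pool(processes=workers) as pool:
        return pool.map(threshold_row, image)

if __name__ == "__main__":
    img = [[10, 200, 130],
           [250, 5, 128]]
    print(threshold_image_parallel(img))  # [[0, 255, 255], [255, 0, 255]]
```

On real images the per-row work would be a filter or feature extractor rather than a threshold, but the structure, splitting the data and mapping one function over the chunks, is the same.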
This paper describes work carried out by a group of I3 students at Philips CFT in Eindhoven, the Netherlands. I3 is an initiative of Fontys University of Professional Education, also located in Eindhoven. The work focuses on the use of computer vision in motion control. Experiments were done with several techniques for object recognition and tracking, and with guiding robot movement by means of computer vision. These experiments involve detection of coloured objects, object detection based on specific features, template matching with automatically generated templates, and interaction of a robot with a physical object viewed by a camera mounted on the robot.
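The template-matching technique mentioned above can be illustrated with a minimal sketch: slide the template over the image and score each position, here with a sum-of-absolute-differences (SAD) score. This is a hypothetical example, not the students' actual implementation:

```python
# Toy template matcher: exhaustive search for the lowest-SAD position.
def match_template(image, template):
    """Return (row, col) where template best matches image (lowest SAD)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            sad = sum(abs(image[r + i][c + j] - template[i][j])
                      for i in range(th) for j in range(tw))
            if sad < best:
                best, best_pos = sad, (r, c)
    return best_pos

image = [[0, 0, 0, 0],
         [0, 9, 8, 0],
         [0, 7, 9, 0],
         [0, 0, 0, 0]]
template = [[9, 8],
            [7, 9]]
print(match_template(image, template))  # (1, 1)
```

Production systems typically use normalized cross-correlation instead of SAD to tolerate illumination changes, but the sliding-window structure is the same.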
The objective of this study was to determine if a 3-dimensional computer vision automatic locomotion scoring (3D-ALS) method was able to outperform human observers for classifying cows as lame or nonlame and for detecting cows affected and nonaffected by specific type(s) of hoof lesion. Data collection was carried out in 2 experimental sessions (5 months apart).
Purpose: To establish age-related normal limits of monocular and binocular spatial vision under photopic and mesopic conditions. Methods: Photopic and mesopic visual acuity (VA) and contrast thresholds (CTs) were measured with both positive and negative contrast optotypes under binocular and monocular viewing conditions using the Acuity-Plus (AP) test. The experiments were carried out on participants (age range 10 to 86 years) who met pre-established normal-sight criteria. Mean and ±2.5σ limits were calculated within each 5-year subgroup. A biologically meaningful model was then fitted to predict mean values and upper and lower threshold limits for VA and CT as a function of age. The best-fit model parameters describe normal aging of spatial vision for each of the 16 experimental conditions investigated. Results: Of the 382 participants recruited for this study, 285 passed the selection criteria for normal aging. Log transforms were applied to ensure approximately normal distributions. Outliers were also removed for each of the 16 stimulus conditions investigated, based on the ±2.5σ criterion. VA, CTs and the overall variability were found to be age-invariant up to ~50 years under photopic conditions. A lower age-invariant limit of ~30 years was more appropriate for the mesopic range, with a gradual but accelerating increase in both mean thresholds and intersubject variability above this age. Binocular thresholds were smaller and much less variable than the thresholds measured in either eye. Results with negative contrast optotypes were significantly better than the corresponding results measured with positive contrast (p < 0.004).
Conclusions: This project has established the expected age limits of spatial vision for monocular and binocular viewing under photopic and high mesopic lighting with both positive and negative contrast optotypes using a single test, which can be implemented either in the clinic or in an occupational setting.
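The shape of such an aging curve, age-invariant up to a knee age, then rising, can be illustrated with a simple piecewise model. The parameters below are invented for illustration and are not the study's fitted values:

```python
# Hypothetical aging model: thresholds (in log units) are constant up to a
# knee age and rise linearly beyond it; real fits may use a steeper form.
import math

def log_threshold(age, base=0.0, knee=50.0, slope=0.01):
    """Log-units threshold: constant below the knee age, rising above it."""
    return base + slope * max(0.0, age - knee)

# Below the knee the model is age-invariant; above it thresholds increase.
print(log_threshold(30.0))                      # 0.0
print(math.isclose(log_threshold(70.0), 0.2))   # True (20 years past the knee)
```

Fitting mean and ±2.5σ curves of this kind per stimulus condition yields the age-dependent normal limits described in the abstract.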
* Poster 1 - Weet wat er leeft: By feeding data from computer vision, eDNA, scents and environmental sensors into an AI system, the distribution of species in greenhouses can be determined quickly. Such a system is being developed in the TKI PPS project Weet wat er leeft. HAS is involved in the Vision working group.
* Poster 2 - Weet wat er leeft: The more the better? The number of species on a trap plate can affect the performance of a vision model. Here we tested to what extent it is possible to automatically recognise different pest species on sticky plates.
* Poster 3 - Weet wat er leeft: Control agents in the picture. The number of species in a photo can affect the performance of a vision model. Here we tested to what extent it is possible to automatically recognise different biological control agents.
* Poster 4 - Weet wat er leeft: Shining a light on phones. To make good use of automatic image recognition, a model must be well tuned to its eventual use. Here we test the effect of different phones and lighting conditions on performance.
* Poster 5 - Weet wat er leeft: Combining vision and eDNA. Both computer vision and eDNA techniques are increasingly used to monitor species. Each has its advantages and limitations. In this study we investigate how they can complement each other.
* Poster 6 - Weet wat er leeft: Optimising the model. A model is only as good as the data it is trained on. Large differences in insect numbers and locations can introduce bias. In this project, the training data of a Custom Vision model was adjusted to determine whether the model improves as a result.
* Poster 7 - Weet wat er leeft: Combining techniques. Plate monitoring in the greenhouse is labour- and knowledge-intensive. This study investigated how automatic image recognition and eDNA complement each other as alternative monitoring techniques, both in a controlled setting and in a practical environment.
* Poster 8 - Weet wat er leeft: Different plates. The Custom Vision model (CV2) used within the project was trained on yellow dry plates. Here we investigate whether that model is also suitable for other plates, or whether new models need to be trained.
* Poster 9 - Weet wat er leeft: Sticky plates over time. In practice, sticky plates used for monitoring remain in the greenhouse for several weeks. This can affect the performance of a model, because insects age and their density increases.
Published in Mikroniek, no. 6, 2018. In manufacturing environments where collaborative robots are employed, conventional computer vision algorithms have trouble with robust localisation and detection of products, due to changing illumination conditions and shadows cast by a human sharing the workspace with the robotic system. To enhance the robustness of vision applications, machine learning with neural networks is explored. The performance of machine-learning algorithms versus conventional computer vision algorithms is studied by observing a generic user scenario for the manufacturing process: the assembly of a product through localisation, identification and manipulation of building blocks.
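A toy illustration (not the article's setup) of why fixed-rule detectors struggle with changing illumination, and of the kind of invariance a learned model can acquire:

```python
# A fixed intensity threshold breaks when the whole scene brightens;
# normalizing by the frame mean keeps the detection stable under that shift.
def detect_fixed(pixels, cut=100):
    """Mark pixels brighter than an absolute cut-off."""
    return [p > cut for p in pixels]

def detect_normalized(pixels, ratio=1.5):
    """Mark pixels brighter than a multiple of the frame's mean intensity."""
    mean = sum(pixels) / len(pixels)
    return [p > ratio * mean for p in pixels]

scene = [50, 50, 200, 50]            # one bright object pixel
brighter = [p + 80 for p in scene]   # a shadow lifts or a lamp turns on

print(detect_fixed(scene))      # [False, False, True, False]
print(detect_fixed(brighter))   # [True, True, True, True] - detector fooled
print(detect_normalized(scene) == detect_normalized(brighter))  # True
```

Neural networks can learn far richer invariances than this hand-built normalization, which is the motivation the abstract describes.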
What are the key success factors for bringing research, education and entrepreneurship together in such a way that 'it clicks'? For companies in the smart factory, the challenge for the future lies in data science: converting raw (sensor) data into meaningful information and knowledge with which products and services can be improved. Also includes the programme of the symposium held on the occasion of the inauguration on 3 December 2015.
Shared Vision Planning (SVP) is a collaborative approach to water (resource) management that combines three practices: (1) traditional water resources planning; (2) structured participation of stakeholders; and (3) (collaborative) computer modeling and simulation. The authors argue that there are ample opportunities for learning and innovation in SVP when we look at it as a form of Policy Analysis (PA) in a multi-actor context. SVP faces three classic PA dilemmas: (1) the role of experts and scientific knowledge in policymaking; (2) the design and management of participatory and interactive planning processes; and (3) the (ab)use of computer models and simulations in (multi-actor) policymaking. In dealing with these dilemmas, SVP can benefit from the richness of PA methodology, for example for stakeholder analysis and process management, and it can innovate by incorporating some of the rapid developments now taking place in the field of (serious) gaming and simulation (S&G) for policy analysis. In return, the principles, methods, and case studies of SVP can significantly enhance how we perform PA for multi-actor water (resource) management.
Computer security incident response teams (CSIRTs) respond to computer security incidents when the need arises. Failure of these teams can have far-reaching effects on the economy and national security. CSIRTs often have to work on an ad hoc basis, in close cooperation with other teams, and in time-constrained environments. It could be argued that under these working conditions CSIRTs would be likely to encounter problems. A needs assessment was done to see to what extent this argument holds true. We constructed an incident response needs model to assist in identifying areas that require improvement. We envisioned a model consisting of four assessment categories: Organization, Team, Individual and Instrumental. Central to this is the idea that both problems and needs can have an organizational, team, individual, or technical origin, or a combination of these levels. To gather data, we conducted a literature review. This resulted in a comprehensive list of challenges and needs that could hinder or improve, respectively, the performance of CSIRTs. Then, semi-structured in-depth interviews were held with team coordinators and team members of five public- and private-sector Dutch CSIRTs to ground these findings in practice and to identify gaps between current and desired incident handling practices. This paper presents the findings of our needs assessment and ends with a discussion of potential solutions to problems with performance in incident response. https://doi.org/10.3389/fpsyg.2017.02179
Cozmo is a real-life robot designed to interact with people by playing games, making sounds, expressing emotions on an LCD screen, and many other pre-programmable functions. We present the development and implementation of an educational platform for the Cozmo mobile robot with several features, including a web server for the user interface, computer vision, voice recognition, and robot trajectory tracking control. Functions for educational purposes were implemented, including mathematical operations, spelling, directions, and question functions, giving teachers more flexibility to create their own scripts. In this system, a cloud voice recognition tool was implemented to improve the interaction between Cozmo and its users. A cloud computer vision system was also used to perform object recognition with Cozmo's camera, to be applied in educational games. Other functions were created to control Cozmo's emotions and motors so that more sophisticated scripts can be written. To run these functions on the Cozmo robot, an interpreter algorithm was developed to translate them into Cozmo's programming language. To validate this work, the proposed framework was presented to several elementary school teachers (classes with students between 4 and 12 years old). Students' and teachers' impressions are reported in this text and indicate that the proposed system can be a useful educational tool.
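The interpreter idea can be sketched as a simple dispatch over script commands. The command names and the `RobotLog` stand-in below are hypothetical, for illustration only; they are not the actual Cozmo SDK API:

```python
# Toy interpreter: translate teacher-facing script lines into robot actions.
class RobotLog:
    """Stand-in for a robot back end; records actions instead of moving."""
    def __init__(self):
        self.actions = []
    def say(self, text):
        self.actions.append(f"say:{text}")
    def drive(self, cm):
        self.actions.append(f"drive:{cm}")

def interpret(script, robot):
    """Execute one command per line, e.g. 'say hello' or 'drive 10'."""
    for line in script.strip().splitlines():
        op, arg = line.split(maxsplit=1)
        if op == "say":
            robot.say(arg)
        elif op == "drive":
            robot.drive(int(arg))
        else:
            raise ValueError(f"unknown command: {op}")

robot = RobotLog()
interpret("say hello\ndrive 10", robot)
print(robot.actions)  # ['say:hello', 'drive:10']
```

In the platform described above, the back end would emit calls in Cozmo's own programming language instead of log entries, but the translation step has this shape.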