The last decade has seen increasing demand from industry for computerized visual inspection. Applications are rapidly becoming more complex, often with stricter real-time constraints. However, from 2004 onwards the clock frequency of CPUs has not increased significantly. Computer vision applications demand ever more processing power but are limited by the performance of sequential processor architectures. The only way to gain performance on commodity hardware, such as multi-core processors and graphics cards, is parallel programming. This article focuses on the practical question: how can the processing time of vision algorithms be reduced through parallelization, in an economical way, while still executing on multiple platforms?
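To make the idea concrete, the following minimal sketch (an illustration, not the article's actual method) exploits the simplest form of data parallelism in vision: a pixel-wise operation is applied to horizontal strips of an image on separate CPU cores using Python's multiprocessing; the same tiling idea carries over to GPU threads.

```python
# Minimal sketch of data-parallel image processing on a multi-core CPU.
# Hypothetical example, not the method from the article.
import numpy as np
from multiprocessing import Pool

def threshold_strip(strip: np.ndarray) -> np.ndarray:
    """Pixel-wise threshold; each pixel is independent, so strips can be
    processed in parallel without any communication."""
    return (strip > 128).astype(np.uint8) * 255

def threshold_parallel(image: np.ndarray, workers: int = 4) -> np.ndarray:
    strips = np.array_split(image, workers, axis=0)  # split into row bands
    with Pool(workers) as pool:
        return np.vstack(pool.map(threshold_strip, strips))

if __name__ == "__main__":
    img = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
    out = threshold_parallel(img)
    print(out.shape, out.dtype)
```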
Over the past few years, there has been an explosion of data science as a profession and an academic field. Its increasing impact and societal relevance are accompanied by important questions that reflect this development: how can data science become more responsible and accountable while also responding to key challenges such as bias, fairness, and transparency in a rigorous and systematic manner? This Patterns special collection brings together research and perspectives from academia, the public sector, and the private sector, showcasing original research articles and perspectives pertaining to responsible and accountable data science.
Despite the numerous business benefits of data science, the number of data science models in production is limited. Model deployment presents many challenges, and many organisations have little deployment knowledge. This research studied five model deployments in a Dutch government organisation. The study revealed that, as a result of model deployment, a data science subprocess is added to the target business process, the model itself can be adapted, model maintenance is incorporated into the model development process, and a feedback loop is established between the target business process and the model development process. These deployment effects and the related challenges differ between strategic and operational target business processes. Based on these findings, guidelines are formulated that can form a basis for future principles on how to successfully deploy data science models. Organisations can use these guidelines as suggestions for solving their own model deployment challenges.
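One of the reported effects, the feedback loop between the business process and model development, can be illustrated with a small hypothetical sketch: a deployed model is wrapped so that every prediction is logged with an id, and outcomes reported back later can be joined on that id for retraining. The class name `PredictionLog` is invented for the example, and a scikit-learn-style `predict` interface is assumed.

```python
# Hypothetical sketch of a deployment feedback loop: predictions are logged,
# outcomes are reported back later, and the joined pairs feed retraining.
import csv
import datetime
import uuid

class PredictionLog:
    def __init__(self, path: str = "predictions.csv"):
        self.path = path

    def record(self, features, prediction) -> str:
        pred_id = str(uuid.uuid4())
        with open(self.path, "a", newline="") as f:
            csv.writer(f).writerow(
                [pred_id, datetime.datetime.now().isoformat(), features, prediction]
            )
        return pred_id

    def report_outcome(self, pred_id: str, outcome) -> None:
        # A retraining job would join these outcome rows on pred_id to build
        # fresh training data from the live business process.
        with open(self.path, "a", newline="") as f:
            csv.writer(f).writerow([pred_id, "outcome", outcome])

def predict_with_logging(model, features, log: PredictionLog):
    prediction = model.predict([features])[0]  # assumed sklearn-style model
    pred_id = log.record(features, prediction)
    return prediction, pred_id  # the business process reports outcome by id
```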
Creating and testing the first brand segmentation model in augmented reality using the Microsoft HoloLens. Sanoma, together with SAMR, launched an online brand segmentation tool based on large-scale research. The brand model uses several brand values divided over three axes, but these cannot be displayed clearly in a 2D model. The space of the BSR Quality Planner can be seen as a 3-dimensional meaningful space defined by the terms used to typify the brands. The third axis concerns a behaviour-based dimension: from 'quirky behaviour' to 'standard-adjusted behaviour' (respectful, tolerant, solidarity). Virtual and augmented reality make it possible to clearly display (and experience) the model in 3D. The Academy for Digital Entertainment (ADE) of Breda University of Applied Sciences has created the BSR Quality Planner as a hologram: the world's first segmentation model in AR. Breda University of Applied Sciences (professorship Digital Media Concepts) deployed hologram technology to use and demonstrate the planning tool in 3D. With the Microsoft HoloLens the user experiences the model in 3D while still seeing the actual surroundings (unlike VR, with AR the space in which the user is active remains visible). The HoloLens is wireless, so the user can easily walk around the hologram. The device is operated with finger gestures, eye movements, or voice commands, and other people present can watch along on a computer screen. Research showed the added value of the AR model.
Partners: Sanoma Media, MarketResponse (SAMR)
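The notion of a 3-dimensional brand space can be made concrete with a small sketch: each brand is a point on three axes, and brand similarity is distance between points. The brand names and axis scores below are invented for illustration; only the third axis, running from 'quirky' to 'standard-adjusted' behaviour, is taken from the description above.

```python
# Hypothetical sketch of a 3D brand space: brands are points on three axes,
# and similarity is the Euclidean distance between points.
import math

# Invented scores in [-1, 1]; the third coordinate runs from 'quirky' (-1)
# to 'standard-adjusted' (+1) behaviour.
brands = {
    "Brand A": (0.7, -0.2, 0.9),
    "Brand B": (0.6, -0.1, 0.8),
    "Brand C": (-0.8, 0.5, -0.6),
}

def distance(a: str, b: str) -> float:
    return math.dist(brands[a], brands[b])

print(distance("Brand A", "Brand B"))  # small: similar positioning
print(distance("Brand A", "Brand C"))  # large: very different positioning
```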
The focus of the research is 'Automated Analysis of Human Performance Data'. The three interconnected main components are (i) Human Performance, (ii) Monitoring Human Performance, and (iii) Automated Data Analysis. Human performance is both the process and the result of a person interacting with a context to engage in tasks, whereas the performance range is determined by the interaction between the person and the context. Cheap and reliable wearable sensors allow large amounts of data to be gathered, which is very useful for understanding, and possibly predicting, the performance of the user. Given the amount of data such sensors generate, manual analysis becomes infeasible; tools should be devised for automated analysis that looks for patterns, features, and anomalies. Such tools can help transform wearable sensors into reliable high-resolution devices and help experts analyse wearable sensor data in the context of human performance and use it for diagnosis and intervention. Shyr and Spisic describe automated data analysis as 'a systematic process of inspecting, cleaning, transforming, and modelling data with the goal of discovering useful information, suggesting conclusions and supporting decision making for further analysis'. Their philosophy is to do the tedious part of the work automatically and allow experts to focus on performing their research and applying their domain knowledge. However, automated data analysis means that the system has to teach itself to interpret interim results and to iterate. Knuth stated: 'Science is knowledge which we understand so well that we can teach it to a computer; and if we don't fully understand something, it is an art to deal with it', and 'Since the notion of an algorithm or a computer program provides us with an extremely useful test for the depth of our knowledge about any given subject, the process of going from an art to a science means that we learn how to automate something' [Knuth, 1974]. The knowledge on human performance and its monitoring is to be 'taught' to the system. To be able to construct automated analysis systems, an overview of the essential processes and components of these systems is needed.
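The inspect-clean-transform-model cycle quoted from Shyr and Spisic can be sketched on a simulated wearable-sensor signal; the z-score anomaly rule below is an illustrative stand-in for the pattern- and anomaly-detection tools the text calls for, and all data is synthetic.

```python
# Illustrative inspect/clean/transform/model pass over a simulated
# wearable-sensor signal (all data synthetic).
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(1.0, 0.1, 1000)  # simulated sensor readings
signal[::97] = np.nan                # sensor dropouts
signal[500] = 5.0                    # one anomalous spike

# Inspect: summarise missingness before touching the data.
print(f"missing: {np.isnan(signal).sum()} of {signal.size}")

# Clean: interpolate the dropouts from neighbouring samples.
idx = np.arange(signal.size)
mask = np.isnan(signal)
signal[mask] = np.interp(idx[mask], idx[~mask], signal[~mask])

# Transform: rolling mean exposes the underlying trend.
window = 25
smooth = np.convolve(signal, np.ones(window) / window, mode="same")

# Model: flag samples far (here 4 std devs) from the smoothed trend.
residual = signal - smooth
anomalies = np.flatnonzero(np.abs(residual) > 4 * residual.std())
print("anomalous samples:", anomalies)
```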
Healthcare is struggling with staff shortages and long waiting lists, which seriously affect the quality of care for patients. The increasing ageing of the population and a growing shortage of trained staff exacerbate these problems, putting both professional caregivers and informal carers under great pressure [1]. In this project, AI research is conducted into the feasibility of automatically detecting the condition of people in need of care. This offers opportunities to relieve the pressure on professional caregivers and informal carers by automating tasks and supporting them in identifying patients' needs. The current shortages in care are alarming and therefore untenable for the quality of care; automation is essential to safeguard it. The consortium consists of the care institution De Zijlen, Valtes, and the NHL Stenden professorship Computer Vision & Data Science. The question of automatically detecting the condition of people in need of care originated with De Zijlen and Valtes. Together, the technical feasibility is investigated to support the business case. In addition, the project aims to reach a broader network of interest organisations, developers, and end users with a proof of concept. The work is carried out by a multidisciplinary team of students, lecturer-researchers, professors, a developer, and potential end users.