This article deals with automatic object recognition: given a grey-level image that may contain many objects, a particular object must be recognized and localized based on its shape. We assume that this shape has no special characteristics on which a dedicated recognition algorithm could be based (if we knew the object were circular, for example, we could use a Hough transform; if we knew it were the only object with grey level 90, simple thresholding would suffice). Our starting point is therefore an object of arbitrary shape. The image in which the object is sought is called the Search Image. A well-known technique for this task is Template Matching, which is described first.
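The core of template matching can be sketched as a brute-force normalized cross-correlation (NCC) search: slide the template over every position in the search image, score each window, and keep the best. The NumPy example below is a minimal sketch with a synthetic image, not a production implementation; real systems typically use FFT-based correlation or a library routine such as OpenCV's cv2.matchTemplate.

```python
import numpy as np

def match_template(image, template):
    """Exhaustive template matching: slide the template over every
    position in the image, score each window with normalized
    cross-correlation, and return the best position and its score."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -1.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            window = image[r:r + th, c:c + tw]
            w = window - window.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            if denom == 0:            # flat window: NCC undefined, skip
                continue
            score = float((w * t).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Synthetic search image: a bright 3x3 square on a dark background,
# plus mild noise so no window is perfectly flat.
rng = np.random.default_rng(0)
image = rng.normal(0.0, 1.0, (12, 14))
image[5:8, 6:9] += 200.0

# Template: the same bright 3x3 square centred in a 5x5 dark patch.
template = np.zeros((5, 5))
template[1:4, 1:4] = 200.0

pos, score = match_template(image, template)
print(pos, round(score, 3))   # best offset (4, 5), score close to 1.0
```

The quadratic sliding-window loop makes the cost explicit; for realistic image sizes the same score is computed far more efficiently in the frequency domain.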
This paper describes work done by a group of I3 students at Philips CFT in Eindhoven, the Netherlands. I3 is an initiative of Fontys University of Professional Education, also located in Eindhoven. The work focuses on the use of computer vision in motion control. Experiments were carried out with several techniques for object recognition and tracking, and with guiding robot movement by means of computer vision. These experiments involve the detection of coloured objects, object detection based on specific features, template matching with automatically generated templates, and the interaction of a robot with a physical object viewed by a camera mounted on the robot.
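As an illustration of the coloured-object detection mentioned above, the sketch below thresholds a synthetic RGB image for clearly red pixels and returns their centroid. The threshold values (r_min, margin) are assumed for this example only; a real pipeline would more likely threshold in HSV space and filter connected components before computing a centroid.

```python
import numpy as np

def find_red_centroid(rgb, r_min=150, margin=50):
    """Return the (row, col) centroid of pixels that are clearly red:
    red channel above r_min and dominating green and blue by `margin`."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mask = (r > r_min) & (r - g > margin) & (r - b > margin)
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None                  # no red object found
    return rows.mean(), cols.mean()

# Synthetic 20x20 image: grey background with a red 4x4 square
# at rows 8..11, columns 3..6.
img = np.full((20, 20, 3), 90, dtype=np.uint8)
img[8:12, 3:7] = (220, 40, 40)

centroid = find_red_centroid(img)
print(centroid)   # (9.5, 4.5), the centre of the red square
```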
Digital surveillance technologies using artificial intelligence (AI) tools such as computer vision and facial recognition are becoming cheaper and easier to integrate into governance practices worldwide. Morocco serves as an example of how such technologies are becoming key tools of governance in authoritarian contexts. Based on qualitative fieldwork, including semi-structured interviews, observation, and extensive desk reviews, this chapter focuses on the role played by AI-enhanced technology in urban surveillance and in the control of migration across the Moroccan–Spanish border. Two cross-cutting issues emerge: first, while international donors provide funding for urban and border surveillance projects, their role in enforcing transparency mechanisms in the projects' implementation remains limited; second, Morocco’s existing legal framework hinders any kind of public oversight. Video surveillance is treated as the sole prerogative of the security apparatus, and so far public actors have avoided engaging directly with the topic. The lack of institutional oversight and public debate on the matter raises serious concerns about the extent to which the deployment of such technologies affects citizens’ rights. AI-enhanced surveillance is thus an intrinsically transnational challenge in which private interests of economic gain and public interests of national security collide with citizens’ human rights across the Global North/Global South divide.
GAMING HORIZONS is a multidisciplinary project that aims to expand the research and innovation agenda on serious gaming and gamification. The project is particularly interested in the use of games for learning and cultural development. Gamification, and gaming more broadly, is very important from a socio-economic point of view, but over the past few years it has been at the centre of critical and challenging debates, which highlighted issues such as gender and minority representation and exploitative game mechanics. Our project’s key contention is that it is important for the European ICT community to engage with design trends and social themes that have profoundly affected the mainstream and ‘independent’ game development cultures over the past few years, especially because the boundaries between leisure and serious games are increasingly blurred. GAMING HORIZONS is a direct response to the official recognition by the H2020 programme of work that multidisciplinary research can help advance the integration between Responsible Research and Innovation (RRI) and the Social Sciences and Humanities (SSH). The project’s objective is to enable a higher uptake of socially responsible ICT-related research in relation to gaming. This objective will be achieved through a research-based exchange between communities of developers, policy makers, users, and researchers. The methodology will involve innovative data collection activities and consultations with a range of stakeholders over a period of 14 months. We will interrogate the official ‘H2020 discourse’ on gamification, with a particular focus on ‘gamified learning’, whilst engaging with experts, developers, and critical commentators through interviews, events, workshops, and systematic dialogue with an Advisory Board.
Ultimately, GAMING HORIZONS will help identify future directions at the intersection of ethics, social research, and both the digital entertainment and serious games industries. EU funding: the 14-month research project ‘Gaming Horizons’ was funded by the European Commission through the Horizon 2020 research and innovation programme.
In the past decade, smaller drones in particular have started to claim their share of the sky, owing to their potential civil applications as flying eyes, noses, and, very recently, flying hands. Network partners from various application domains (safety, agriculture, energy, and logistics) are curious about the next leap in this field, namely collaborative Sky-workers. Their main practical question is essentially: “Can multiple small drones together transport a large object at high altitude in outdoor applications?” The industrial partners, together with Saxion and RUG, will conduct a feasibility study to investigate whether it is possible to develop these collaborative Sky-workers and to identify which possibilities this new technology will offer. Design-science research methodology, which focuses on solution-oriented applied research involving multiple iterations with rigorous evaluations, will be used to research the feasibility of the main technological building blocks:
• Accurate localization based on onboard sensors.
• A safe and optimal interaction controller for collaborative aerial transport.
Within this project, the first proofs of concept will be developed. The results of this project will be used to expand the existing network and to formulate a larger project addressing additional critical aspects, in order to develop a complete framework for collaborative drones.
The demand for mobile agents that perform various tasks in industrial environments has grown tremendously in recent years. However, changing environments, security considerations, and robustness against failure remain major challenges for autonomous agents operating alongside other mobile agents, and such problems are currently largely unsolved. Collaborative multi-platform Cyber-Physical Systems (CPSs), in which different agents flexibly contribute their respective equipment and capabilities to form a symbiotic network solving multiple objectives simultaneously, are highly desirable. Our proposed SMART-AGENTS platform will provide flexibility and modularity for multi-objective solutions, demonstrated in two industrial domains: logistics (cycle counting in warehouses) and agriculture (pest and disease identification in greenhouses). Aerial vehicles are limited in computational power by weight constraints, but they offer great mobility, providing access to otherwise unreachable places and an “eagle eye” that informs about terrain and obstacles through pictures and videos. Specialized autonomous agents carrying optical sensors will enable disease classification and product recognition, improving greenhouse and warehouse productivity. Newly developed micro-electromechanical systems (MEMS) sensor arrays will create 3D flow-based images of the surroundings, even in dark and hazy conditions, contributing to the multi-sensor system, which also includes cameras, wireless signatures, and magnetic-field information shared among the symbiotic fleet. Integration of mobile systems such as smartphones, which are not explicitly controlled, will provide valuable information about the movement of humans and equipment in the environment by generating data from relative-positioning sensors such as wireless and magnetic signatures.
Newly developed algorithms will enable robust autonomous navigation and control of the fleet in dynamic environments, incorporating the multi-sensor data generated by the variety of mobile actors. The proposed SMART-AGENTS platform will use real-time 5G communication and edge computing, providing new organizational structures to cope with scalability and the integration of multiple devices and agents. It will enable a symbiosis of the complementary CPSs, whose combined equipment yields efficiency and versatility of operation.