This article deals with automatic object recognition. The goal is to recognize and localize a particular object, based on its shape, in a grey-level image that may contain many objects. The assumption is that this shape has no special characteristics on which a dedicated recognition algorithm could be based (e.g. if we knew the object was circular, we could use a Hough transform; if we knew it was the only object with grey level 90, simple thresholding would suffice). Our starting point is therefore an object of arbitrary shape. The image in which the object is sought is called the Search Image. A well-known technique for this task is Template Matching, which is described first.
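The core of template matching can be sketched in a few lines: slide the template over the search image and score each position with normalized cross-correlation (NCC). The following is a minimal, deliberately slow numpy sketch; the 8x8 image and 2x2 template are invented for illustration, and a real system would use an optimized library routine rather than this double loop.

```python
import numpy as np

def match_template(image, template):
    """Slide the template over the image and return a map of
    normalized cross-correlation (NCC) scores in [-1, 1]."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            scores[y, x] = (p * t).sum() / denom if denom > 0 else 0.0
    return scores

# Tiny synthetic example: embed a 2x2 "object" in an empty image.
image = np.zeros((8, 8))
image[3:5, 4:6] = np.array([[1.0, 2.0], [3.0, 4.0]])
template = np.array([[1.0, 2.0], [3.0, 4.0]])

score_map = match_template(image, template)
# The best match is the peak of the score map.
peak = np.unravel_index(np.argmax(score_map), score_map.shape)
```

Because NCC subtracts the mean and divides by the patch energy, the score is invariant to uniform brightness and contrast changes, which is why it is preferred over a plain sum of squared differences.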
DOCUMENT
This paper describes work done by a group of I3 students at Philips CFT in Eindhoven, the Netherlands. I3 is an initiative of Fontys University of Professional Education, also located in Eindhoven. The work focuses on the use of computer vision in motion control. Experiments were carried out with several techniques for object recognition and tracking, and with guiding a robot's movement by means of computer vision. These experiments involve detection of coloured objects, object detection based on specific features, template matching with automatically generated templates, and interaction of a robot with a physical object viewed by a camera mounted on the robot.
DOCUMENT
The paper introduced an automatic score detection model using object detection techniques. The performance of seven models belonging to two different architectural setups was compared. YOLOv8n, YOLOv8s, YOLOv8m, RetinaNet-50, and RetinaNet-101 are single-stage detectors, while Faster RCNN-50 and Faster RCNN-101 belong to the two-stage detector category. The dataset was manually captured at the shooting range and expanded by generating more versatile data using Python code. Before training, the dataset was resized (640x640) and augmented using the Roboflow API. The trained models were then assessed on the test dataset, and their performance was compared using metrics such as mAP50, mAP50-95, precision, and recall. The results showed that the YOLOv8 models can detect multiple objects with good confidence scores.
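The metrics above all build on intersection-over-union (IoU) between predicted and ground-truth boxes. The sketch below shows, with invented toy boxes, how precision and recall are computed at a single IoU threshold; full mAP additionally sorts predictions by confidence and averages precision over the recall curve (and, for mAP50-95, over several IoU thresholds), which is omitted here for brevity.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(pred_boxes, gt_boxes, iou_thr=0.5):
    """Greedily match each prediction to an unmatched ground-truth box;
    a match with IoU >= iou_thr counts as a true positive."""
    matched, tp = set(), 0
    for p in pred_boxes:
        for i, g in enumerate(gt_boxes):
            if i not in matched and iou(p, g) >= iou_thr:
                matched.add(i)
                tp += 1
                break
    fp = len(pred_boxes) - tp   # unmatched predictions
    fn = len(gt_boxes) - tp     # missed ground-truth objects
    precision = tp / (tp + fp) if pred_boxes else 0.0
    recall = tp / (tp + fn) if gt_boxes else 0.0
    return precision, recall

# Toy example: one good detection, one spurious one, one missed object.
gt = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(1, 1, 10, 10), (50, 50, 60, 60)]
p, r = precision_recall(preds, gt)
```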
DOCUMENT
Under the umbrella of artistic sustenance, I question the life of materials, subjective value structures, and the working conditions underlying exhibition making through three interconnected areas of inquiry. Material Life and Ecological Impact: how can the accumulation of physical materials and storage after exhibitions be avoided? I aim to highlight the provenance and afterlife of exhibition materials in my practice, seeking economic and ecological alternatives to traditional practices through sustainable solutions such as borrowing, reselling, and alternative storage methods that could transform how exhibition materials are handled, stored, and circulated. Value Systems and Economic Conditions: what do we mean when we talk about 'value' in relation to art? By examining the flow of financial value in contemporary art and addressing the subjectivity of worth in art-making and artists' livelihoods, I question traditional notions of sculptural skill while advocating for recognition of conceptual labour. The research considers how artists might be compensated for the elegance of thought rather than just material output. Text as Archive and Speculation: how can text store, speculate on, and circulate the invisible labour and layers of exhibition making? Through titles, material lists, and exhibition texts, I explore writing's potential to uncover latent structures and document invisible labour, considering text both as an archiving method and as a tool for speculating about future exhibitions. Using personal practice as a case study, 'Conditions for Raw Materials' seeks to question notions of value in contemporary art, develop alternative economic models, and make visible the material, financial, and relational flows within exhibitions. The research will manifest through international exhibitions, a book combining poetic auto-theoretical reflection with exhibition speculation, new teaching formats, and long-term investigations.
Following the 'sticky relations' of intimacy, economy, and conditions, each exhibition serves as a case study exploring exhibition making from emotional, ecological, and economic perspectives.
DOCUMENT
Granular materials (GMs) are simply collections of individual particles, e.g., rice, coffee, or iron ore. Although straightforward in appearance, GMs are key to several processes in the chemical-pharmaceutical, high-tech, agri-food, and energy industries. Examples include laser sintering in additive manufacturing, tableting in pharma, or simply mixing your favourite crunchy muesli in the food industry. However, these bulk material handling processes are notorious for their inefficiency and ineffectiveness, which affects overall expenses and product quality. To understand and enhance the quality of a process, GMs industries use computer simulations, much as cars and aeroplanes have been designed and optimised since the 1990s. Just as engineers use advanced computer models to develop fuel-efficient vehicle designs, energy-saving granular processes are developed using physics-based simulation models. Although physics-based models can effectively optimise large-scale processes, creating and simulating a fully representative virtual prototype of a GMs process is highly iterative, computationally expensive, and time intensive. This is where, given the available data, machine learning (ML) could be of immense value. Just as ML has transformed healthcare, energy, and other top sectors, recent ML-based developments for GMs show serious promise for faster virtual prototyping at reduced computational cost, enabling industries to design and optimise rapidly and enhancing real-time data-driven decision making. GranML aims to empower the GMs industries with ML. We will do so by (i) performing an in-depth GMs-ML literature review; (ii) developing open-access ML implementation guidelines; and (iii) delivering an open-source proof-of-concept for an industry-relevant use case.
Eventually, our follow-up mission is to build upon this knowledge by (i) expanding the consortium; (ii) co-developing a unified methodology for efficient computer prototyping that combines physics- and ML-based technologies for GMs; (iii) enhancing the existing computer-modelling infrastructure; and (iv) validating the approach through industry-focused demonstrators.
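The surrogate-modelling idea behind replacing expensive physics-based simulation with ML can be sketched in a toy form: run the "expensive" simulator at a handful of design points, fit a cheap model to those samples, and query the cheap model everywhere else. Everything here is invented for illustration; a real GMs workflow would use a discrete element method (DEM) simulator and a proper ML regressor, not a closed-form function and a polynomial fit.

```python
import numpy as np

# Hypothetical "expensive" simulation: angle of repose (degrees) as a
# function of an inter-particle friction coefficient. In reality this
# would be a full DEM run taking hours, not a formula.
def expensive_simulation(friction):
    return 20.0 + 15.0 * friction - 4.0 * friction ** 2

# Run the simulator at a few design points only.
x_train = np.linspace(0.1, 0.9, 5)
y_train = expensive_simulation(x_train)

# Cheap surrogate: a quadratic fit standing in for an ML regressor.
coeffs = np.polyfit(x_train, y_train, deg=2)
surrogate = np.poly1d(coeffs)

# The surrogate now predicts unseen operating points instantly,
# which is what enables rapid design iteration.
prediction = surrogate(0.5)
```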
DOCUMENT
The integration of renewable energy resources, controllable devices, and energy storage into electricity distribution grids requires decentralized energy management to ensure a stable distribution process. This demands the full integration of information and communication technology into the control of distribution grids. Supervisory Control and Data Acquisition (SCADA) is used to communicate measurements and commands between individual components and the control server. In the future, this control is needed especially at medium voltage and probably also at low voltage. This leads to increased connectivity and thereby makes the system more vulnerable to cyber-attacks. According to the research agenda NCSRA III, the energy domain is becoming a prime target for cyber-attacks, e.g., abusing control protocol vulnerabilities. Detection of such attacks in SCADA networks is challenging when relying only on existing network Intrusion Detection Systems (IDSs). Although these systems were designed specifically for SCADA, they do not necessarily detect malicious control commands sent in a legitimate format. However, analyzing each command in the context of the physical system has the potential to reveal such inconsistencies. We propose to use dedicated intrusion detection mechanisms, which are fundamentally different from the existing techniques used in the Internet. Up to now, distribution grids are monitored and controlled centrally: measurements are taken at field stations and sent to the control room, which then issues commands back to actuators. In future smart grids, communication with and remote control of field stations is required. Attackers who gain access to the corresponding communication links to substations can intercept and even exchange commands, which would not be detected by central security mechanisms. We argue that centralized SCADA systems should be enhanced by a distributed intrusion-detection approach to meet these new security challenges.
Recently, as a first step, a process-aware monitoring approach has been proposed as an additional layer that can be applied directly at Remote Terminal Units (RTUs). However, this allows purely local consistency checks. Instead, we propose a distributed and integrated approach to process-aware monitoring, which includes knowledge about the grid topology and measurements from neighboring RTUs to detect malicious incoming commands. The proposed approach requires a near real-time model of the relevant physical process, direct and secure communication between adjacent RTUs, and synchronized sensor measurements in trustable real-time, labeled with accurate global time-stamps. We investigate to what extent the grid topology can be integrated into the IDS while maintaining near real-time performance. Based on topology information and efficient solving of the power flow equations, we aim to detect, e.g., inconsistent voltage drops or the occurrence of over- or under-voltage and -current. In this way, centrally requested switching commands and transformer tap-change commands can be checked for consistency and safety based on the current state of the physical system. The developed concepts are not only relevant for increasing the security of distribution grids but are also crucial for future developments such as the safe integration of microgrids into distribution networks or the operation of decentralized heat or biogas networks.
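The process-aware consistency check described above can be illustrated with a deliberately simplified sketch: before executing a centrally requested tap-change command, an RTU predicts its effect on local and neighboring voltages and rejects the command if any predicted voltage leaves the safe band. All names, thresholds, and the crude linear voltage model are assumptions made up for this illustration; a real implementation would solve the power flow equations on the actual grid topology.

```python
# Per-unit voltage limits assumed for this toy check.
V_MIN, V_MAX = 0.95, 1.05

def command_is_safe(command, local_state, neighbor_states):
    """Accept a tap-change command only if the predicted voltage after
    the change stays within limits at this RTU and at its neighbors."""
    if command["type"] != "tap_change":
        return True  # other command types would be checked elsewhere
    # Crude stand-in for a power flow solution: assume one tap step
    # shifts the local voltage by about 1.25 % of nominal.
    delta = 0.0125 * command["steps"]
    predicted_local = local_state["voltage_pu"] + delta
    if not (V_MIN <= predicted_local <= V_MAX):
        return False
    # Neighboring RTUs see an attenuated effect of the tap change.
    for s in neighbor_states:
        predicted = s["voltage_pu"] + 0.5 * delta
        if not (V_MIN <= predicted <= V_MAX):
            return False
    return True

local = {"voltage_pu": 1.02}
neighbors = [{"voltage_pu": 1.04}, {"voltage_pu": 0.99}]
ok = command_is_safe({"type": "tap_change", "steps": 1}, local, neighbors)
bad = command_is_safe({"type": "tap_change", "steps": 3}, local, neighbors)
```

The point of the distributed design is that this check uses measurements from adjacent RTUs, so a command that looks legitimate in isolation (correct format, authorized sender) can still be flagged as physically inconsistent.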