Author Supplied: In the last decades, architecture has emerged as a discipline within the domain of Information Technology (IT). A well-accepted definition of architecture comes from ISO/IEC 42010: "The fundamental organization of a system, embodied in its components, their relationships to each other and the environment, and the principles governing its design and evolution." Many levels and types of architecture have since been defined in the IT domain; we have scoped our work to two of them: enterprise architecture and software architecture. IT architecture work is demanding and challenging and includes, inter alia, identifying architecturally significant requirements (functional and non-functional), designing and selecting solutions for these requirements, and ensuring that the solutions are implemented according to the architectural design. To reflect on the quality of architecture work, we have taken ISO/IEC 8402 as a starting point. It defines quality as "the totality of characteristics of an entity that bear on its ability to satisfy stated requirements". We consider architecture work to be of high quality when it is effective, that is, when it satisfies the stated requirements. Although IT architecture has been introduced in many organizations, its elaboration does not always proceed without problems. In the domain of enterprise architecture, most practices are still in the early stages of maturity, with, for example, low scores on the focus areas 'Development of architecture' and 'Monitoring' (of the implementation activities). In the domain of software architecture, problems of the same kind are observed: architecture designs are frequently poor and incomplete, and architecture compliance checking is performed in practice on a limited scale only.
With our work, we intend to contribute to the advancement of architecture in the domain of IT and the effectiveness of architecture work by means of the development and improvement of supporting instruments and tools. In line with this intention, the main research question of this thesis is: How can the effectiveness of IT architecture work be evaluated and improved?
DOCUMENT
Neighborhood image processing operations on Field Programmable Gate Arrays (FPGAs) are memory-intensive: a large memory bandwidth is required to transfer the required pixel data from external memory to the processing unit. On-chip image buffers are employed to reduce this data-transfer rate. Conventional image buffers, implemented either with FPGA logic resources or with embedded memories, are resource-inefficient and quickly exhaust the limited FPGA resources. Consequently, hardware implementation of neighborhood operations becomes expensive, and integrating them into resource-constrained devices becomes unfeasible. This paper presents a resource-efficient FPGA-based on-chip buffer architecture. The proposed architecture utilizes the full capacity of a single Xilinx BlockRAM (BRAM36 primitive) for storing multiple rows of the input image. To deliver multiple pixels per clock in a user-defined scan order, an efficient duty-cycle-based memory-accessing technique is coupled with customized addressing circuitry. This accessing technique exploits the switching capabilities of the BRAM to read 4 pixels in a single clock cycle without degrading the system frequency. The addressing circuitry provides multiple pixels per clock in any user-defined scan order, enabling a wide range of neighborhood operations. While saving 83% of BRAM resources, the buffer architecture operates at 278 MHz on a Xilinx Artix-7 FPGA with an efficiency of 1.3 clocks/pixel. It is thus capable of fulfilling real-time image processing requirements for HD image resolution (1080 × 1920) at 103 fps.
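The row-buffering idea behind such an architecture can be illustrated in software. The sketch below is not the paper's BRAM duty-cycle circuit; it is a behavioral analogue, assuming a 3×3 neighborhood, that shows why buffering only k rows avoids re-fetching every pixel k times from external memory.

```python
from collections import deque

def sliding_windows(image, k=3):
    """Yield (row, col, k*k window) for each valid position, buffering
    only k rows at a time -- the software analogue of an on-chip line
    buffer that avoids re-fetching pixels from external memory."""
    rows = deque(maxlen=k)          # circular buffer of the last k rows
    for r, line in enumerate(image):
        rows.append(line)           # one external-memory fetch per row
        if len(rows) < k:
            continue                # buffer not yet full
        for c in range(len(line) - k + 1):
            window = [buf[c:c + k] for buf in rows]
            yield r, c, window

# Example: 3x3 mean filter over a tiny 4x4 test image
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
means = [sum(sum(w, [])) / 9 for _, _, w in sliding_windows(img)]
```

Each input row is fetched exactly once; the window logic reads only from the buffer, which is the property the paper's single-BRAM design pushes to its limit in hardware.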
DOCUMENT
Author supplied: Abstract—The growing importance and impact of new technologies are changing many industries. This effect is especially noticeable in the manufacturing industry. This paper explores a practical implementation of a hybrid architecture for the newest generation of manufacturing systems. The paper starts with a proposition that envisions reconfigurable systems that work together autonomously to create Manufacturing as a Service (MaaS). It introduces a number of problems in this area and derives the requirements for an architecture that can serve as the main research platform to solve several of these problems, including the need for safe and flexible system behaviour and the ability to reconfigure with limited interference to other systems within the manufacturing environment. The paper highlights the infrastructure and the architecture itself that can support these requirements. A concept system named Grid Manufacturing is then introduced, covering both the hardware and software systems that handle the challenges. The paper then moves to the design of the architecture and introduces all systems involved, including the specific hardware platforms that are controlled by the software platform called REXOS (Reconfigurable EQuipletS Operating System). The design choices show why it has become a hybrid platform using the Java Agent Development Framework (JADE) and the Robot Operating System (ROS). Finally, to validate REXOS, its performance is measured and discussed, showing that REXOS can serve as a practical basis for more specific research on robust autonomous reconfigurable systems and for application in Industry 4.0. The paper gives practical examples of how to successfully combine several technologies, intended to lead to faster adoption and a better business case for autonomous and reconfigurable systems in industry.
DOCUMENT
The last decade has seen an increasing industrial demand for computerized visual inspection. Applications rapidly become more complex, often with more demanding real-time constraints. However, from 2004 onwards the clock frequency of CPUs has not increased significantly. Computer vision applications have an increasing demand for processing power but are limited by the performance of sequential processor architectures. The only way to get more performance out of commodity hardware, such as multi-core processors and graphics cards, is parallel programming. This article focuses on the practical question: how can the processing time of vision algorithms be reduced through parallelization, in an economical way, while keeping them executable on multiple platforms?
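The standard data-parallel decomposition for pixel-independent vision operations can be sketched briefly. This is a minimal illustration, not the article's method: it splits an image into horizontal strips and processes them concurrently with a hypothetical per-pixel kernel. (With pure-Python kernels, threads are limited by the GIL; real vision kernels in C-backed libraries such as OpenCV release it, and process pools are the usual alternative.)

```python
from concurrent.futures import ThreadPoolExecutor

def threshold_strip(strip, t=128):
    # Stand-in for a per-pixel vision kernel (binary threshold).
    return [[255 if p >= t else 0 for p in row] for row in strip]

def parallel_threshold(image, workers=4):
    """Split the image into horizontal strips and process them
    concurrently -- the classic data-parallel decomposition for
    operations where each output pixel is independent."""
    n = max(1, len(image) // workers)
    strips = [image[i:i + n] for i in range(0, len(image), n)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(threshold_strip, strips)  # order preserved
    return [row for strip in results for row in strip]

img = [[50, 200], [130, 90], [255, 0], [128, 127]]
out = parallel_threshold(img, workers=2)
```

Because `Executor.map` preserves input order, reassembling the strips is a simple concatenation; only operations with cross-strip dependencies (e.g. neighborhood filters at strip borders) need overlap handling.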
DOCUMENT
Business Rule Management (BRM) is a means to make decision-making within organizations explicit and manageable. BRM functions within the context of an Enterprise Architecture (EA), whose aim is to enable the organization to achieve its strategic goals. Ideally, BRM and EA should be well aligned. This paper explores, through a study of case-study documentation, the BRM design choices that relate to EA and hence might influence an organization's ability to achieve a digital business strategy. We translate this exploration into five propositions relating BRM design choices to EA characteristics.
DOCUMENT
Twirre is a new architecture for mini-UAV platforms designed for autonomous flight in both GPS-enabled and GPS-deprived applications. The architecture consists of low-cost hardware and software components, with high-level control software enabling autonomous operation. Exchanging or upgrading hardware components is straightforward, and the architecture is an excellent starting point for building low-cost autonomous mini-UAVs for a variety of applications. Experiments with an implementation of the architecture are in development, and preliminary results demonstrate accurate indoor navigation.
MULTIFILE
Author supplied: "This paper describes an agent-based architecture for domotics. The architecture is based on requirements concerning expandability and hardware independence. The heart of the system is a multi-agent system, distributed over several platforms to open the possibility of tying the agents directly to the actuators, sensors and devices involved. This way a level of abstraction is created, and all intelligence of the system as a whole resides in the agents involved. A proof of concept has been built and functions as expected. By implementing real and simulated devices and an easy-to-use graphical interface, all kinds of compositions can be studied using this platform."
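The abstraction the abstract describes, agents wrapping devices behind a message interface, can be sketched in a few lines. This is a hypothetical minimal sketch, not the paper's distributed multi-agent platform: the class and message names are invented for illustration, and real deployments would distribute agents over several hosts.

```python
class DeviceAgent:
    """One agent per device: the agent is the only component that
    touches the (real or simulated) actuator, giving the rest of the
    system a hardware-independent view."""
    def __init__(self, name):
        self.name = name
        self.state = "off"

    def handle(self, message):
        # Interpret a command message and report the resulting state.
        if message in ("on", "off"):
            self.state = message
        return f"{self.name}:{self.state}"

class Home:
    """Registry that routes messages to agents -- the abstraction layer
    between system-level intelligence and concrete devices."""
    def __init__(self):
        self.agents = {}

    def register(self, agent):
        self.agents[agent.name] = agent

    def send(self, name, message):
        return self.agents[name].handle(message)

home = Home()
home.register(DeviceAgent("lamp"))
reply = home.send("lamp", "on")   # -> "lamp:on"
```

Because simulated and real devices share the same agent interface, compositions can be studied on the simulated side before touching hardware, which is the property the proof of concept exploits.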
DOCUMENT
Sustainable and agile manufacturing is expected of future generations of manufacturing systems. The goal is to create scalable, reconfigurable and adaptable manufacturing systems that are able to produce a range of products without new investment in manufacturing equipment. This requires a new approach combining high-performance software and intelligent systems. Earlier case studies have used hybrid and intelligent software systems, but mainly to improve logistic processes; they are not commonly used within the hardware control loop. This paper introduces a case study on a flexible and hybrid software architecture that uses prototype manufacturing machines called equiplets. These systems should be applicable in industry and able to adapt dynamically to changes in the product as well as in the manufacturing systems. This is done by creating self-configurable machines that use intelligent control software based on agent technology and computer vision. The requirements and resulting technologies are discussed using simple reasoning and analysis, leading to a basic design of a software control system based on a hybrid distributed control system.
DOCUMENT
Twirre V2 is the evolution of an architecture for mini-UAV platforms that allows automated operation in both GPS-enabled and GPS-deprived applications. This second version separates mission logic, sensor data processing and high-level control, which results in reusable software components for multiple applications. The concept of a Local Positioning System (LPS) is introduced which, using sensor fusion, aids or automates the flying process in the way GPS currently does. For this, new sensors have been added to the architecture, and a generic sensor interface together with missions for landing and for following a line have been implemented. V2 introduces a modular software design, and new hardware has been coupled, showing the architecture's extensibility and adaptability.
DOCUMENT
The project STORE&GO aims to investigate all aspects of the integration of large-scale Power-to-Gas (PtG) at the European level, exploiting it as a means for long-term storage. One aspect that should be properly addressed is the beneficial impact that the integration of PtG plants may have on the electricity system. In the project framework, WP6 devoted its activities to investigating different aspects of the integration of PtG in the electricity grid, with the previously delivered reports. This deliverable focused in particular on how to integrate information about the facilities, replicating real-world conditions, into a simulation environment. For this, the concept of remote Physical Hardware-in-the-Loop (PHIL) has been used and exploited. Remote simulation with physical hardware appears to be an effective means for investigating new technologies for the energy transition, with the purpose of solving the issues related to the introduction of new Renewable Energy Sources (RES) into the electricity system. These solutions make the overall energy systems to be investigated much more complex than traditional ones, introducing new challenges to the research. In fact:
• the newly integrated technologies deal with different energy vectors and sectors, thus requiring interoperability and multidisciplinary analysis;
• the systems to be implemented are often large-scale energy systems, leading to enormously complicated simulation models;
• the facilities for carrying out the experiments require huge investments as well as suitable areas where they can be properly installed.
As a consequence, a single laboratory with limited expertise, hardware/software facilities and available data may not be able to secure satisfactory outcomes.
The solution is to share existing research infrastructures by virtually joining different distant laboratories or facilities. This results in improved simulation capabilities for large-scale systems, by decoupling them into subsystems to be run on distant targets, and in avoiding the replication of already existing facilities, by exploiting the remote hardware-in-the-loop concept for testing remote devices. Moreover, confidential information of one lab, whose sharing may be either not allowed or subject to long administrative authorization procedures, can be kept confidential by simulating models locally and exchanging with the partners only the proper data and simulation results through the co-simulation medium. Thanks to the realized method it is possible to analyse, in real time, renewable devices at remote power plants and place them in the loop of a local network simulation. The reported results show that the developed architecture is robust enough to be applied also at new renewable power plants. This opens the possibility of using the data for research purposes, but also of acting remotely on the infrastructure for particular tests (for example, acceptance tests).
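The confidentiality-preserving exchange described above can be sketched as a co-simulation loop. This is a hypothetical illustration, not the STORE&GO middleware: the class and method names are invented, and the remote plant is stubbed with noise standing in for real measurements. The point is that only setpoints and measurements cross the lab boundary, never the internal models.

```python
import random

class RemotePlant:
    """Stand-in for the remote physical hardware; in a real setup this
    would be a network client to the distant laboratory (hypothetical
    interface, the actual co-simulation medium is not shown here)."""
    def apply_setpoint(self, p_set_kw):
        # The hardware tracks the setpoint with some measurement noise.
        return p_set_kw + random.uniform(-0.5, 0.5)

def cosimulate(steps, p_schedule, plant):
    """Local grid model in the loop with a remote plant: each step sends
    only a power setpoint and receives only a measured power, so
    confidential model internals never leave either laboratory."""
    measured = []
    for k in range(steps):
        p_meas = plant.apply_setpoint(p_schedule[k])
        measured.append(p_meas)   # fed back into the local grid model
    return measured

schedule = [10.0, 12.0, 8.0]      # kW setpoints from the local model
trace = cosimulate(3, schedule, RemotePlant())
```

In the real architecture the `apply_setpoint` exchange would run over the co-simulation medium in (soft) real time, with the step period matched to the communication latency between the laboratories.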
DOCUMENT