The use of machine learning in embedded systems is an interesting topic, especially with the growing popularity of the Internet of Things (IoT). The capacity of a system, such as a robot, to self-localize is a fundamental requirement for its navigation and decision-making processes. This work studies the feasibility of running machine learning on a Raspberry Pi 4 Model B to solve the localization problem using images and fiducial markers (ArUco markers) in the context of the RobotAtFactory 4.0 competition. The approaches were validated in a realistically simulated scenario. Three algorithms were tested, and all proved to be a good solution for a limited amount of data. The results also show that, as the amount of data grows, only the Multi-Layer Perceptron (MLP) remains feasible for the embedded application, owing to the required training time and the resulting model size.
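As a hedged illustration of the kind of pipeline described above (not the authors' exact implementation), the sketch below regresses a robot pose from ArUco-derived image features with scikit-learn's MLPRegressor; the feature layout and the synthetic data are assumptions made only for the example.

```python
# Minimal sketch (not the paper's exact pipeline): regress a planar robot pose
# from ArUco-derived features with scikit-learn's MLPRegressor.
# The feature layout (pixel corners of one marker) and the random data are
# stand-ins; a real setup would use detections from the competition scenario.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0, 640, size=(2000, 8))   # stand-in for 4 marker corners (u, v)
y = rng.uniform(-2, 2, size=(2000, 3))    # stand-in for robot pose (x, y, theta)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)                       # small network keeps training cheap on a Pi-class CPU
print("test R^2:", mlp.score(X_te, y_te))
```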
The objective of this report is to present the results of the tests we conducted to evaluate the localization performance of the Spot robot across three scenarios, each featuring different translational and rotational velocities.
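As a hedged sketch of how such an evaluation can be scored (the report's own metric is not specified here), the snippet below computes the translational RMSE between an estimated and a ground-truth trajectory; the array names, synthetic data, and assumption of already-aligned timestamps are illustrative.

```python
# Hedged sketch of one way to score localization accuracy per scenario:
# translational RMSE between estimated and ground-truth trajectories
# (timestamps assumed already aligned; data below is synthetic).
import numpy as np

def translational_rmse(est_xyz: np.ndarray, gt_xyz: np.ndarray) -> float:
    """est_xyz, gt_xyz: (N, 3) positions sampled at matching timestamps."""
    err = np.linalg.norm(est_xyz - gt_xyz, axis=1)   # per-sample Euclidean error
    return float(np.sqrt(np.mean(err ** 2)))

# Example usage with synthetic data standing in for one test scenario.
gt = np.cumsum(np.random.default_rng(1).normal(size=(100, 3)) * 0.01, axis=0)
est = gt + np.random.default_rng(2).normal(scale=0.02, size=gt.shape)
print(f"RMSE: {translational_rmse(est, gt):.3f} m")
```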
Autonomous driving on public roads requires localization precise to within a few centimeters. Even the best GNSS-based localization systems cannot always reach this level of precision, especially in urban environments, where the signal is disturbed by surrounding buildings and artifacts. Recent works have shown the advantage of using maps as a precise, robust, and reliable basis for localization. Typical approaches use the current readings from the vehicle's sensors to estimate its position on the map. The approach presented in this paper exploits a short-range visual lane-marking detector and a dead-reckoning system to build a registry of the lane markings detected behind the vehicle over the last 240 m driven. This information is used to search the map for the most similar section and thereby determine the vehicle's localization in the map reference frame. Additional filtering yields a more robust localization estimate. The accuracy obtained is high enough to allow autonomous driving on a narrow road. The system uses a low-cost sensor architecture, and the algorithm is light enough to run on a low-power embedded architecture.
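A hedged sketch of the matching idea, not the paper's implementation: the 240 m registry is treated as a 1-D signature of lane-marking measurements and slid along the corresponding map profile, keeping the section with the smallest squared difference. The signature representation, array names, and toy data are assumptions.

```python
# Hedged sketch of the map-matching step: slide the recent lane-marking
# registry along the map profile and pick the best-matching section.
import numpy as np

def best_map_offset(registry: np.ndarray, map_profile: np.ndarray) -> int:
    """registry: (N,) recent lane-marking signature; map_profile: (M,) map signature, M >= N.
    Returns the start index in the map giving the best match."""
    n = len(registry)
    costs = [np.sum((map_profile[i:i + n] - registry) ** 2)
             for i in range(len(map_profile) - n + 1)]
    return int(np.argmin(costs))

# Toy example: the registry is a noisy copy of the map slice starting at index 300.
rng = np.random.default_rng(0)
map_profile = rng.normal(size=2000)
registry = map_profile[300:540] + rng.normal(scale=0.05, size=240)
print("estimated start index:", best_map_offset(registry, map_profile))
```

In the paper's setting, additional filtering over consecutive matches would then smooth out spurious minima before the localization is used for control.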
Drones have grown enormously in recent years in terms of the relevant technologies and applications, such as smart agriculture, transportation, inspection, logistics, surveillance, and interaction. Commercial solutions for deploying drones in different workplaces have therefore become a crucial demand for companies. Warehouses are one of the most promising industrial domains for drones, which can automate operations such as inventory scanning, transporting goods to the delivery lines, and on-demand area monitoring. On the other hand, deploying drones (or even mobile robots) in such challenging environments requires accurate state estimation in terms of position and orientation to allow autonomous navigation, because GPS signals are not available in warehouses: they are obstructed by the closed-sky areas and deflected by the building structures. Vision-based positioning systems are the most promising techniques for achieving reliable position estimation in indoor environments, thanks to their low-cost sensors (cameras), their use of dense environmental features, and their ability to operate both indoors and outdoors. This proposal therefore aims to address a crucial question for industrial applications together with our industrial partners: exploring the limitations of, and developing solutions towards, robust state estimation of drones in challenging environments such as warehouses and greenhouses. The results of this project will serve as the baseline for developing further navigation technologies, such as mapping, localization, docking, and maneuvering, towards the fully autonomous and safe deployment of drones in GPS-denied areas.
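As a hedged illustration of the kind of state estimation involved (not the proposal's method), the sketch below blends dead-reckoned drone motion with intermittent camera-based position fixes using a simple complementary filter; the gains, rates, and data are arbitrary example values.

```python
# Hedged illustration: fuse drone dead reckoning with intermittent
# vision-based position fixes using a simple complementary filter.
import numpy as np

def fuse(position, velocity, dt, camera_fix=None, alpha=0.1):
    """Predict with dead reckoning; correct toward a vision fix when available."""
    position = position + velocity * dt           # dead-reckoning prediction
    if camera_fix is not None:                    # vision-based correction
        position = (1.0 - alpha) * position + alpha * camera_fix
    return position

pos = np.zeros(3)
vel = np.array([0.5, 0.0, 0.0])                   # m/s, from the drone's odometry
for step in range(100):
    # A camera fix arrives every 10th step in this toy scenario.
    fix = np.array([0.5 * 0.02 * (step + 1), 0.0, 0.0]) if step % 10 == 0 else None
    pos = fuse(pos, vel, dt=0.02, camera_fix=fix)
print("fused position:", pos)
```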
The CARTS (Collaborative Aerial Robotic Team for Safety and Security) project aims to improve autonomous firefighting operations through a collaborative drone system. The system combines a sensing drone optimized for patrolling and fire detection with an action drone equipped for fire suppression. While current urban safety operations rely on manually operated drones that face significant limitations in speed, accessibility, and coordination, CARTS addresses these challenges by creating a system that enhances operational efficiency with minimal human intervention, building on previous research from the IFFS drone project. This feasibility study focuses on developing effective coordination between the sensing and action drones, implementing fire detection and localization algorithms, and establishing parameters for autonomous flight planning. Through this collaborative drone approach, we aim to significantly improve both fire detection and suppression capabilities. A critical aspect of the project is ensuring reliable and safe operation under various environmental conditions. The study explores the potential of a sensing drone with detection capabilities while investigating coordination mechanisms between the sensing and action drones. We will examine autonomous flight planning approaches and test initial prototypes in controlled environments to assess technical feasibility and safety considerations. If successful, this exploratory work will provide valuable insights for future research into autonomous collaborative drone systems, currently focused on firefighting, and could lead to larger follow-up projects that expand the concept to other safety and security applications.
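As a hedged example of a common fire-detection baseline (not necessarily the algorithm CARTS will adopt), the sketch below thresholds flame-like colors in HSV and reports the blob centroid as the image-space fire location; the threshold values and the synthetic frame are assumptions for illustration only.

```python
# Hedged sketch of a color-threshold fire-detection baseline: segment
# flame-like pixels in HSV and return their centroid in image coordinates.
import cv2
import numpy as np

def detect_fire_centroid(bgr_frame: np.ndarray):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Rough "flame" band: low hue (reds/oranges), high saturation and value.
    mask = cv2.inRange(hsv, (0, 120, 200), (35, 255, 255))
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                         # no candidate fire pixels in this frame
    return int(xs.mean()), int(ys.mean())   # pixel coordinates of the detection

# Synthetic frame with a bright orange patch standing in for a flame.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[100:140, 200:240] = (0, 128, 255)     # BGR orange
print("fire centroid (u, v):", detect_fire_centroid(frame))
```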
Automation is a key enabler for the required productivity improvement in the agrifood sector. After years of GPS-steering systems in tractors, mobile robots are starting to enter the market. Localization is one of the core functions these robots need to operate properly on fields and in orchards. GNSS (Global Navigation Satellite System) solutions like GPS provide cm-precision performance under open sky, but buildings, poles, and biomaterial may degrade system performance. In addition, certain areas do not provide a dependable communication link for the necessary GPS corrections, and geopolitical developments lead to jamming activities. Other means of localization are therefore required for robust operation. VSLAM (Visual Simultaneous Localization And Mapping) is a complex software approach that imitates the way we as humans learn to find our way in unknown environments. VSLAM uses camera input to detect features in the environment and to position the robot within that 3D environment while concurrently creating a map that is stored and compared against future encounters, allowing the robot to recognize known environments and continue building a complete, consistent map of the area covered by its movement. The technology also allows continuous updating of the map in environments that evolve over time, a specific advantage for agrifood use cases with growing crops and trees. The technology is, however, relatively new, as the required computational power only recently became available at an acceptable cost, and it has not been well explored for industrialized applications in fields and orchards. Orientate investigates the merits of open-source SLAM algorithms on fields (with Pixelfarming Robotics and RapAgra) and in an orchard (with Hillbird), preceded by simulations and an initial application on a HAN test vehicle driving in different terrains. The project learnings will be captured in educational material elaborating on VSLAM technology and its application potential in agrifood.
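A minimal sketch of the feature-tracking front end that feature-based VSLAM systems build on: detect ORB features in two consecutive frames and match them. The full back end (pose estimation, mapping, loop closure) is omitted, and the synthetic frames are stand-ins for real camera input.

```python
# Minimal sketch of a feature-based VSLAM front end: detect ORB features in
# consecutive frames and match them; pose estimation and mapping are omitted.
import cv2
import numpy as np

# Synthetic textured frame and a shifted copy standing in for two camera frames.
rng = np.random.default_rng(0)
frame0 = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)
frame1 = np.roll(frame0, shift=5, axis=1)        # simulate a small camera motion

orb = cv2.ORB_create(nfeatures=500)              # detector + binary descriptor
kp0, des0 = orb.detectAndCompute(frame0, None)
kp1, des1 = orb.detectAndCompute(frame1, None)

if des0 is not None and des1 is not None:
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des0, des1), key=lambda m: m.distance)
    print(f"{len(matches)} feature matches between consecutive frames")
else:
    print("no features detected in one of the frames")
```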