Privacy concerns can make camera-based object classification unsuitable for robot navigation. To address this problem, we propose a novel object classification system for mobile robots that uses only a 2D-LiDAR sensor. The proposed system enables semantic understanding of the environment by applying the YOLOv8n model to classify objects such as tables, chairs, cupboards, walls, and door frames using only data captured by the 2D-LiDAR sensor. The experimental results show that the resulting YOLOv8n model achieved an accuracy of 83.7% in real-time classification running on a Raspberry Pi 5, despite lower accuracy when classifying door frames and walls. This validates our proposed approach as a privacy-friendly alternative to camera-based methods and shows that it can run on small computers onboard mobile robots.
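Feeding a 2D-LiDAR scan to an image-based detector such as YOLOv8n requires rasterizing the scan first. The sketch below shows one plausible preprocessing step, assuming a polar scan (range per beam) projected into a bird's-eye occupancy image; the grid size, range limit, and projection scheme are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def scan_to_image(ranges, angle_min=-np.pi, angle_max=np.pi,
                  max_range=5.0, size=64):
    """Project a 2D-LiDAR scan into a square bird's-eye occupancy image.

    ranges: 1D sequence of beam distances in metres; invalid returns
            may be inf or nan and are dropped.
    Returns a (size, size) uint8 image with hit cells set to 255,
    the robot sitting at the image centre.
    """
    ranges = np.asarray(ranges, dtype=float)
    angles = np.linspace(angle_min, angle_max, len(ranges))
    valid = np.isfinite(ranges) & (ranges > 0) & (ranges <= max_range)
    # Polar -> Cartesian coordinates in metres.
    x = ranges[valid] * np.cos(angles[valid])
    y = ranges[valid] * np.sin(angles[valid])
    # Metres -> pixel indices, with [-max_range, max_range] mapped to [0, size-1].
    scale = (size - 1) / (2 * max_range)
    col = np.clip(((x + max_range) * scale).astype(int), 0, size - 1)
    row = np.clip(((y + max_range) * scale).astype(int), 0, size - 1)
    img = np.zeros((size, size), dtype=np.uint8)
    img[row, col] = 255
    return img
```

An image produced this way could then be passed to a YOLOv8n model trained on similarly rasterized scans; the exact input resolution and normalization the authors used are not stated in the abstract.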
DOCUMENT
In mobile robotics, LASER scanners have a wide spectrum of indoor and outdoor applications, both in structured and unstructured environments, due to their accuracy and precision. Most works that use this sensor adopt their own data representation and case-specific modeling strategies, with no common formalism. To address this issue, this manuscript presents an analytical approach for the identification and localization of objects using 2D LiDARs. Our main contribution lies in formally defining LASER sensor measurements and their representation, the identification of objects, their main properties, and their location in a scene. We validate our proposal with experiments in generic semi-structured environments common in autonomous navigation, and we demonstrate its feasibility for detecting and identifying multiple objects, strictly following its analytical representation. Finally, our proposal encourages and facilitates the design, modeling, and implementation of other applications that use LASER scanners as distance sensors.
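The core idea of identifying and localizing objects from a 2D-LiDAR scan can be sketched with a minimal segmentation pass: split the scan into point clusters at range discontinuities and summarize each cluster by its centroid. This is only an illustrative stand-in for the manuscript's analytical formalism; the 0.3 m jump threshold and the centroid summary are assumptions made here, not the paper's definitions.

```python
import math

def segment_scan(ranges, angle_increment, jump=0.3):
    """Split a 2D-LiDAR scan into clusters at range discontinuities
    and return one (x, y) centroid per cluster.

    ranges: per-beam distances in metres, beam i at angle i * angle_increment.
    jump: range difference (m) between consecutive beams that starts
          a new cluster (illustrative threshold).
    """
    if not ranges:
        return []
    # Polar -> Cartesian points in the sensor frame.
    points = [(r * math.cos(i * angle_increment),
               r * math.sin(i * angle_increment))
              for i, r in enumerate(ranges)]
    clusters, current = [], [points[0]]
    for prev, (i, r) in zip(ranges, enumerate(ranges[1:], start=1)):
        if abs(r - prev) > jump:   # discontinuity -> new object boundary
            clusters.append(current)
            current = []
        current.append(points[i])
    clusters.append(current)
    # Summarize each detected object by the centroid of its points.
    return [(sum(x for x, _ in c) / len(c),
             sum(y for _, y in c) / len(c)) for c in clusters]
```

For example, a scan whose ranges step from about 1 m to about 2 m mid-sweep yields two clusters, i.e. two candidate objects with distinct centroids.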
DOCUMENT
The increase in the number and complexity of criminal activities in our nation, together with a shortage of human resources in the safety and security domain, is putting extra pressure on emergency responders. Emergency responders are constantly confronted with sophisticated situations that urgently require professional, safe, and rapid handling to contain and conclude the situation and minimize the danger to the public and to the responders themselves. Recently, Dutch emergency responders have started to experiment with various types of robots to improve the responsiveness and effectiveness of their responses. One of these robots is Boston Dynamics' Spot robot dog, which is primarily appealing for its ability to move over difficult terrain. The deployment of the robot in real emergencies is in its infancy. The main challenge that the robot dog's operators face is the high workload: operating the robot itself demands their full attention. As a result, the professional acts entirely as a robot operator rather than as a domain expert who critically examines and addresses the main safety problems at hand. Therefore, there is an urgent request from these emergency response professionals to develop and integrate key technologies that enable the robot dog to operate more autonomously. In this project, we explore how to increase the autonomy level of the robot dog in order to reduce the operator's workload and, eventually, help the operator remain a domain expert. To this end, we will explore the robot's ability to autonomously 3D-map unknown confined areas. The results of this project will yield new practical knowledge and a follow-up project that will focus on further developing the technologies that increase the robot's autonomy for eventual deployment in operational environments. The project will also contribute directly to education through the involvement of students and lecturers.