This work describes the design, implementation, and validation of an autonomous gas leakage inspection robot. Centimeter-level navigation accuracy is achieved using RTK GNSS integrated through the ROS 2 and Nav2 frameworks. The proposed solution has been validated successfully in terms of both navigation accuracy and gas detection capabilities. The approach has the potential to effectively address the increasing demand for inspections of the gas grid.
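As an illustration of how a Nav2-based goal is typically issued from application code, the sketch below sends a single goal pose through the nav2_simple_commander API. It is a minimal sketch rather than the system described here; the frame name, the coordinates, and the assumption that RTK GNSS fixes are already fused into the map frame (for example via robot_localization) are illustrative, and the gas-sensor readout that would run alongside this loop is omitted.

```python
# Minimal sketch: sending one navigation goal to a running Nav2 stack.
# Assumes the global "map" frame is georeferenced from RTK GNSS fixes
# (e.g. fused via robot_localization); coordinates are placeholders.
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator

def main():
    rclpy.init()
    navigator = BasicNavigator()
    navigator.waitUntilNav2Active()  # wait for the Nav2 servers to come up

    goal = PoseStamped()
    goal.header.frame_id = 'map'
    goal.header.stamp = navigator.get_clock().now().to_msg()
    goal.pose.position.x = 12.5      # placeholder inspection waypoint (m)
    goal.pose.position.y = -3.0
    goal.pose.orientation.w = 1.0    # face along the +x axis

    navigator.goToPose(goal)
    while not navigator.isTaskComplete():
        feedback = navigator.getFeedback()  # e.g. distance remaining
    print('Result:', navigator.getResult())
    navigator.lifecycleShutdown()

if __name__ == '__main__':
    main()
```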
Introduction: Visually impaired people experience trouble with navigation and orientation due to their weakened ability to rely on eyesight to monitor the environment [1][2]. Smartphones such as the iPhone are already popular navigation devices among the visually impaired [3]. We explored whether an iPhone application that responds to Bluetooth beacons to inform the user about their environment could aid the visually impaired in navigating an urban environment.

Method: We tested the implementation in an urban environment with visually impaired people, using the route from the Amsterdam Bijlmer train station to the Royal Dutch Visio office. Bluetooth beacons were attached at a height of two meters to lampposts and traffic signs along the route to give the user instructions via a custom-made iPhone app. Three obstacle types were identified and implemented in the app: a road crossing with traffic lights, a car parking entrance, and objects blocking the pathway, such as stairs. Based on the work of Atkin et al. [5] and Havik et al. [6], at each obstacle the beacon triggers the app to present important information about the surroundings: potential hazards nearby, how to navigate around or through the obstacle, and what the next obstacle will be. The information is presented using pictures of the environment and instructions in text and voice, based on Giudice et al. [4]. The application uses Apple's accessibility features to communicate the instructions through the VoiceOver screen reader. The app allows the user to preview the route, to prepare for upcoming obstacles and landmarks. Finally, users can customize the app by specifying the amount of detail in the images and information it presents. To determine whether the app is more useful to the participants than their current navigational method, participants walked the route both with and without the application. When walking with the app, participants were guided by the app; when walking without it, they used their own navigational method. During both walks a supervisor ensured the safety of the participant. After each obstacle, participants were asked how safe they felt, on a five-point Likert scale where one stood for "feeling very safe" and five for "feeling very unsafe". Qualitative feedback on the usability of the app was collected using the think-aloud method during walking and by interview after walking.

Results: Five visually impaired people participated, one female and four males, with ages ranging from 30 to 78 and varying levels of visual limitation. Three participants were familiar with the route and two walked it for the first time. After each obstacle participants rated how safe they felt on the five-point Likert scale. We normalized the results by subtracting the scores of the walk without the app from the scores of the walk with the app; the average over all participants is shown in figure 2. When passing the traffic light halfway along the route, participants felt safer with the app than without it. Summarizing the qualitative feedback, all participants indicated feeling supported by the app. They found the type of instructions ideal for walking and learning new routes. Of the five participants, three found the length of the instructions appropriate and two found them too long; the latter would prefer the detailed instructions to be split into a short instruction plus the option to request more detail. They felt that a detailed instruction gave too much information in a hazardous environment such as a crossing. Two participants found the orientation-focused information unnecessary, while three liked knowing their surroundings.

Conclusion and discussion: Regarding the safety questions, we see that participants felt safer with the app, especially when crossing the road with traffic lights. We believe this large difference compared with the other obstacles is due to the crossing being considered more dangerous; this is reflected in the feedback requesting less detailed information at such locations. All participants indicated feeling supported and at ease with our application, stating they would use it when walking new routes. Given the small sample size, we consider our results an indication that the app can be of help and a good starting point for further research on guiding people through an urban environment using beacons.
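For concreteness, here is a hypothetical sketch of the beacon-to-instruction mapping described in the Method section above. All names in it are illustrative, and it is platform-neutral Python rather than code from the actual iPhone app, which communicates its instructions through Apple's VoiceOver accessibility features.

```python
# Hypothetical sketch of a beacon-to-instruction mapping; the types,
# route table, and function names are illustrative, not the app's code.
from dataclasses import dataclass
from enum import Enum

class ObstacleType(Enum):
    CROSSING = "road crossing with traffic lights"
    PARKING_ENTRANCE = "car parking entrance"
    PATH_OBSTRUCTION = "object blocking the pathway"

@dataclass
class Instruction:
    short_text: str     # brief cue, spoken first
    detailed_text: str  # extra detail, only if the user asked for it
    image_path: str     # photo of the surroundings shown in the app
    next_obstacle: str  # preview of what comes next on the route

# One record per beacon identifier along the route (illustrative data).
ROUTE = {
    "beacon-01": (ObstacleType.CROSSING, Instruction(
        short_text="Traffic lights ahead. Button pole on your right.",
        detailed_text="Cross straight over; the curb cut is slightly left.",
        image_path="img/crossing_01.jpg",
        next_obstacle="Car parking entrance after 50 meters.")),
}

def on_beacon_detected(beacon_id: str, verbose: bool) -> str:
    """Return the text the screen reader should speak for this beacon."""
    obstacle, instr = ROUTE[beacon_id]
    text = f"{obstacle.value}. {instr.short_text}"
    if verbose:  # user preference: amount of detail to present
        text += f" {instr.detailed_text} Next: {instr.next_obstacle}"
    return text

print(on_beacon_detected("beacon-01", verbose=False))
```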
Students increasingly have access to online recordings of the lectures they attend at university. The volume and length of these recorded lectures, however, make them difficult to navigate. Research shows that students primarily watch the recorded lectures while preparing for their exams. They do not watch the full recordings, but review only the parts that are relevant to them. While doing so, they often lack the mechanisms needed to efficiently locate the parts of the recorded lecture that they want to view. In this paper, we describe an experiment in which expert tagging is used as a means to facilitate the students' search. In the experiment, 255 students had the option to use tags to navigate 18 recorded lectures. We used the data tracked by the lecture capture system to analyze the students' use of the tags, and compared these data with those of students who did not use the tagging interface (TI). Results show that use of the TI increases over time: students use the TI more actively over time while reducing the amount of video that they view. The experiment also shows that students who use the TI obtain higher grades than students who use the regular interface.
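A minimal sketch of the kind of between-group grade comparison reported here follows; the file layout and column names are hypothetical, not the study's actual logging format or analysis pipeline.

```python
# Hypothetical sketch of comparing grades between tagging-interface (TI)
# users and non-users; CSV layout and column names are illustrative.
import pandas as pd
from scipy.stats import ttest_ind

# One row per student: a boolean used_ti flag, minutes of video viewed,
# and the final exam grade.
df = pd.read_csv("viewing_log.csv")  # columns: student_id, used_ti, minutes_viewed, grade

ti_users = df[df["used_ti"]]
non_users = df[~df["used_ti"]]

print("mean grade, TI users: ", ti_users["grade"].mean())
print("mean grade, non-users:", non_users["grade"].mean())

# Welch's two-sample t-test on grades between the two groups.
t_stat, p_value = ttest_ind(ti_users["grade"], non_users["grade"], equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```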
Drones have seen enormous growth in recent years, both in the underlying technologies and in applications such as smart agriculture, transportation, inspection, logistics, surveillance, and interaction. Commercial solutions for deploying drones in different workplaces have therefore become a crucial demand for companies. Warehouses are one of the most promising industrial domains for drones, which can automate operations such as inventory scanning, transporting goods to the delivery lines, and on-demand area monitoring. On the other hand, deploying drones (or even mobile robots) in such a challenging environment requires accurate state estimation of position and orientation to enable autonomous navigation: GPS signals are not available in warehouses, since these closed-sky areas obstruct the signals and structures deflect them. Vision-based positioning systems are the most promising technique for reliable position estimation in indoor environments, because they use low-cost sensors (cameras), exploit dense environmental features, and can operate both indoors and outdoors. This proposal therefore aims, together with our industrial partners, to explore the limitations of and develop solutions for robust state estimation of drones in challenging environments such as warehouses and greenhouses. The results of this project will serve as the baseline for developing further navigation technologies, such as mapping, localization, docking, and maneuvering, towards safe and fully autonomous deployment of drones in GPS-denied areas.
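As a rough illustration of what vision-based positioning involves, the sketch below estimates the relative camera motion between two frames from ORB feature matches and the essential matrix, using OpenCV. This is a generic monocular visual-odometry step under assumed calibration values, not the method to be developed in this project.

```python
# Minimal monocular visual-odometry step: estimate relative camera motion
# between two consecutive frames. A generic illustration of vision-based
# positioning; the camera intrinsics below are placeholder values for a
# calibrated camera, not from any real system.
import cv2
import numpy as np

K = np.array([[700.0,   0.0, 320.0],   # fx, skew, cx  (placeholders)
              [  0.0, 700.0, 240.0],   # fy, cy
              [  0.0,   0.0,   1.0]])

def relative_pose(frame1, frame2):
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)

    # Brute-force Hamming matching suits binary ORB descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC rejects outlier correspondences.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    # Recover rotation R and unit translation t; absolute scale is not
    # observable from a single monocular image pair.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t

if __name__ == "__main__":
    f1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
    f2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)
    R, t = relative_pose(f1, f2)
    print("rotation:\n", R, "\ntranslation direction:\n", t.ravel())
```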
In our vision, robots carry out tasks on the field autonomously. They can sow, harvest, remove weeds, and monitor and tend crops. This saves farmers valuable time on basic tasks. With such robots, no (or far fewer) pesticides are needed, and heavy machinery no longer drives across the land. This reduces soil compaction, so the land no longer needs to be ploughed (or not as deeply). Besides enormous fuel savings, this also improves soil quality and makes new forms of cultivation possible. Agricultural robots are under active development, but a number of challenges still have to be solved. One of them is fully autonomous, robust, and safe navigation: the robot must be able to drive without an operator.

The AgriNav project: Agricultural Navigation. In this project, Saxion collaborates with three pioneers in agricultural robotics in the Netherlands. The goal is to develop a solid overview of solutions to the navigation problem. To that end, we map out which products and frameworks exist and to what extent they can be used directly. Based on the findings, we weigh whether to purchase a navigation solution or to develop one ourselves, for example on the basis of existing open source projects. Part of this KIEM project is initiating follow-up projects, such as RAAK-mkb or RAAK-PRO.

Impact: The AgriNav project boosts the use of small autonomous self-driving robots in the agricultural sector, enabling new and more sustainable forms of farming. This fits the ambition of the Netherlands to lead in food production technology. The project strengthens the consortium's knowledge position in both the HTSM and AgriFood top sectors and in the NWA routes "Sustainable production of healthy and safe food" and "Smart Industry".
To feed our growing world population sustainably, we must look for future-proof forms of food production. We aim for arable and horticultural farming in which less is wasted, natural resources are spared, and soil ecology and biodiversity are strengthened. Where conventional arable farming leans on large, heavy machinery, chemical pesticides, and artificial fertilizer, this research focuses on the use of lighter and smaller agricultural robots. These enable new ways of cultivation, and heavy machines no longer drive across the land. As a result, there is less soil compaction and the land no longer needs to be ploughed. A number of agricultural robots are currently being developed in the Netherlands. These are inherently complex systems, and several challenges still have to be solved before these robots can enter the field. In this project we focus on the software needed to let the robot drive autonomously, that is, on its own. We want to extend and apply a proven framework so that it can be used on agricultural robots. In this project, Saxion collaborates with six pioneers in agricultural robotics in the Netherlands. A preceding project explored solution directions; in this project the missing links are developed. For navigation we use the Robot Operating System (ROS), the framework used worldwide by major robot builders. In this project we model and simulate the robots; missing components are developed, assembled, or configured; and finally the software is tested on the physical robots. The software is made publicly available as an open source project during development. With the results of this research, the time-to-market for new agricultural robots can be reduced drastically.
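A minimal sketch of how driving along field rows could be scripted on top of such a navigation stack follows, assuming the ROS 2 generation of the framework with Nav2 and its nav2_simple_commander API; the coordinates and the back-and-forth pattern are illustrative, not from the project.

```python
# Minimal sketch of waypoint navigation along crop rows, assuming a
# running Nav2 stack (ROS 2); waypoints and frame are placeholders.
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator

def make_pose(navigator, x, y):
    pose = PoseStamped()
    pose.header.frame_id = 'map'
    pose.header.stamp = navigator.get_clock().now().to_msg()
    pose.pose.position.x = x
    pose.pose.position.y = y
    pose.pose.orientation.w = 1.0
    return pose

def main():
    rclpy.init()
    navigator = BasicNavigator()
    navigator.waitUntilNav2Active()

    # End points of two crop rows, driven in a back-and-forth pattern.
    waypoints = [make_pose(navigator, x, y)
                 for x, y in [(0.0, 0.0), (50.0, 0.0), (50.0, 1.5), (0.0, 1.5)]]

    navigator.followWaypoints(waypoints)
    while not navigator.isTaskComplete():
        feedback = navigator.getFeedback()  # current waypoint index, etc.
    print('Result:', navigator.getResult())
    navigator.lifecycleShutdown()

if __name__ == '__main__':
    main()
```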