In May 2007, our Centre for Research in Intellectual Capital hosted the International Congress on Intellectual Capital: The future of business navigation. The Congress – which took place in Haarlem, The Netherlands – was attended by more than 140 participants from 23 countries. Based on almost 70 papers, we designed a conference program that consisted of more than 90 sessions. This special issue is based on a selection of the best papers of our conference.
DOCUMENT
For long flights, the cruise is the longest phase and the one in which the most fuel is consumed. An in-cruise optimization method has been implemented to calculate the optimal trajectory that reduces the flight cost. A three-dimensional grid has been created, coupling the lateral and vertical navigation profiles. Using a dynamic analysis of the wind, the aircraft can perform a horizontal deviation or change altitude via step climbs to reduce fuel consumption. As the number of waypoints and possible step climbs increases, the number of flight trajectories grows exponentially; thus, a genetic algorithm has been implemented to reduce the total number of calculated trajectories compared to an exhaustive search. The aircraft model has been obtained from a performance database, which is currently used in the commercial flight management system studied in this paper. A 5% average flight cost reduction has been obtained.
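The search space described above can be sketched as follows: a chromosome encodes one candidate cruise trajectory as a lateral grid index and a flight-level index per waypoint, and a genetic algorithm evolves a population of such trajectories instead of enumerating them all. This is a minimal illustration, not the paper's implementation; the cost function below is a stand-in for the performance-database and wind lookups, and the grid dimensions, flight levels, and GA parameters are assumptions.

```python
import random

# Hypothetical sketch: each gene is (lateral_offset, flight_level) at one waypoint.
N_WAYPOINTS = 10
LATERAL_OFFSETS = range(-2, 3)        # grid columns left/right of the reference track
FLIGHT_LEVELS = [340, 360, 380, 400]  # candidate cruise altitudes

def segment_cost(a, b):
    """Placeholder cost: penalize lateral deviation and step climbs.
    A real implementation would query the performance database and winds."""
    (lat_a, fl_a), (lat_b, fl_b) = a, b
    return 1.0 + 0.1 * abs(lat_b) + 0.5 * abs(
        FLIGHT_LEVELS.index(fl_b) - FLIGHT_LEVELS.index(fl_a))

def flight_cost(traj):
    return sum(segment_cost(traj[i], traj[i + 1]) for i in range(len(traj) - 1))

def random_trajectory():
    return [(random.choice(list(LATERAL_OFFSETS)), random.choice(FLIGHT_LEVELS))
            for _ in range(N_WAYPOINTS)]

def crossover(p1, p2):
    cut = random.randrange(1, N_WAYPOINTS)       # single-point crossover
    return p1[:cut] + p2[cut:]

def mutate(traj, rate=0.1):
    return [(random.choice(list(LATERAL_OFFSETS)), random.choice(FLIGHT_LEVELS))
            if random.random() < rate else gene for gene in traj]

def optimize(pop_size=50, generations=100):
    pop = [random_trajectory() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=flight_cost)
        elite = pop[: pop_size // 5]              # keep the best 20%
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=flight_cost)

best = optimize()
```

The exponential blow-up is visible in the numbers: an exhaustive search over this toy grid would evaluate (5 × 4)^10 ≈ 10^13 trajectories, while the GA evaluates only pop_size × generations candidates.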
MULTIFILE
Twirre is a new architecture for mini-UAV platforms designed for autonomous flight in both GPS-enabled and GPS-deprived applications. The architecture consists of low-cost hardware and software components. High-level control software enables autonomous operation. Exchanging or upgrading hardware components is straightforward, and the architecture is an excellent starting point for building low-cost autonomous mini-UAVs for a variety of applications. Experiments with an implementation of the architecture are in development, and preliminary results demonstrate accurate indoor navigation.
MULTIFILE
More and more people suffer from age-related eye conditions such as macular degeneration. One of the problems these people experience is navigation. A strategy shown by many juvenile visually impaired persons (VIPs) is using auditory information for navigation. It is therefore important to train age-related VIPs to use auditory information for navigation. To this end, the serious game HearHere was developed, available as a tablet application, to train the focused auditory attention of age-related VIPs and thereby enhance their use of auditory information for navigation. Players of the game are instructed to navigate virtually, as quickly as possible, to a specific sound, which requires focused auditory attention. In an experimental study, the effectiveness of the game in improving focused auditory attention was examined. Forty participants were included, all students of the University of Groningen with normal or corrected-to-normal vision. By including sighted participants, we could investigate whether someone who is used to relying on their vision could improve their focused auditory attention after playing HearHere. As a control, participants played a digital version of Sudoku; the order of playing the games was counterbalanced. Participants performed a dichotic listening task before playing any game, after playing the first game, and after playing the second game. Participants improved significantly more on the dichotic listening task after having played HearHere (p<.001) than after playing Sudoku (p=.040). This means the game indeed improves focused auditory attention, a skill necessary for navigating by sound. In conclusion, we recommend including the game in orientation and mobility programs, offering age-related VIPs the opportunity to practice the use of auditory information for navigation. We are currently working on a version that is suitable for actual use.
DOCUMENT
Privacy concerns can potentially make camera-based object classification unsuitable for robot navigation. To address this problem, we propose a novel object classification system using only a 2D-LiDAR sensor on mobile robots. The proposed system enables semantic understanding of the environment by applying the YOLOv8n model to classify objects such as tables, chairs, cupboards, walls, and door frames using only data captured by a 2D-LiDAR sensor. The experimental results show that the resulting YOLOv8n model achieved an accuracy of 83.7% in real-time classification running on a Raspberry Pi 5, despite having lower accuracy when classifying door frames and walls. This validates our proposed approach as a privacy-friendly alternative to camera-based methods and illustrates that it can run on small computers onboard mobile robots.
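Feeding 2D-LiDAR data to an image-based detector such as YOLOv8n requires converting each scan into an image first. The sketch below shows one plausible rasterization of polar range readings into a square occupancy image; the paper's exact preprocessing is not specified here, and the angular coverage, range limit, and image size are illustrative assumptions.

```python
import math

def scan_to_image(ranges, angle_min=-math.pi, angle_max=math.pi,
                  max_range=5.0, size=64):
    """Project polar 2D-LiDAR returns onto a square occupancy image.

    The sensor sits at the image center; each valid return marks one pixel.
    """
    img = [[0] * size for _ in range(size)]
    n = len(ranges)
    for i, r in enumerate(ranges):
        if not 0.0 < r < max_range:
            continue  # drop out-of-range or missing returns
        angle = angle_min + (angle_max - angle_min) * i / (n - 1)
        x = r * math.cos(angle)
        y = r * math.sin(angle)
        # Map metric coordinates [-max_range, max_range] to pixel indices.
        px = int((x + max_range) / (2 * max_range) * (size - 1))
        py = int((y + max_range) / (2 * max_range) * (size - 1))
        img[py][px] = 255
    return img

# Example: a synthetic 360-point scan with wall-like returns in one sector.
scan = [2.0] * 90 + [0.0] * 270
image = scan_to_image(scan)
```

The resulting single-channel image can then be passed (after replication to three channels and resizing) to a detector trained on such rasterized scans.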
DOCUMENT
Traditional turn-by-turn navigation approaches often do not provide sufficiently detailed information to help people with a visual impairment (PVI) successfully navigate through an urban environment. To provide PVI with clear and supportive navigation information, we created Sidewalk, a new wayfinding message syntax for mobile applications. Sidewalk proposes a consistent structure for detailed wayfinding instructions, short instructions and alerts. We tested Sidewalk with six PVI in the urban center of Amsterdam, the Netherlands. Results show that our approach to wayfinding was positively valued by the participants.
DOCUMENT
In this document we present the results of a test we conducted of SPOT with the RTK-GNSS and ROS2.
MULTIFILE
Introduction: Visually impaired people experience trouble with navigation and orientation due to their weakened ability to rely on eyesight to monitor the environment [1][2]. Smartphones such as the iPhone are already popular devices among the visually impaired for navigation [3]. We explored whether an iPhone application that responds to Bluetooth beacons to inform the user about their environment could aid the visually impaired in navigating an urban environment. Method: We tested the implementation in an urban environment with visually impaired people, using the route from the Amsterdam Bijlmer train station to the Royal Dutch Visio office. Bluetooth beacons were attached at two meters height to lampposts and traffic signs along a specified route to give the user instructions via a custom-made iPhone app. Three different obstacle types were identified and implemented in the app: a crossover with traffic signs, a car parking entrance, and objects blocking the pathway such as stairs. Based on the work of Atkin et al. [5] and Havik et al. [6], at each obstacle the beacon triggers the app to present important information about the surroundings, such as potential hazards nearby, how to navigate around or through obstacles, and information about the next obstacle. The information is presented using pictures of the environment and instructions in text and voice, based on Giudice et al. [4]. The application uses Apple's accessibility features to communicate the instructions via the VoiceOver screen reader. The app allows the user to preview the route, to prepare for upcoming obstacles and landmarks. Last, users can customize the app by specifying the amount of detail in the images and information the app presents. To determine whether the app is more useful for the participants than their current navigational method, participants walked the route both with and without the application. When walking with the app, participants were guided by the app.
When walking without the app they used their own navigational method. During both walks a supervisor ensured the safety of the participant. During both walks, after each obstacle, participants were asked how safe they felt, using a five-point Likert scale where one stood for "feeling very safe" and five for "feeling very unsafe". Qualitative feedback on the usability of the app was collected using the think-aloud method during walking and by interview after walking. Results: Five visually impaired people participated, one female and four males, ranging in age from 30 to 78 and with varying levels of visual limitation. Three participants were familiar with the route and two walked it for the first time. After each obstacle participants rated how safe they felt on a five-point Likert scale. We normalized the results by subtracting the scores of the walk without the app from the scores of the walk with the app. The average over all participants is shown in figure 2. When passing the traffic light halfway through the route, we see that the participants felt safer with the app than without it. Summarizing the qualitative feedback, we noticed that all participants indicated feeling supported by the app. They found the type of instructions ideal for walking and for learning new routes. Of the five participants, three found the length of the instructions appropriate and two found them too long; the latter would like the detailed instructions split into a short instruction plus an option for more detail. They felt that a detailed instruction gave too much information in a hazardous environment such as a crossover. Two participants found the information focused on orientation unnecessary, while three participants liked knowing their surroundings. Conclusion and discussion: Regarding the safety questions, we see that participants felt safer with the app, especially when crossing the road at traffic lights. We believe this large difference in comparison to the other obstacles is due to the crossover being considered more dangerous than the other obstacles. This is reflected in their feedback requesting less direct information at these locations. All participants indicated feeling supported by and at ease with our application, stating they would use the application when walking new routes. Because of the small sample size, we consider our results an indication that the app can be of help and a good starting point for further research on guiding people through an urban environment using beacons.
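The beacon-to-instruction mechanism described above can be sketched as a lookup from a detected beacon to an obstacle message, with the short/detailed split that participants requested. This is an illustrative sketch only: the beacon identifiers, obstacle entries, and instruction texts are invented for the example and are not taken from the actual app.

```python
# Hypothetical beacon-to-obstacle mapping; all entries are illustrative.
OBSTACLE_INFO = {
    "beacon-01": {
        "obstacle": "crossover with traffic signs",
        "short": "Traffic-light crossing ahead. Wait for the audible signal.",
        "detailed": ("Traffic-light crossing ahead. The pole with the "
                     "pedestrian button is one meter to your right. After "
                     "crossing, the next obstacle is a parking entrance."),
    },
    "beacon-02": {
        "obstacle": "car parking entrance",
        "short": "Parking entrance ahead. Cars may cross the sidewalk.",
        "detailed": ("Parking entrance ahead: cars may cross the sidewalk "
                     "from the left. Keep to the right side of the path."),
    },
}

def instruction_for(beacon_id, detail="short"):
    """Return the instruction to speak when a beacon is detected.

    `detail` reflects the participants' request to split messages into a
    short alert plus an optional detailed follow-up.
    """
    info = OBSTACLE_INFO.get(beacon_id)
    if info is None:
        return None  # unknown beacon: stay silent rather than guess
    return info[detail]
```

Defaulting to the short form and offering the detailed form on demand matches the feedback that detailed instructions carry too much information at hazardous locations such as crossings.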
DOCUMENT
Students increasingly have access to online recordings of the lectures they attend at universities. The volume and length of these recorded lectures, however, make them difficult to navigate. Research shows that students primarily watch the recorded lectures while preparing for their exams. They do not watch the full recorded lectures, but review only the parts that are relevant to them. While doing so, they often lack the mechanisms required to efficiently locate those parts of the recorded lecture that they want to view. In this paper, we describe an experiment in which expert tagging is used as a means to facilitate the students' search. In the experiment, 255 students had the option to use tags to navigate 18 recorded lectures. We used the data tracked by the lecture capture system to analyze the students' use of the tags. We compared these data to those of students who did not use the tagging interface (TI). Results show that use of the TI increases over time. Students use the TI more actively over time while reducing the amount of video that they view. The experiment also shows that students who use the TI score higher grades compared with students who use the regular interface.
LINK
Twirre V2 is the evolution of an architecture for mini-UAV platforms that allows automated operation in both GPS-enabled and GPS-deprived applications. This second version separates mission logic, sensor data processing and high-level control, resulting in reusable software components for multiple applications. The concept of a Local Positioning System (LPS) is introduced which, using sensor fusion, can aid or automate the flying process in the way GPS currently does. To this end, new sensors have been added to the architecture, and a generic sensor interface together with missions for landing and following a line have been implemented. V2 introduces a modular software design, and new hardware has been coupled, demonstrating the architecture's extensibility and adaptability.
DOCUMENT