Background: The benefit of an MR-only workflow compared to the current CT-based workflow for prostate radiotherapy is a reduction of systematic errors in the radiotherapy chain by 2–3 mm. At present, MRI is used for target delineation while CT is needed for position verification. In an MR-only workflow, an MRI-based synthetic CT (sCT) replaces the CT. Intraprostatic fiducial markers (FMs) are used as a surrogate for the position of the prostate, improving targeting. However, FMs are not visible on sCT. Therefore, a semi-automatic method for burning-in FMs on the sCT was developed. The accuracy of an MR-only workflow using semi-automatically burned-in FMs was assessed and compared to the CT/MR workflow.
Methods: Thirty-one prostate cancer patients receiving radiotherapy underwent an additional MR sequence (mDIXON) to create an sCT for MR-only workflow simulation. Three sources of geometric uncertainty in the CT/MR and MR-only workflows were investigated. First, to compare image registrations for target delineation, the inter-observer error (IOE) of FM-based CT-to-MR registrations and soft-tissue-based MR-to-MR registrations was determined in twenty patients. Secondly, the inter-observer variation of the resulting FM positions was determined in twenty patients. Thirdly, in 26 patients, CBCTs were retrospectively registered to the sCT with burned-in FMs and compared to CT-CBCT registrations.
Results: Image registration for target delineation showed a three times smaller IOE for the MR-only workflow compared to the CT/MR workflow. All observers agreed in correctly identifying all FMs in 18 out of 20 patients (90%). The IOE in the CC direction of the center of mass (COM) position of the markers was within the CT slice thickness (2.5 mm); the IOEs in the AP and RL directions were below 1.0 mm and 1.5 mm, respectively. Registrations for IGRT position verification in the MR-only workflow were equivalent to the CT/MR workflow in the RL, CC and AP directions, except for a significant difference in the random error of rotation.
Conclusions: An MR-only workflow using sCT with burned-in FMs is an improvement over the current CT/MR workflow, with a three times smaller inter-observer error in CT-MR registration and comparable CBCT registration results between CT and sCT reference scans.
Trial registry: The Medical Research Involving Human Subjects Act (WMO) applies to this study, which was approved by the Medical Ethics Review Committee of the Academic Medical Center. Registration number: NL65414.018.18. Date of registration: 21-08-2018.
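The inter-observer error of the marker center of mass reported above can be illustrated with a minimal NumPy sketch. The marker coordinates below are purely illustrative values (not study data), and computing the IOE as the per-axis standard deviation of the COM across observers is an assumption about the metric, not the study's exact definition.

```python
import numpy as np

# Illustrative sketch (not study data): positions of 3 fiducial markers as
# identified by 4 observers, in mm along the RL, AP and CC axes.
# Array shape: (observers, markers, axes).
fm = np.array([
    [[10.2, 45.1, 30.0], [-8.9, 50.3, 28.1], [1.0, 40.2, 35.5]],
    [[10.4, 45.0, 31.2], [-8.7, 50.5, 29.0], [1.2, 40.1, 36.8]],
    [[10.1, 45.3, 29.5], [-9.0, 50.1, 27.9], [0.9, 40.4, 34.9]],
    [[10.3, 44.9, 30.6], [-8.8, 50.4, 28.6], [1.1, 40.0, 36.1]],
])

# Center of mass (COM) of the marker set, per observer.
com = fm.mean(axis=1)                 # shape: (observers, 3)

# Inter-observer spread of the COM per axis (sample SD across observers).
ioe = com.std(axis=0, ddof=1)
print(dict(zip(["RL", "AP", "CC"], ioe.round(2))))
```

With these illustrative numbers, the CC spread is the largest of the three axes, mirroring the pattern in the abstract, where CC uncertainty is bounded by the CT slice thickness.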
Localization is a crucial skill in mobile robotics because the robot needs to make reasonable navigation decisions to complete its mission. Many approaches to localization exist, but artificial intelligence can be an interesting alternative to traditional localization techniques based on model calculations. This work proposes a machine learning approach to solve the localization problem in the RobotAtFactory 4.0 competition. The idea is to obtain the relative pose of an onboard camera with respect to fiducial markers (ArUco markers) and then estimate the robot pose with machine learning. The approaches were validated in simulation. Several algorithms were tested, and the best results were obtained with a Random Forest Regressor, with errors on the millimeter scale. The proposed solution achieves results as good as the analytical approach to the localization problem in the RobotAtFactory 4.0 scenario, with the advantage of not requiring explicit knowledge of the exact positions of the fiducial markers, as the analytical approach does.
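The regression step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the number of markers, the feature layout (camera-relative marker pose plus marker id), and the synthetic pose-to-observation relation are all assumptions; in a real pipeline the features would come from ArUco detection and pose estimation (e.g. OpenCV's aruco module).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: each sample is the camera-relative pose of one
# detected marker (x, y, yaw) plus its id; the target is the robot pose.
n = 2000
marker_id = rng.integers(0, 8, size=n)            # assume 8 markers on the field
robot_pose = rng.uniform(-1.0, 1.0, size=(n, 3))  # x, y [m], theta [rad]

# Simulated relation between robot pose and observed marker pose (with noise);
# in practice these features come from the camera pipeline, not a formula.
marker_obs = np.column_stack([
    robot_pose[:, 0] + 0.1 * marker_id + rng.normal(0, 0.002, n),
    robot_pose[:, 1] - 0.05 * marker_id + rng.normal(0, 0.002, n),
    robot_pose[:, 2] + rng.normal(0, 0.002, n),
])
X = np.column_stack([marker_obs, marker_id])

# Multi-output regression: observation features -> full robot pose.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:1500], robot_pose[:1500])

# Held-out absolute error per pose component.
err = np.abs(model.predict(X[1500:]) - robot_pose[1500:])
print(err.mean(axis=0))
```

Note that the regressor never receives the markers' world positions explicitly; it learns the mapping from examples, which is the advantage the abstract highlights over the analytical approach.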
The use of machine learning in embedded systems is an interesting topic, especially with the growing popularity of the Internet of Things (IoT). The capacity of a system, such as a robot, to self-localize is a fundamental skill for its navigation and decision-making processes. This work focuses on the feasibility of using machine learning on a Raspberry Pi 4 Model B to solve the localization problem using images and fiducial markers (ArUco markers) in the context of the RobotAtFactory 4.0 competition. The approaches were validated in a realistically simulated scenario. Three algorithms were tested, and all were shown to be good solutions for a limited amount of data. The results also show that as the amount of data grows, only a Multi-Layer Perceptron (MLP) remains feasible for the embedded application, due to the required training time and the resulting size of the model.
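The model-size argument can be made concrete with a small sketch. This is not the paper's setup: the dataset here is synthetic, and the hidden-layer sizes are an assumption. The point it illustrates is that an MLP has a fixed parameter count, so its serialized size stays constant as training data grows, whereas a tree ensemble grows with the data.

```python
import pickle
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic stand-in for the marker-observation -> robot-pose dataset;
# on the Pi, real features would come from ArUco detection on camera images.
X = rng.uniform(-1.0, 1.0, size=(5000, 4))
y = np.column_stack([X[:, 0] + 0.1 * X[:, 3],
                     X[:, 1] - 0.05 * X[:, 3],
                     X[:, 2]])

# A small MLP: its parameter count (and hence serialized size) is fixed by
# the architecture, independent of how many training samples are used.
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
mlp.fit(X[:4000], y[:4000])

mae = np.abs(mlp.predict(X[4000:]) - y[4000:]).mean()
size_kb = len(pickle.dumps(mlp)) / 1024
print(f"MAE: {mae:.4f}, serialized model size: {size_kb:.1f} kB")
```

A model of a few kilobytes loads and runs comfortably within the memory budget of a Raspberry Pi 4, which is the feasibility constraint the abstract refers to.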
Due to the exponential growth of e-commerce, automated inventory management is crucial for having, among other things, up-to-date stock information. There have been recent developments in using drones equipped with RGB cameras for scanning and counting inventory in warehouses. Thanks to their reach, agility and speed, drones can speed up the inventory process and keep its data current. To benefit from this drone technology, warehouse owners and inventory service providers are actively exploring ways to maximize its utilization by extending its capabilities in long-term autonomy, collaboration, and operation at night and on weekends. This feasibility study is aimed at investigating the possibility of developing a robust, reliable and resilient group of aerial robots with long-term autonomy as part of an effectively automated warehouse inventory system, providing a competitive advantage in a highly dynamic and competitive market. To that end, the main research question is: "Which technologies need to be further developed to enable collaborative drones with long-term autonomy to conduct warehouse inventory at night and on weekends?" This research focuses on user requirement analysis, complete system architecting including functional decomposition, concept development, technology selection, proof-of-concept demonstrator development and the definition of follow-up projects.
In the past decade, smaller drones in particular have started to claim their share of the sky due to their potential applications in the civil sector as flying eyes, noses and, very recently, flying hands. Network partners from various application domains (safety, agro, energy and logistics) are curious about the next leap in this field, namely collaborative Sky-workers. Their main practical question is essentially: "Can multiple small drones together transport a large object at high altitude in outdoor applications?" The industrial partners, together with Saxion and RUG, will conduct a feasibility study to investigate whether it is possible to develop these collaborative Sky-workers and to identify which possibilities this new technology will offer. Design science research methodology, which focuses on solution-oriented applied research involving multiple iterations with rigorous evaluations, will be used to research the feasibility of the main technological building blocks:
• Accurate localization based on onboard sensors.
• A safe and optimal interaction controller for collaborative aerial transport.
Within this project, the first proofs-of-concept will be developed. The results of this project will be used to expand the existing network and to formulate a bigger project addressing additional critical aspects, in order to develop a complete framework for collaborative drones.