This work describes the design, implementation, and validation of an autonomous gas-leakage inspection robot. Navigation with centimeter-level accuracy is achieved using RTK GNSS integrated with the ROS 2 and Nav2 frameworks. The proposed solution has been validated successfully in terms of both navigation accuracy and gas-detection capability. The approach has the potential to effectively address the increasing demand for inspections of the grid.
MULTIFILE
This manual focuses on the initial phase of a (digital) publishing process. It offers methods to critically examine the narrative structures of content and to explore alternative conceptions of a publication. By asking how modular publishing can be used to create, edit, and structure content, it resists a monolithic storyline and embraces multiple perspectives.
This method paper presents a template solution for text mining of scientific literature using the R tm package. Literature to be analyzed can be collected manually or automatically using the code provided with this paper. Once the literature is collected, the three steps for conducting text mining can be performed as outlined below:
• loading and cleaning of text from articles,
• processing, statistical analysis, and clustering, and
• presentation of results using generalized and tailor-made visualizations.
The text mining steps can be applied to a single document, multiple documents, or time-series groups of documents. References are provided to three published, peer-reviewed articles that use the presented text mining methodology. The main advantages of our method are: (1) its suitability for both research and educational purposes, (2) compliance with the Findable, Accessible, Interoperable, and Reusable (FAIR) principles, and (3) the availability of code and example data on GitHub under the open-source Apache 2.0 license.
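The three steps above can be sketched in miniature. The following is a minimal Python analogue of the workflow, not the paper's actual R tm code: the stop-word list, example documents, and cosine-similarity "analysis" are illustrative assumptions only.

```python
import re
from collections import Counter
from math import sqrt

# Tiny illustrative stop-word list (the real workflow would use a full one).
STOP_WORDS = {"the", "a", "an", "and", "of", "in", "to", "is", "for"}

def clean(text):
    """Step 1: lower-case, strip punctuation and numbers, drop stop words."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]

def term_counts(tokens):
    """Step 2a: build a term-frequency vector for one document."""
    return Counter(tokens)

def cosine(a, b):
    """Step 2b: cosine similarity between two term-frequency vectors,
    the basic quantity behind document clustering."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical example corpus.
docs = [
    "Gas sensors detect methane leaks in the grid.",
    "Methane leak detection relies on gas sensors.",
    "Knowledge graphs index scholarly book content.",
]
vectors = [term_counts(clean(d)) for d in docs]

# Step 3: present the result as a simple pairwise similarity report.
for i in range(len(docs)):
    for j in range(i + 1, len(docs)):
        print(f"doc{i} vs doc{j}: {cosine(vectors[i], vectors[j]):.2f}")
```

In the published method these steps are carried out with tm's `Corpus`, `tm_map`, and `DocumentTermMatrix` machinery and richer visualizations; the sketch only mirrors their logical order.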
The scientific publishing industry is rapidly transitioning towards information analytics. This shift disproportionately benefits large companies, which can afford to deploy digital technologies such as knowledge graphs to index their contents and build advanced search engines. Small and medium publishing enterprises, by contrast, often lack the resources to fully embrace such digital transformations. This divide is acutely felt in the arts, humanities, and social sciences. Scholars in these disciplines are largely unable to benefit from modern scientific search engines, because their publishing ecosystem consists of many specialized businesses which cannot, individually, develop comparable services. We propose to start bridging this gap by democratizing access to knowledge graphs – the technology underpinning modern scientific search engines – for small and medium publishers in the arts, humanities, and social sciences. Their contents, largely made of books, already contain rich, structured information – such as references and indexes – which can be automatically mined and interlinked. We plan to develop a framework for extracting this structured information and creating knowledge graphs from it. Wherever possible, we will consolidate existing, proven technologies into a single codebase instead of reinventing the wheel. Our consortium is a collaboration between researchers in scientific information mining, Odoma, an AI consulting company, and the publisher Brill, which shares its data and expertise. Brill will be able to immediately put the project results to use to improve its internal processes and services. Furthermore, our results will be published as open source under a commercially friendly license, in order to foster the adoption and future development of the framework by other publishers.
Ultimately, our proposal is an example of industry innovation where, instead of scaling up, we scale wide by creating a common resource that many small players can then use and expand upon.
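The mining-and-interlinking idea can be sketched in a few lines of Python. This is a hypothetical illustration only – the reference pattern, the `this_book` subject, and the predicate names are assumptions, not the project's actual extraction pipeline:

```python
import re

def mine_references(page_text):
    """Mine 'Author (Year). Title.' style reference strings into
    (subject, predicate, object) triples for a toy knowledge graph.
    The pattern is a deliberate simplification for illustration."""
    pattern = re.compile(
        r"(?P<author>[A-Z][a-z]+)\s+\((?P<year>\d{4})\)\.\s+(?P<title>[^.]+)\."
    )
    triples = []
    for m in pattern.finditer(page_text):
        work = f"{m.group('author')}_{m.group('year')}"
        triples.append(("this_book", "cites", work))          # link book -> cited work
        triples.append((work, "has_title", m.group("title")))  # attach metadata
    return triples

# Hypothetical bibliography text as it might appear in a digitized book.
bibliography = (
    "Smith (2019). Mining structured data from books. "
    "Jones (2021). Knowledge graphs for the humanities."
)
graph = mine_references(bibliography)
for s, p, o in graph:
    print(s, p, o)
```

Because every mined reference becomes a shared node, triples extracted from many books interlink automatically wherever they cite the same work – the property that lets many small publishers pool one common graph.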
A huge amount of data is being generated, collected, analysed, and distributed at a fast pace in our daily lives. This data growth requires efficient techniques for analysing and processing high volumes of data, for which effective privacy preservation is a crucial challenge and even a key necessity, considering the privacy laws that have recently come into effect (e.g., the EU General Data Protection Regulation, GDPR). Companies and organisations need scalable and usable privacy-preserving techniques to protect personal data in their real-world applications. This research focuses on efficient and usable privacy-preserving techniques in data processing. The research will be conducted in the following directions:
- exploring state-of-the-art techniques,
- designing and running experiments on existing tool-sets,
- evaluating the experimental results against real-life case studies, and
- improving the techniques and/or the tools to meet the requirements of the companies.
The proposal will provide results for:
- education: courses, lectures, student projects, and solutions for privacy-preservation challenges within educational institutes;
- companies: tool-evaluation insights based on case studies and proposals for addressing current challenges;
- the research centre (i.e., Creating 010): expanded expertise in privacy-protection technologies and the publication of technical reports and papers.
This research will be sustained by actively pursuing follow-up projects.
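One family of techniques in scope here is pseudonymization, which GDPR names explicitly. The following is a minimal sketch of keyed pseudonymization with HMAC-SHA-256, not the project's tool-set: the key, field names, and record are invented for illustration, and a production system would manage the key in a proper key store.

```python
import hashlib
import hmac

# Placeholder key for illustration; in practice this comes from a key store.
SECRET_KEY = b"example-secret-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA-256).
    The mapping is stable for a given key, so records can still be
    joined, but it cannot be reversed without the key."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

# Hypothetical record with two direct identifiers.
record = {"name": "Alice Example", "email": "alice@example.com", "purchase": "book"}
safe_record = {
    "name": pseudonymise(record["name"]),
    "email": pseudonymise(record["email"]),
    "purchase": record["purchase"],  # non-identifying field kept as-is
}
print(safe_record)
```

Pseudonymized data remains personal data under GDPR (the key holder can re-identify it), which is exactly the kind of usability-versus-protection trade-off the proposed case studies would evaluate.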