An illustrative non-technical review of our recent journal paper “Automatic crack classification and segmentation on masonry surfaces using convolutional neural networks and transfer learning” was published on Towards Data Science. While new technologies have changed almost every aspect of our lives, the construction field seems to be struggling to catch up. Currently, the structural condition of a building is still predominantly inspected manually. In simple terms, even today, when a structure needs to be inspected for damage, an engineer will manually check all the surfaces and take numerous photos while keeping notes on the position of any cracks. A few more hours then need to be spent at the office sorting all the photos and notes and trying to compile them into a meaningful report. Clearly, this is a laborious, costly, and subjective process. On top of that, safety concerns arise, since parts of structures may have restricted access or be difficult to reach. The Golden Gate Bridge, for example, needs to be inspected periodically. Until very recently, this meant that specially trained people would climb across this picturesque structure and check every inch of it.
LINK
BACKGROUND: Our previously published CUDA-only application PaSWAS for Smith-Waterman (SW) sequence alignment of any type of sequence on NVIDIA-based GPUs is platform-specific and has therefore been adopted less widely than it could have been. The OpenCL language is supported more widely and allows use on a variety of hardware platforms. Moreover, there is a need to promote the adoption of parallel computing in bioinformatics by making its use and extension simpler through better application of high-level languages commonly used in bioinformatics, such as Python.
RESULTS: The novel application pyPaSWAS presents the parallel SW sequence alignment code fully packaged in Python. It is a generic SW implementation running on several hardware platforms with multi-core systems and/or GPUs that provides accurate sequence alignments which can also be inspected for alignment details. Additionally, pyPaSWAS supports affine gap penalties. Python libraries are used for automated system configuration, I/O and logging. In this way, the Python environment will stimulate further extension and use of pyPaSWAS.
CONCLUSIONS: pyPaSWAS presents an easy Python-based environment for accurate and retrievable parallel SW sequence alignments on GPUs and multi-core systems. The strategy of integrating Python with high-performance parallel compute languages to create a developer- and user-friendly environment should be considered for other computationally intensive bioinformatics algorithms.
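As a reference point for the algorithm that pyPaSWAS parallelises, the following is a minimal pure-Python sketch of SW local alignment with an affine gap penalty (Gotoh's three-matrix recurrence). The scoring parameters are illustrative, not pyPaSWAS defaults, and no parallelisation is shown.

    # Minimal sketch: Smith-Waterman local alignment score with affine gaps.
    # Scoring values are illustrative assumptions, not pyPaSWAS settings.
    def smith_waterman_affine(a, b, match=2, mismatch=-1, gap_open=3, gap_extend=1):
        n, m = len(a), len(b)
        H = [[0] * (m + 1) for _ in range(n + 1)]  # best local score ending at (i, j)
        E = [[0] * (m + 1) for _ in range(n + 1)]  # alignments ending in a gap in a
        F = [[0] * (m + 1) for _ in range(n + 1)]  # alignments ending in a gap in b
        best = 0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                E[i][j] = max(H[i][j-1] - gap_open, E[i][j-1] - gap_extend)
                F[i][j] = max(H[i-1][j] - gap_open, F[i-1][j] - gap_extend)
                s = match if a[i-1] == b[j-1] else mismatch
                H[i][j] = max(0, H[i-1][j-1] + s, E[i][j], F[i][j])  # local: clamp at 0
                best = max(best, H[i][j])
        return best

    print(smith_waterman_affine("GATTACA", "GCATGCU"))

The affine scheme charges gap_open for starting a gap and the cheaper gap_extend for lengthening it, which is why the two extra matrices E and F are needed.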
DOCUMENT
Machine learning models have proven to be reliable methods for classification tasks. However, little research has previously been done on classifying dwelling characteristics based on smart meter and weather data. Gaining insight into dwelling characteristics can help create or improve policies for building new dwellings to the NZEB (nearly zero-energy building) standard. This paper compares different machine learning algorithms and the methods used to implement the models correctly, including data pre-processing, model validation and evaluation. Smart meter data provided by Groene Mient was used to train several machine learning algorithms, and the resulting models were compared on their performance. The results showed that a Recurrent Neural Network (RNN) performed best, with 96% accuracy. Cross-validation was used to validate the models, with 80% of the data used for training and 20% for testing. Evaluation metrics were used to produce classification reports, which indicate which of the models works best for this specific problem. The models were programmed in Python.
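The evaluation procedure described above (an 80/20 split plus per-class classification reports) can be sketched as follows. This is a hedged illustration with a synthetic dataset and a stand-in classifier, not the paper's code or data.

    # Minimal sketch of the 80/20 evaluation setup; the synthetic features
    # stand in for smart meter & weather inputs and dwelling-class labels.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, classification_report

    X, y = make_classification(n_samples=1000, n_features=20, n_classes=3,
                               n_informative=5, random_state=42)

    # 80% training / 20% testing, as in the paper
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20,
                                                        random_state=42)

    model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
    y_pred = model.predict(X_test)

    print("accuracy:", accuracy_score(y_test, y_pred))
    print(classification_report(y_test, y_pred))  # per-class precision/recall/F1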
DOCUMENT
Social networks and news outlets use recommender systems to distribute information and suggest news to their users. These algorithms are an attractive solution for dealing with the massive amount of content on the web [6]. However, some organisations prioritise retention and the maximisation of access numbers, which can be incompatible with values like diversity of content and transparency. In recent years, critics have warned of the dangers of algorithmic curation. The term filter bubble, coined by the internet activist Eli Pariser [1], describes the outcome of pre-selected personalisation, where users are trapped in a bubble of similar contents. Pariser warns that it is not the user but the algorithm that curates and selects interesting topics to watch or read. Still, there is disagreement about the consequences for individuals and society, and research on the existence of filter bubbles is inconclusive. Fletcher [5] claims that the term filter bubble is an oversimplification of a much more complex system involving cognitive processes and social and technological interactions, and most empirical studies indicate that algorithmic recommendations have not locked large segments of the audience into bubbles [3][6]. We built an agent-based simulation tool to study the dynamic and complex interplay between individual choices and social and technological interaction. The model includes different recommendation algorithms and a range of cognitive filters that can simulate different social network dynamics. The cognitive filters are based on the triple-filter-bubble model [2]. The tool can be used to understand under which circumstances algorithmic filtering and social network dynamics affect users' innate opinions, and which interventions on recommender systems can mitigate adverse side effects like the presence of filter bubbles. The resulting tool is an open-source interactive web interface that allows simulations with different parameters, such as users' characteristics, social networks and recommender system settings (see Fig. 1). The ABM model, implemented in Python Mesa [4], allows users to visualise, compare and analyse the consequences of combining various factors. Experimental results are similar to the ones published in the Triple Filter Bubble paper [2]. The novelty is the option to use a real collaborative-filtering recommendation system and a new metric to measure the distance between users' innate and final opinions. We observed that slight modifications to the recommendation system, exposing items within the boundaries of users' latitude of acceptance, could increase content diversity.
References
1. Pariser, E.: The Filter Bubble: What the Internet Is Hiding from You. Penguin, New York, NY (2011)
2. Geschke, D., Lorenz, J., Holtz, P.: The triple-filter bubble: Using agent-based modelling to test a meta-theoretical framework for the emergence of filter bubbles and echo chambers. British Journal of Social Psychology 58, 129–149 (2019)
3. Möller, J., Trilling, D., Helberger, N., van Es, B.: Do not blame it on the algorithm: An empirical assessment of multiple recommender systems and their impact on content diversity. Information, Communication and Society 21(7), 959–977 (2018)
4. Mesa: Agent-based modeling in Python, https://mesa.readthedocs.io/. Last accessed 2 Sep 2022
5. Fletcher, R.: The truth behind filter bubbles: Bursting some myths. Digital News Report, Reuters Institute (2020). https://reutersinstitute.politics.ox.ac.uk/news/truth-behind-filter-bubbles-bursting-some-myths. Last accessed 2 Sep 2022
6. Haim, M., Graefe, A., Brosius, H.: Burst of the filter bubble? Effects of personalization on the diversity of Google News. Digital Journalism 6(3), 330–343 (2018)
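To illustrate the opinion-shift metric described in the abstract above, here is a minimal sketch using the classic Mesa scheduler API (pre-3.0). The agent behaviour, parameter values and the random recommender are illustrative assumptions, not the authors' model.

    # Minimal Mesa sketch: agents hold an innate opinion, are exposed to items
    # within their latitude of acceptance, and drift toward them; the metric is
    # the mean distance between innate and final opinions. All values illustrative.
    import random
    from mesa import Agent, Model
    from mesa.time import RandomActivation

    class UserAgent(Agent):
        def __init__(self, unique_id, model, latitude=0.3, rate=0.1):
            super().__init__(unique_id, model)
            self.innate = random.uniform(-1, 1)   # innate opinion
            self.opinion = self.innate            # current (final) opinion
            self.latitude = latitude              # latitude of acceptance
            self.rate = rate                      # attitude-change rate

        def step(self):
            item = self.model.recommend(self)
            if item is not None:
                self.opinion += self.rate * (item - self.opinion)

    class FilterBubbleModel(Model):
        def __init__(self, n_users=100):
            super().__init__()
            self.schedule = RandomActivation(self)
            for i in range(n_users):
                self.schedule.add(UserAgent(i, self))

        def recommend(self, agent):
            # stand-in recommender: only serve items the agent would accept
            item = random.uniform(-1, 1)
            return item if abs(item - agent.opinion) <= agent.latitude else None

        def opinion_shift(self):
            # metric: mean distance between innate and final opinions
            agents = self.schedule.agents
            return sum(abs(a.opinion - a.innate) for a in agents) / len(agents)

    model = FilterBubbleModel()
    for _ in range(50):
        model.schedule.step()
    print(model.opinion_shift())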
MULTIFILE
This paper introduces an automatic score detection model using object detection techniques. The performance of seven models belonging to two different architectural setups was compared. YOLOv8n, YOLOv8s, YOLOv8m, RetinaNet-50, and RetinaNet-101 are single-shot detectors, while Faster RCNN-50 and Faster RCNN-101 belong to the two-shot detector category. The dataset was manually captured from the shooting range and expanded by generating more versatile data using Python code. Before training, the images were resized (640x640) and augmented using the Roboflow API. The trained models were then assessed on the test dataset and their performance compared using metrics such as mAP50, mAP50-95, precision, and recall. The results showed that the YOLOv8 models can detect multiple objects with good confidence scores.
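A hedged sketch of how such a single-shot model might be trained and evaluated with the Ultralytics YOLOv8 API is shown below; the dataset YAML path, epoch count and model variant are assumptions for illustration, not values from the paper.

    # Illustrative YOLOv8 training/evaluation loop (Ultralytics API);
    # "scores.yaml" and epochs=100 are placeholder assumptions.
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")                              # pretrained nano model
    model.train(data="scores.yaml", imgsz=640, epochs=100)  # 640x640 inputs, as above
    metrics = model.val()                                   # evaluate on the val/test split
    print(metrics.box.map50, metrics.box.map)               # mAP50 and mAP50-95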
DOCUMENT
PV systems are increasingly used, but it is not always possible to install them in the optimal orientation for maximum annual energy output. At the Johan Cruijff ArenA the PV panels are placed all around the roof, facing all possible directions. Panels oriented to the north will have a lower energy yield than those oriented to the south. The 42 panel groups are connected to 8 electricity meters, for which monthly kWh production figures are available. The first assignment is to calculate the energy yields of the 42 panel groups and to match these correctly with the 8 energy meter readings, so that the simulated data agrees with the measured data. For the year 2017, main electricity meter readings are also available for every quarter of an hour. A problem with these readings is that only absolute values are given: when electricity is taken from the grid the reading is positive, but when there is a surplus of solar energy and electricity is delivered to the grid, the reading is also positive. To see the effect of future energy measures on the electricity demand, and to use the Seev4-City detailed CO2-savings calculation with the electricity mix of the grid, it is necessary to know the real electricity demand of the building. The second assignment is to use the calculations of the first assignment to separate the 15-minute electricity meter readings into real building demand and PV production. This document first gives information for teachers (learning goals, possible activities, time needed, further reading), followed by the assignment for students.
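The second assignment can be sketched as follows. This is a minimal pandas illustration, not the Seev4-City method: it assumes a simulated PV series (kWh per 15 minutes) from the first assignment, and uses a simple, hypothetical heuristic to infer the sign of the net exchange, since the meter reports only |net| and net = demand - PV.

    # Illustrative split of absolute 15-minute meter readings into building
    # demand and PV production; the sign heuristic is an assumption.
    import pandas as pd

    def split_meter(abs_net: pd.Series, pv_sim: pd.Series) -> pd.DataFrame:
        # Export (delivery to the grid) is only physically possible when PV
        # production covers demand plus the exported energy, so as a crude
        # heuristic assume export whenever simulated PV exceeds the reading.
        sign = pd.Series(1.0, index=abs_net.index)
        sign[pv_sim > abs_net] = -1.0
        net = sign * abs_net                    # + = from grid, - = to grid
        demand = (net + pv_sim).clip(lower=0)   # demand = net exchange + PV output
        return pd.DataFrame({"net_kwh": net, "pv_kwh": pv_sim, "demand_kwh": demand})

    idx = pd.date_range("2017-06-01", periods=4, freq="15min")
    abs_net = pd.Series([1.2, 0.3, 0.5, 0.8], index=idx)  # |kWh| per 15 min
    pv_sim = pd.Series([0.0, 0.9, 0.2, 0.0], index=idx)   # simulated PV output
    print(split_meter(abs_net, pv_sim))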
DOCUMENT
Masonry structures represent the highest proportion of building stock worldwide. Currently, the structural condition of such structures is predominantly inspected manually, which is a laborious, costly and subjective process. With developments in computer vision, there is an opportunity to use digital images to automate the visual inspection process. The aim of this study is to examine deep learning techniques for crack detection on images from masonry walls. A dataset of photos from masonry structures is produced, containing complex backgrounds and various crack types and sizes. Different deep learning networks are considered, and by leveraging transfer learning, crack detection on masonry surfaces is performed at patch level with 95.3% accuracy and at pixel level with a 79.6% F1 score. This is the first implementation of deep learning for pixel-level crack segmentation on masonry surfaces. Codes, data and networks relevant to this study are available at: github.com/dimitrisdais/crack_detection_CNN_masonry.
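The patch-level classification described above can be sketched as a transfer-learning setup like the following; the MobileNetV2 backbone, input size and training call are illustrative stand-ins (the study compares several networks; see the linked repository for the actual code).

    # Minimal transfer-learning sketch for binary patch classification
    # (crack vs. no crack); architecture and sizes are assumptions.
    import tensorflow as tf

    base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                             input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False  # reuse ImageNet features, train only the new head
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(1, activation="sigmoid"),  # crack probability per patch
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(train_patches, validation_data=val_patches, epochs=10)  # hypothetical datasets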
DOCUMENT
Many different sensors are available for collecting data, and there are many different ways of working with them. To establish a standardised working method, a group of fourth-year AGIS students of the HAS green academy, as part of the SURF project SMART sensordata infrastructuur, set to work on the process of collecting data with sensors. This resulted in a working method that works for everyone, everywhere. This manual explains the working method step by step.
DOCUMENT
Report on the pilot SMART Sensordata Infrastructuur (SSI). This pilot was carried out by lecturers and students of the AGIS programme of the HAS green academy from June through December 2022, in collaboration with, and with financial support from, the SURF DCC for practice-oriented research. This report contains the following deliverables:
1. Design and practical description of a generally applicable data-driven workflow for sensor data
2. Design and practical description of a metadata model for sensor data, focused on data definition and data quality
DOCUMENT