This article discusses Deep Mapping in Geography teaching and learning by drawing on a case study of a summer school organised during the COVID-19 pandemic. Deep Mapping was used to foster deep learning among the students and to teach them about a distant place and people. The exercise tasked the students with creating layered maps representing the fieldwork site, the city of Vancouver, Canada. Critical student reflections on the Deep Mapping process are used to address some of its benefits and challenges. The Deep Mapping exercise stimulated the students to engage critically with the diverse summer school materials, move beyond a superficial view of the city, maps and mapping, and reflect on their positionality. The method is promising for making deep engagement with other places more accessible to those who might not otherwise have access to, or be inclined towards, such international educational experiences, and it also offers another opportunity for blended learning. In conclusion, we argue that Deep Mapping offers a timely and highly engaging approach to learning about a place and people from another part of the world – be it on location or at a distance.
DOCUMENT
Background: Profiling plant root architecture is vital for selecting resilient crops that can efficiently take up water and nutrients. The high-performance imaging tools available for studying root-growth dynamics at optimal resolution are costly and stationary. In addition, performing nondestructive high-throughput phenotyping to extract the structural and morphological features of roots remains challenging. Results: We developed the MultipleXLab: a modular, mobile, and cost-effective setup to tackle these limitations. The system can continuously monitor thousands of seeds from germination to root development using a conventional camera attached to a motorized multi-axis rotational stage and a custom-built 3D-printed plate holder with integrated light-emitting-diode lighting. We also developed a deep-learning-based image segmentation model that allows users to analyze the data automatically. We tested the MultipleXLab by monitoring seed germination and root growth of Arabidopsis developmental, cell cycle, and auxin transport mutants non-invasively at high throughput, and showed that the system provides robust data and allows precise evaluation of the germination index and hourly growth rate across mutants. Conclusion: MultipleXLab provides a flexible and user-friendly root phenotyping platform that is an attractive mobile alternative to high-end imaging platforms and stationary growth chambers. It can be used in numerous applications by plant biologists, the seed industry, crop scientists, and breeding companies.
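The two metrics reported above can be derived from the per-seed time series that a segmentation model produces. The sketch below is illustrative only (it is not the MultipleXLab code), and the germination-index formulation shown is one common assumption, since exact definitions vary between studies.

```python
import numpy as np

# Minimal sketch (not the MultipleXLab code): derive the two reported metrics
# from a per-seed time series produced by an image segmentation model.
# Assumed inputs: root length per hourly frame, and the hour at which each
# seed was first scored as germinated.

def hourly_growth_rate(root_length_mm: np.ndarray) -> np.ndarray:
    """Growth rate (mm/h) between consecutive hourly frames."""
    return np.diff(root_length_mm)

def germination_index(germination_hours: np.ndarray, n_seeds_total: int,
                      observation_hours: int) -> float:
    """One common germination-index formulation (an assumption here): seeds
    that germinate earlier contribute more, normalised to [0, 1]."""
    weights = (observation_hours - germination_hours) / observation_hours
    return float(np.sum(weights)) / n_seeds_total

# Example: 5 of 8 seeds germinated within a 72 h observation window.
lengths = np.array([0.0, 0.1, 0.4, 0.9, 1.6])            # mm per hourly frame
print(hourly_growth_rate(lengths))                        # mm/h
print(germination_index(np.array([24, 30, 36, 48, 60]), 8, 72))
```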
LINK
An illustrative non-technical review of our recent journal paper "Automatic crack classification and segmentation on masonry surfaces using convolutional neural networks and transfer learning" was published on Towards Data Science. While new technologies have changed almost every aspect of our lives, the construction field seems to be struggling to catch up. Currently, the structural condition of a building is still predominantly inspected manually. In simple terms, even today, when a structure needs to be inspected for damage, an engineer manually checks all the surfaces and takes a large number of photos while keeping notes on the position of any cracks. A few more hours then need to be spent at the office sorting all the photos and notes and trying to turn them into a meaningful report. Clearly, this is a laborious, costly, and subjective process. On top of that, safety concerns arise because parts of structures have restricted access and are difficult to reach. To give an example, the Golden Gate Bridge needs to be inspected periodically. Until very recently, this meant specially trained people climbing across this picturesque structure to check every inch of it.
LINK
Industrial robot manipulators are widely used for repetitive applications that require high precision, like pick-and-place. In many cases, the movements of industrial robot manipulators are hard-coded or manually defined, and need to be adjusted if the objects being manipulated change position. To increase flexibility, an industrial robot should be able to adjust its configuration in order to grasp objects in variable or unknown positions. This can be achieved with off-the-shelf vision-based solutions, but most require prior knowledge about each object to be manipulated. To address this issue, this work presents a ROS-based deep reinforcement learning solution to robotic grasping for a Collaborative Robot (Cobot) using a depth camera. The solution uses deep Q-learning to process the color and depth images and generate a greedy policy used to define the robot action. The Q-values are estimated using a Convolutional Neural Network (CNN) based on pre-trained models for feature extraction. Experiments were carried out in a simulated environment to compare the performance of four different pre-trained CNN models (ResNeXt, MobileNet, MNASNet and DenseNet). Results show that the best performance in our application was reached by MobileNet, with an average of 84% accuracy after training in the simulated environment.
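To make the pipeline concrete, the sketch below shows one way a Q-network can reuse a pre-trained MobileNetV2 backbone for colour-image features alongside a small convolutional branch for the depth image, with a greedy action selection at the end. This is a hedged illustration, not the authors' implementation: the action discretisation, layer sizes, and class name GraspQNetwork are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative Q-network sketch: pre-trained MobileNetV2 features for RGB,
# a small conv branch for depth, and a head scoring discrete grasp actions.
class GraspQNetwork(nn.Module):
    def __init__(self, n_actions: int = 16):
        super().__init__()
        backbone = models.mobilenet_v2(weights="DEFAULT")
        self.rgb_features = backbone.features            # frozen feature extractor
        for p in self.rgb_features.parameters():
            p.requires_grad = False
        self.depth_features = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Linear(1280 + 64, 256), nn.ReLU(),
            nn.Linear(256, n_actions),                    # one Q-value per grasp action
        )

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        f_rgb = self.rgb_features(rgb).mean(dim=(2, 3))   # global average pooling
        f_depth = self.depth_features(depth).flatten(1)
        return self.head(torch.cat([f_rgb, f_depth], dim=1))

# Greedy policy: pick the action with the highest estimated Q-value.
net = GraspQNetwork()
q_values = net(torch.rand(1, 3, 224, 224), torch.rand(1, 1, 224, 224))
action = q_values.argmax(dim=1)
```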
DOCUMENT
Cone beam CT scanners use much less radiation than conventional CT scanners. However, compared to conventional CT scans, the images are noisy and show several artifacts. The UNet convolutional neural network may provide a way to reconstruct a CT-quality image from cone beam scans.
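As a rough illustration of the U-Net idea mentioned above, the sketch below shows a tiny encoder-decoder with a skip connection that maps a noisy cone-beam slice to a cleaner slice. It is not a validated CBCT model; depth, channel counts, and the training setup described in the comments are assumptions.

```python
import torch
import torch.nn as nn

# Minimal U-Net-style sketch (illustrative only): noisy cone-beam slice in,
# denoised CT-like slice out.
def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = conv_block(1, 32)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = conv_block(64, 32)            # 64 = upsampled + skip channels
        self.out = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.down(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))   # skip connection
        return self.out(d)

# Could be trained with an L1/L2 loss against paired conventional CT slices.
model = TinyUNet()
denoised = model(torch.rand(1, 1, 256, 256))
```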
MULTIFILE
Masonry structures represent the highest proportion of building stock worldwide. Currently, the structural condition of such structures is predominantly inspected manually, which is a laborious, costly and subjective process. With developments in computer vision, there is an opportunity to use digital images to automate the visual inspection process. The aim of this study is to examine deep learning techniques for crack detection on images of masonry walls. A dataset of photos from masonry structures is produced, containing complex backgrounds and various crack types and sizes. Different deep learning networks are considered and, by leveraging transfer learning, crack detection on masonry surfaces is performed at patch level with 95.3% accuracy and at pixel level with a 79.6% F1 score. This is the first implementation of deep learning for pixel-level crack segmentation on masonry surfaces. Codes, data and networks relevant to this study are available at: github.com/dimitrisdais/crack_detection_CNN_masonry.
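The patch-level transfer-learning idea can be sketched in a few lines: a pre-trained backbone is frozen and only a new crack/no-crack head is trained. This is a generic illustration, not the code from the linked repository; the backbone choice (ResNet-18), patch size, and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer-learning sketch for patch-level crack classification (illustrative;
# the authors' actual code and networks are in the linked repository).
backbone = models.resnet18(weights="DEFAULT")
for p in backbone.parameters():
    p.requires_grad = False                              # freeze ImageNet features
backbone.fc = nn.Linear(backbone.fc.in_features, 2)      # crack / no-crack head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a batch of 224x224 masonry patches.
patches = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = criterion(backbone(patches), labels)
loss.backward()
optimizer.step()
```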
DOCUMENT
Background: Manual muscle mass assessment based on Computed Tomography (CT) scans is recognized as a good marker for malnutrition, sarcopenia, and adverse outcomes. However, manual muscle mass analysis is cumbersome and time-consuming. An accurate, fully automated method is needed. In this study, we evaluate whether manual psoas annotation can be substituted by a fully automatic deep learning-based method. Methods: This study included a cohort of 583 patients with severe aortic valve stenosis planned to undergo Transcatheter Aortic Valve Replacement (TAVR). Psoas muscle area was annotated manually on the CT scan at the level of lumbar vertebra 3 (L3). The deep learning-based method mimics this approach by first determining the L3 level and subsequently segmenting the psoas at that level. The fully automatic approach, as well as the segmentation and slice-selection steps, was evaluated using average bias, 95% limits of agreement, the Intraclass Correlation Coefficient (ICC) and the within-subject Coefficient of Variation (CV). Slice-selection performance was evaluated by visual inspection. Segmentation was evaluated by computing the Dice index between the manual and automatic segmentations (0 = no overlap, 1 = perfect overlap). Results: Included patients had a mean age of 81 ± 6 years and 45% were female. The fully automatic method showed a bias and limits of agreement of -0.69 [-6.60 to 5.23] cm2, an ICC of 0.78 [95% CI: 0.74-0.82] and a within-subject CV of 11.2% [95% CI: 10.2-12.2]. For slice selection, 84% of the selections were on the same vertebra for both methods; bias and limits of agreement were 3.4 [-24.5 to 31.4] mm. The Dice index for segmentation was 0.93 ± 0.04; bias and limits of agreement were -0.55 [1.71-2.80] cm2. Conclusion: Fully automatic assessment of psoas muscle area demonstrates accurate performance at the L3 level in CT images. It is a reliable tool that offers great opportunities for analysis in large-scale studies and in clinical applications.
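The Dice index used above has a simple definition: twice the overlapping area divided by the sum of both mask areas. A minimal sketch with toy binary masks follows; it is not the study code.

```python
import numpy as np

# Dice index between a manual and an automatic binary segmentation mask
# (0 = no overlap, 1 = perfect overlap).
def dice_index(manual: np.ndarray, automatic: np.ndarray) -> float:
    manual = manual.astype(bool)
    automatic = automatic.astype(bool)
    intersection = np.logical_and(manual, automatic).sum()
    denominator = manual.sum() + automatic.sum()
    return 2.0 * intersection / denominator if denominator else 1.0

# Example with two toy 2D masks standing in for psoas segmentations.
a = np.zeros((10, 10)); a[2:7, 3:8] = 1
b = np.zeros((10, 10)); b[3:8, 3:8] = 1
print(round(dice_index(a, b), 3))
```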
DOCUMENT
Background & aims: Accurate diagnosis of sarcopenia requires evaluation of muscle quality, which refers to the amount of fat infiltration in muscle tissue. In this study, we aim to investigate whether we can independently predict mortality risk in transcatheter aortic valve implantation (TAVI) patients, using automatic deep learning algorithms to assess muscle quality on procedural computed tomography (CT) scans. Methods: This study included 1199 patients with severe aortic stenosis who underwent transcatheter aortic valve implantation (TAVI) between January 2010 and January 2020. A procedural CT scan was performed as part of the preprocedural-TAVI evaluation, and the scans were analyzed using deep-learning-based software to automatically determine skeletal muscle density (SMD) and intermuscular adipose tissue (IMAT). The association of SMD and IMAT with all-cause mortality was analyzed using a Cox regression model, adjusted for other known mortality predictors, including muscle mass. Results: The mean age of the participants was 80 ± 7 years, 53% were female. The median observation time was 1084 days, and the overall mortality rate was 39%. We found that the lowest tertile of muscle quality, as determined by SMD, was associated with an increased risk of mortality (HR 1.40 [95%CI: 1.15–1.70], p < 0.01). Similarly, low muscle quality as defined by high IMAT in the lowest tertile was also associated with increased mortality risk (HR 1.24 [95%CI: 1.01–1.52], p = 0.04). Conclusions: Our findings suggest that deep learning-assessed low muscle quality, as indicated by fat infiltration in muscle tissue, is a practical, useful and independent predictor of mortality after TAVI.
DOCUMENT
This video offers a concise exploration of the distinctions between Data Science, AI, Machine Learning, and Deep Learning. Starting with the foundational role of Data Science, it navigates through the various machine learning categories and touches upon the capabilities and constraints of Deep Learning. The discussion culminates in understanding the nuances of AI, differentiating between narrow and general AI. Through insightful examples, viewers are guided on selecting the right technique for specific projects, ensuring both clarity and cost-effectiveness in the realm of data science.
VIDEO
Estimating the remaining useful life (RUL) of an asset lies at the heart of prognostics and health management (PHM) in many operations-critical industries such as aviation. Modern methods of RUL estimation adopt techniques from deep learning (DL). However, most of these contemporary techniques deliver only single-point estimates of the RUL without reporting the confidence of the prediction. This practice usually produces overly confident predictions, which can have severe consequences for operations or even safety. To address this issue, we propose a technique for uncertainty quantification (UQ) based on Bayesian deep learning (BDL). The hyperparameters of the framework are tuned using a novel bi-objective Bayesian optimization method whose objectives are predictive performance and predictive uncertainty. The method also integrates the data pre-processing steps into the hyperparameter optimization (HPO) stage, models the RUL as a Weibull distribution, and returns the survival curves of the monitored assets to allow informed decision-making. We validate this method on the widely used C-MAPSS dataset against a single-objective HPO baseline that aggregates the two objectives through the harmonic mean (HM). We demonstrate the existence of trade-offs between predictive performance and predictive uncertainty and observe that the bi-objective HPO returns a larger number of hyperparameter configurations compared to the single-objective baseline. Furthermore, we see that with the proposed approach it is possible to configure models for RUL estimation that exhibit better or comparable performance to the single-objective baseline when validated on the test sets.
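Two of the ideas above lend themselves to a short illustration: the Weibull survival curve returned for a monitored asset, and the harmonic-mean aggregation used by the single-objective baseline. The sketch below is not the authors' code; the parameter values and the positive orientation of the two objective scores are assumptions.

```python
import numpy as np

# Survival curve of a Weibull-distributed RUL: S(t) = exp(-(t / scale)^shape),
# the probability that the asset survives past time t.
def weibull_survival(t: np.ndarray, shape: float, scale: float) -> np.ndarray:
    return np.exp(-np.power(t / scale, shape))

# Harmonic-mean aggregation of two (positively oriented) objective scores,
# as used by the single-objective HPO baseline.
def harmonic_mean(score_performance: float, score_uncertainty: float) -> float:
    return 2.0 / (1.0 / score_performance + 1.0 / score_uncertainty)

cycles = np.arange(0, 200, 10)
print(weibull_survival(cycles, shape=1.8, scale=120.0))   # survival curve
print(harmonic_mean(0.82, 0.64))                          # scalarised objective
```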
DOCUMENT