Background: Muscle mass assessed on Computed Tomography (CT) scans is recognized as a good marker for malnutrition, sarcopenia, and adverse outcomes. However, manual muscle mass analysis is cumbersome and time-consuming, so an accurate, fully automated method is needed. In this study, we evaluate whether manual psoas annotation can be substituted by a fully automatic deep learning-based method.

Methods: This study included a cohort of 583 patients with severe aortic valve stenosis planned to undergo Transcatheter Aortic Valve Replacement (TAVR). Psoas muscle area was annotated manually on the CT scan at the level of lumbar vertebra 3 (L3). The deep learning-based method mimics this approach by first determining the L3 level and subsequently segmenting the psoas muscle at that level. The fully automatic approach, as well as the slice selection and segmentation steps separately, was evaluated using the average bias, 95% limits of agreement, the Intraclass Correlation Coefficient (ICC), and the within-subject Coefficient of Variation (CV). Slice selection performance was additionally evaluated by visual inspection, and segmentation performance by the Dice index between the manual and automatic segmentations (0 = no overlap, 1 = perfect overlap).

Results: Included patients had a mean age of 81 ± 6 years and 45% were female. The fully automatic method showed a bias and limits of agreement of -0.69 [-6.60 to 5.23] cm2, an ICC of 0.78 [95% CI: 0.74-0.82], and a within-subject CV of 11.2% [95% CI: 10.2-12.2]. For slice selection, 84% of the selections were on the same vertebra for both methods; the bias and limits of agreement were 3.4 [-24.5 to 31.4] mm. The Dice index for segmentation was 0.93 ± 0.04; the bias and limits of agreement were -0.55 [1.71-2.80] cm2.

Conclusion: Fully automatic assessment of psoas muscle area at the L3 level in CT images demonstrates accurate performance. It is a reliable tool that offers great opportunities for analysis in large-scale studies and in clinical applications.
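The two evaluation metrics central to the abstract above — the Dice index between two segmentation masks and the Bland-Altman bias with 95% limits of agreement — can be sketched as follows. This is a minimal illustration of the standard formulas, not the study's actual analysis code; the toy masks and values are invented for demonstration.

```python
import numpy as np

def dice_index(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity between two binary masks (0 = no overlap, 1 = perfect overlap)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    # By convention, two empty masks are treated as perfectly overlapping.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def bland_altman(manual: np.ndarray, automatic: np.ndarray):
    """Bias (mean difference) and 95% limits of agreement for paired measurements."""
    diff = automatic - manual
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Toy example: two overlapping rectangular masks on a 10x10 grid.
m1 = np.zeros((10, 10), dtype=bool); m1[2:8, 2:8] = True
m2 = np.zeros((10, 10), dtype=bool); m2[3:8, 2:8] = True
print(round(dice_index(m1, m2), 3))  # → 0.909
```

The 1.96 factor assumes normally distributed differences, so roughly 95% of paired differences fall within the limits of agreement.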
We examined the effects of age on automatic and voluntary motor adjustments in pointing tasks. To this end, young (20–25 years) and middle-aged adults (48–62 years) were instructed to point at a target that could unexpectedly change its location (to the left or right) or its color (to green or red) during the movement. In the location change conditions, participants were asked to adjust their pointing movement either toward the new location (i.e., normal pointing) or in the opposite direction (i.e., anti-pointing). In the color change conditions, participants were instructed to adjust their movement to the left or right depending on the change in color. The results showed that in a large proportion of the anti-pointing trials, participants made two adjustments: an early, automatic initial adjustment in the direction of the target shift, followed by a late voluntary adjustment in the opposite direction. The late voluntary adjustments were delayed in the middle-aged participants relative to the young participants. There were no age differences in the fast automatic adjustment in normal pointing, but the early adjustment in anti-pointing tended to occur later in the middle-aged adults. Finally, the difference between the onsets of the early and late adjustments in anti-pointing was greater among the middle-aged adults. Hence, this study is the first to show that aging slows down voluntary goal-directed movement control processes to a greater extent than automatic stimulus-driven processes.
With the proliferation of misinformation on the web, automatic misinformation detection methods are becoming an increasingly important subject of study. Large language models have produced the best results among content-based methods, which rely on the text of the article rather than on metadata or network features. However, fine-tuning such a model requires significant training data, which has led to the automatic creation of large-scale misinformation detection datasets. In these datasets, articles are not labelled directly. Rather, each news site is labelled for reliability by an established fact-checking organisation, and every article is subsequently assigned the corresponding label based on the reliability score of the news source in question. A recent paper has explored the biases present in one such dataset, NELA-GT-2018, and shown that models are at least partly learning the stylistic and other features of different news sources rather than the features of unreliable news. We confirm part of their findings. Apart from studying the characteristics and potential biases of the datasets, we also find it important to examine how the model architecture influences the results. We therefore explore which text features, or combinations of features, are learned by models based on contextual word embeddings as opposed to basic bag-of-words models. To elucidate this, we perform extensive error analysis aided by the SHAP post-hoc explanation technique on a debiased portion of the dataset. We validate the explanation technique on our inherently interpretable baseline model.
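The source-level ("distant") labelling scheme described above — where every article inherits the reliability label of its news site rather than being labelled individually — can be sketched as follows. The source names and reliability table are hypothetical, purely for illustration; real datasets such as NELA-GT-2018 use scores from established fact-checking organisations.

```python
# Assumed source-level reliability labels (invented examples, not real sites).
RELIABILITY = {
    "trusted-news.example": "reliable",
    "rumor-mill.example": "unreliable",
}

def label_articles(articles):
    """Assign each article the label of its source; articles from unrated sources are dropped."""
    labelled = []
    for art in articles:
        label = RELIABILITY.get(art["source"])
        if label is not None:
            labelled.append({**art, "label": label})
    return labelled

corpus = [
    {"source": "trusted-news.example", "text": "..."},
    {"source": "rumor-mill.example", "text": "..."},
    {"source": "unrated.example", "text": "..."},
]
print([a["label"] for a in label_articles(corpus)])  # → ['reliable', 'unreliable']
```

Because every article from a given site receives the same label regardless of its content, a classifier can score well simply by recognising the site's writing style — which is exactly the bias the abstract discusses.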
Prompt and timely response to incoming cyber-attacks and incidents is a core requirement for business continuity and safe operations for organizations operating at all levels (commercial, governmental, military). The effectiveness of these measures is significantly limited (and oftentimes defeated altogether) by the inefficiency of the attack identification and response process, which is, effectively, a show-stopper for all attack prevention and reaction activities. The cognition-intensive, human-driven alarm analysis procedures currently employed by Security Operations Centres are made ineffective (as opposed to merely inefficient) by the sheer amount of alarm data produced and the lack of mechanisms to automatically and soundly evaluate the arriving evidence to build operable, risk-based metrics for incident response. This project will build foundational technologies to achieve Security Response Centres (SRC) based on three key components: (1) risk-based systems for alarm prioritization, (2) real-time, human-centric procedures for alarm operationalization, and (3) technology integration in response operations. In doing so, SeReNity will develop new techniques, methods, and systems at the intersection of the Design and Defence domains to deliver operable and accurate procedures for efficient incident response. To achieve this, the project will develop semantically and contextually rich alarm data to inform risk-based metrics on the mounting evidence of incoming cyber-attacks (as opposed to firing an alarm for each match of an IDS signature). SeReNity will achieve this by means of advanced techniques from machine learning and information mining and extraction, to identify attack patterns in network traffic and automatically identify threat types. Importantly, SeReNity will develop new mechanisms and interfaces to present the gathered evidence to SRC operators dynamically, based on the specific threat (type) identified by the underlying technology.
To achieve this, this project unifies Dutch excellence in intrusion detection, threat intelligence, and human-computer interaction with an industry-leading partner operating in the market of tailored solutions for Security Monitoring.