Background: Diagnosing sarcopenia early is essential for timely treatment in older adults, and requires an assessment of appendicular lean mass (ALM). Multi-frequency bio-electrical impedance analysis (MF-BIA) may be a valid tool for assessing ALM in older adults, but evidence is limited. We therefore validated MF-BIA for diagnosing low ALM in older adults.

Methods: ALM was assessed with a standing-posture, 8-electrode MF-BIA device (Tanita MC-780) in 202 community-dwelling older adults (age ≥ 55 years) and compared with dual-energy X-ray absorptiometry (DXA; Hologic Inc., Marlborough, MA, United States). The validity of the absolute ALM values was evaluated by: (1) bias (mean difference), (2) percentage of accurate predictions (within 5% of the DXA value), (3) mean absolute error (MAE), and (4) limits of agreement (Bland-Altman analysis). The lowest quintile of ALM by DXA was used as a proxy for low ALM (< 22.8 kg for men, < 16.1 kg for women), and the sensitivity and specificity of diagnosing low ALM by BIA were assessed.

Results: The mean age of the subjects was 72.1 ± 6.4 years, mean BMI was 25.4 ± 3.6 kg/m2, and 71% were women. BIA slightly underestimated ALM compared to DXA, with a mean bias of -0.6 ± 1.2 kg. The percentage of accurate predictions was 54%, the MAE was 1.1 kg, and the limits of agreement were -3.0 to +1.8 kg. The sensitivity was 80%, indicating that 80% of subjects diagnosed with low ALM by DXA were also diagnosed with low ALM by BIA. The specificity was 90%, indicating that 90% of subjects with normal ALM by DXA were also classified as normal by BIA.

Conclusion: MF-BIA showed poor validity for assessing absolute ALM values, but reasonable sensitivity and specificity for identifying the community-dwelling older adults with the lowest muscle mass.
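For concreteness, the sketch below shows how the agreement and diagnostic statistics reported above (bias, percentage of accurate predictions, MAE, Bland-Altman limits of agreement, sensitivity, and specificity) can be computed from paired ALM measurements. It is illustrative only: the arrays `alm_dxa` and `alm_bia` are synthetic stand-ins for the study's per-subject DXA and MF-BIA values, and a single pooled quintile cut-off replaces the study's sex-specific cut-offs.

```python
# Illustrative sketch, not the study's code: agreement statistics for
# paired ALM (kg) measurements from DXA (reference) and MF-BIA.
import numpy as np

rng = np.random.default_rng(0)
alm_dxa = rng.normal(20.0, 4.0, size=202)            # synthetic DXA values
alm_bia = alm_dxa - 0.6 + rng.normal(0, 1.1, 202)    # BIA = DXA + bias + noise

diff = alm_bia - alm_dxa
bias = diff.mean()                                   # mean difference (kg)
mae = np.abs(diff).mean()                            # mean absolute error (kg)
accurate = np.mean(np.abs(diff) <= 0.05 * alm_dxa)   # within 5% of DXA
loa = (bias - 1.96 * diff.std(ddof=1),               # Bland-Altman limits
       bias + 1.96 * diff.std(ddof=1))               # of agreement

# Diagnostic agreement for "low ALM" (lowest DXA quintile as cut-off;
# one pooled threshold here for brevity, the study used sex-specific ones).
cutoff = np.quantile(alm_dxa, 0.20)
low_dxa, low_bia = alm_dxa < cutoff, alm_bia < cutoff
sensitivity = (low_bia & low_dxa).sum() / low_dxa.sum()
specificity = (~low_bia & ~low_dxa).sum() / (~low_dxa).sum()

print(f"bias={bias:.2f} kg, MAE={mae:.2f} kg, accurate={accurate:.0%}")
print(f"LoA=({loa[0]:.1f}, {loa[1]:.1f}) kg, "
      f"sens={sensitivity:.0%}, spec={specificity:.0%}")
```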
Objective: To compare estimates of effect and variability from standard linear regression and hierarchical multilevel analysis with those from cross-classified multilevel analysis under various scenarios.

Study design and setting: We performed a simulation study based on a data structure from an observational study in clinical mental health care. Using a Markov chain Monte Carlo approach, we simulated 18 scenarios, varying sample size, cluster size, effect size, and between-group variance. For each scenario, we performed standard linear regression, multilevel regression with a random intercept at the patient level, multilevel regression with a random intercept at the nursing-team level, and cross-classified multilevel analysis.

Results: Applying cross-classified multilevel analysis had negligible influence on the effect estimates. However, ignoring the cross-classification led to underestimated standard errors for the covariates at the two cross-classified levels and to invalidly narrow confidence intervals, which may lead to incorrect statistical inference. Varying the sample size, cluster size, effect size, and variance had no meaningful influence on these findings.

Conclusion: For cross-classified data structures, a cross-classified multilevel model yields valid estimates of the precision of effects and thereby supports correct inference.
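To make the core contrast concrete, here is a minimal sketch (not the authors' simulation code) of the comparison described above: data are generated with crossed patient and nursing-team random intercepts, then analyzed once with standard linear regression and once with a cross-classified multilevel model. It uses Python's statsmodels, where crossed random intercepts can be expressed as variance components within a single all-encompassing group; the covariate is placed at the team level, where ignoring cross-classification is reported to underestimate standard errors. All names and sizes are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_pat, n_team, n_obs = 50, 10, 500
df = pd.DataFrame({
    "patient": rng.integers(0, n_pat, n_obs),
    "team": rng.integers(0, n_team, n_obs),
})
x_team = rng.normal(size=n_team)             # team-level covariate
u_pat = rng.normal(0, 1.0, n_pat)            # patient random intercepts
u_team = rng.normal(0, 1.0, n_team)          # team random intercepts
df["x"] = x_team[df["team"]]
df["y"] = (0.5 * df["x"] + u_pat[df["patient"]]
           + u_team[df["team"]] + rng.normal(0, 1.0, n_obs))

# Standard linear regression: ignores both clustering structures.
ols = smf.ols("y ~ x", data=df).fit()

# Cross-classified multilevel model: crossed random intercepts for
# patient and team, written as variance components inside one group.
df["all"] = 1
ccmm = sm.MixedLM.from_formula(
    "y ~ x", data=df, groups="all", re_formula="0",
    vc_formula={"patient": "0 + C(patient)", "team": "0 + C(team)"},
).fit()

# Point estimates are similar, but OLS reports a (too) small standard
# error for the team-level covariate, i.e. invalidly narrow intervals.
print(f"OLS:  beta_x={ols.params['x']:.3f}  SE={ols.bse['x']:.3f}")
print(f"CCMM: beta_x={ccmm.params['x']:.3f}  SE={ccmm.bse['x']:.3f}")
```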
We conducted a descriptive study among first-year engineering students at the Anton de Kom University of Suriname. We analyzed students’ errors involving necessary prior knowledge on a Calculus A exam. We found that the stage of the solution at which prior knowledge is required affects how consequential that knowledge is, and that many errors concerned basic algebra and trigonometry concepts and skills. We concluded that even though the required prior knowledge is basic algebra and trigonometry, the stage of the solution at which it is needed is of great importance.
Developing a framework that integrates advanced language models into the qualitative research process.

Qualitative research, vital for understanding complex phenomena, is often limited by labour-intensive data collection, transcription, and analysis processes. This hinders scalability, accessibility, and efficiency in both academic and industry contexts. As a result, insights are often delayed or incomplete, impacting decision-making, policy development, and innovation. The lack of tools to enhance accuracy and reduce human error exacerbates these challenges, particularly for projects requiring large datasets or quick iterations. Addressing these inefficiencies through AI-driven solutions like AIDA can empower researchers, enhance outcomes, and make qualitative research more inclusive, impactful, and efficient.

The AIDA project enhances qualitative research by integrating AI technologies to streamline transcription, coding, and analysis processes. This innovation enables researchers to analyse larger datasets with greater efficiency and accuracy, providing faster and more comprehensive insights. By reducing manual effort and human error, AIDA empowers organisations to make informed decisions and implement evidence-based policies more effectively. Its scalability supports diverse societal and industry applications, from healthcare to market research, fostering innovation and addressing complex challenges. Ultimately, AIDA contributes to improving research quality, accessibility, and societal relevance, driving advancements across multiple sectors.
Today, embedded devices such as banking and transportation cards, car keys, and mobile phones use cryptographic techniques to protect personal information and communication. Such devices are increasingly becoming the targets of attacks that try to capture the underlying secret information, e.g., cryptographic keys. Attacks targeting not the cryptographic algorithm but its implementation are especially devastating; the best-known examples are so-called side-channel and fault injection attacks. Such attacks, often jointly referred to as physical (implementation) attacks, are difficult to preclude, and once the key (or other data) is recovered, the device is useless. To mitigate such attacks, security evaluators use the same techniques as attackers and look for possible weaknesses in order to “fix” them before deployment. Unfortunately, the attackers’ resourcefulness on the one hand, and the short amount of time security evaluators usually have (plus the human error factor) on the other, make this an unfair race. Consequently, researchers are looking into ways of making security evaluations more reliable and faster. To that end, machine learning techniques have proved to be a viable candidate, although the challenge is far from solved.

Our project aims at developing automatic frameworks able to assess various potential side-channel and fault injection threats coming from diverse sources. Such systems will give security evaluators, and above all companies producing chips for security applications, an option to find potential weaknesses early and to assess the trade-off between making the product more secure and making it more implementation-friendly. To this end, we plan to use machine learning techniques coupled with novel techniques not previously explored for side-channel and fault analysis. In addition, we will design new techniques specifically tailored to improve the performance of this evaluation process. Our research fills the gap between what is known in academia about physical attacks and what industry needs to prevent such attacks. Once our frameworks become operational, they could also be a useful tool for mitigating other types of threats, such as ransomware or rootkits.
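To give a flavor of the machine-learning side of such evaluations, the sketch below implements a toy profiled side-channel attack in Python: a classifier is trained on synthetic power traces to predict the Hamming weight of an S-box output, and the key byte is then recovered by accumulating log-probabilities over key guesses. Everything here is assumed and simplified (a random permutation stands in for the AES S-box, and a single trace sample carries the leakage); it is a sketch of the general technique, not the project's framework.

```python
# Toy profiled side-channel attack on synthetic traces (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
SBOX = rng.permutation(256)                  # stand-in for the AES S-box

def hw(x):
    """Hamming weight of each byte in an array."""
    bits = np.unpackbits(np.atleast_1d(x).astype(np.uint8)[:, None], axis=1)
    return bits.sum(axis=1)

def traces(pt, key, n_samples=50, noise=1.0):
    """Synthetic traces: sample 25 leaks HW(SBOX[pt ^ key]), rest is noise."""
    t = rng.normal(0, noise, (len(pt), n_samples))
    t[:, 25] += hw(SBOX[pt ^ key])
    return t

true_key = 0x3C

# Profiling phase: known key, label each trace with the intermediate's HW.
pt_prof = rng.integers(0, 256, 5000)         # enough traces that all HW
X_prof = traces(pt_prof, true_key)           # classes 0..8 appear
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_prof, hw(SBOX[pt_prof ^ true_key]))

# Attack phase: unknown key; score each guess by accumulated log-probability
# of the HW predicted under that guess.
pt_atk = rng.integers(0, 256, 200)
probs = clf.predict_proba(traces(pt_atk, true_key))
scores = np.zeros(256)
for guess in range(256):
    hw_guess = hw(SBOX[pt_atk ^ guess])
    col = np.searchsorted(clf.classes_, hw_guess)   # HW class -> column
    scores[guess] = np.log(probs[np.arange(len(pt_atk)), col] + 1e-12).sum()

print(f"recovered key byte: {scores.argmax():#04x} (true: {true_key:#04x})")
```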