A construction method is given for all factors that satisfy the assumptions of the model for factor analysis, including partially determined factors where certain error variances are zero. Various criteria for the seriousness of factor indeterminacy are related to one another. It is shown that B. F. Green's (1976) conjecture holds: for a linear factor predictor, the mean squared error of prediction is constant over all possible factors. A simple and general geometric interpretation of factor indeterminacy is given on the basis of the distance between multiple factors. It is illustrated that variable elimination can have a large effect on the seriousness of factor indeterminacy. A simulation study reveals that if the mean squared error of factor prediction equals .5, then two thirds of the persons are "correctly" selected by the best linear factor predictor.
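As a rough illustration of the simulation result in the last sentence, the following is a minimal sketch (not the authors' code) of how such a selection accuracy can be estimated. It assumes a standardized, normally distributed factor, a best linear predictor whose squared correlation with the factor equals 1 minus the mean squared error, and a top-50% selection ratio; the selection ratio is an assumption, not a detail stated above.

```python
import numpy as np

# Minimal sketch, not the authors' code: estimate how often the best linear
# factor predictor selects the "right" persons when the mean squared error of
# factor prediction is .5. Assumes a standardized normal factor and a top-50%
# selection ratio; both are illustrative assumptions.
rng = np.random.default_rng(0)

n = 200_000
mse = 0.5                         # mean squared error of factor prediction
rho = np.sqrt(1.0 - mse)          # implied correlation between factor and predictor

f = rng.standard_normal(n)                                         # true factor scores
f_hat = rho * f + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)   # best linear predictor

k = n // 2                                      # select the top half (assumption)
selected_by_factor = np.argsort(f)[-k:]         # who "should" be selected
selected_by_predictor = np.argsort(f_hat)[-k:]  # who the predictor selects

correct = np.intersect1d(selected_by_factor, selected_by_predictor).size / k
print(f"fraction of selected persons that are 'correctly' selected: {correct:.2f}")
```

The resulting fraction depends on the assumed selection ratio and distribution, so this sketch only illustrates the setup, not the specific two-thirds figure reported above.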
Rationale: Diagnosis of sarcopenia in older adults is essential for early treatment in clinical practice. Bio-electrical impedance analysis (BIA) may be a valid means to assess appendicular lean mass (ALM) in older adults, but limited evidence is available. Therefore, we aim to evaluate the validity of BIA to assess ALM in older adults.

Methods: In 215 community-dwelling older adults (age ≥ 55 years), ALM was measured by BIA (Tanita MC-780; 8-point) and compared with dual-energy X-ray absorptiometry (DXA, Hologic Discovery A) as reference. Validity for assessing absolute values of ALM was evaluated by: 1) bias (mean difference), 2) percentage of accurate predictions (within 5% of DXA values), 3) individual error (root mean squared error (RMSE), mean absolute deviation), 4) limits of agreement (Bland-Altman analysis). For diagnosis of low ALM, the lowest quintile of ALM by DXA was used (below 21.4 kg for males and 15.4 kg for females). Sensitivity and specificity of detecting low ALM by BIA were assessed.

Results: Mean age of the subjects was 71.9 ± 6.4 years, with a BMI of 25.8 ± 4.2 kg/m², and 70% were females. BIA slightly underestimated ALM compared to DXA, with a mean bias of -0.6 ± 0.2 kg. The percentage of accurate predictions was 54%, with an RMSE of 1.6 kg and limits of agreement of -3.0 to +1.8 kg. Sensitivity was 79%, indicating that 79% of subjects with low ALM according to DXA also had low ALM with the BIA. Specificity was 90%, indicating that 90% of subjects with 'no low' ALM according to DXA also had 'no low' ALM with the BIA.

Conclusions: This comparison showed a poor validity of BIA to assess absolute values of ALM, but a reasonable sensitivity and specificity to diagnose a low level of ALM in community-dwelling older adults in clinical practice.
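For concreteness, here is a minimal sketch (not the study's analysis code) of the validity metrics listed under Methods, assuming paired ALM values in kilograms and a boolean male/female indicator; the function and variable names are hypothetical.

```python
import numpy as np

# Minimal sketch, not the study's analysis code: the validity metrics named in
# the Methods, computed from paired ALM values (kg). `alm_bia`, `alm_dxa`, and
# `male` (boolean) are hypothetical per-participant arrays.
def validity_metrics(alm_bia, alm_dxa, male, cutoff_m=21.4, cutoff_f=15.4):
    diff = alm_bia - alm_dxa
    bias = diff.mean()                                  # 1) bias (mean difference, kg)
    accurate = np.mean(np.abs(diff) <= 0.05 * alm_dxa)  # 2) within 5% of DXA value
    rmse = np.sqrt(np.mean(diff ** 2))                  # 3) root mean squared error
    mad = np.mean(np.abs(diff))                         #    mean absolute deviation
    loa = (bias - 1.96 * diff.std(ddof=1),              # 4) Bland-Altman limits of agreement
           bias + 1.96 * diff.std(ddof=1))

    cutoff = np.where(male, cutoff_m, cutoff_f)         # sex-specific low-ALM cutoffs (kg)
    low_dxa = alm_dxa < cutoff                          # reference diagnosis of low ALM
    low_bia = alm_bia < cutoff
    sensitivity = np.mean(low_bia[low_dxa])             # low by DXA also low by BIA
    specificity = np.mean(~low_bia[~low_dxa])           # 'no low' by DXA also 'no low' by BIA
    return dict(bias=bias, pct_accurate=accurate, rmse=rmse, mad=mad,
                loa=loa, sensitivity=sensitivity, specificity=specificity)
```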
In many biomechanical motion studies, kinematic parameters are estimated from position measurements on a number of landmarks. In the present investigation, dummy motion experiments are performed in order to study the error dependence of kinematic parameters on geometric factors (number of markers, isotropic vs anisotropic landmark distributions, landmark distribution size), on kinematic factors (rotation step magnitude, the presence of translational displacements, the distance of the landmarks' mean position to the rotation axis), and on anisotropically distributed measurement errors. The experimental results are compared with theoretical predictions of a previous error analysis assuming isotropic conditions for the measurement errors and for the spatial landmark distribution. In general, the experimental findings agree with the predictions of the error model. The kinematic parameters such as translations and rotations are well-determined by the model. In the helical motion description, the same applies for the finite rotation angle about and the finite shift along the helical axis. However, the direction and position of the helical axis are ill-determined. An anisotropic landmark distribution with relatively few markers located in the direction of the rotation axis will even aggravate the ill-posed nature of the finite helical axis estimation.
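As an illustration of the kind of estimation being evaluated, here is a minimal sketch under standard least-squares rigid-body assumptions (an SVD-based fit, not necessarily the paper's exact procedure) of recovering the rotation, translation, and finite helical axis parameters from two sets of landmark coordinates; all names are illustrative.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's procedure): least-squares
# rigid-body fit of a rotation R and translation v from landmark positions
# before (P) and after (Q) a motion step, followed by the finite helical
# axis parameters (rotation angle, axis direction, shift, axis position).
def rigid_fit(P, Q):
    """P, Q: (m, 3) arrays of corresponding landmark coordinates."""
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p0).T @ (Q - q0)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    v = q0 - R @ p0
    return R, v

def helical_parameters(R, v):
    """Finite helical axis parameters of the motion x' = R x + v."""
    w = 0.5 * np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    sin_t, cos_t = np.linalg.norm(w), 0.5 * (np.trace(R) - 1.0)
    theta = np.arctan2(sin_t, cos_t)   # finite rotation angle about the axis
    n = w / sin_t                      # axis direction (ill-conditioned for small theta)
    shift = n @ v                      # finite shift along the axis
    s = -0.5 * np.cross(n, np.cross(n, v)) + sin_t / (2.0 * (1.0 - cos_t)) * np.cross(n, v)
    return theta, n, shift, s          # s: a point on the helical axis
```

The divisions by sin θ and by (1 - cos θ) in the axis computation also make plain why the direction and position of the helical axis become ill-determined for small rotation steps, consistent with the findings above.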
Developing a framework that integrates Advanced Language Models into the qualitative research process.

Qualitative research, vital for understanding complex phenomena, is often limited by labour-intensive data collection, transcription, and analysis processes. This hinders scalability, accessibility, and efficiency in both academic and industry contexts. As a result, insights are often delayed or incomplete, impacting decision-making, policy development, and innovation. The lack of tools to enhance accuracy and reduce human error exacerbates these challenges, particularly for projects requiring large datasets or quick iterations. Addressing these inefficiencies through AI-driven solutions like AIDA can empower researchers, enhance outcomes, and make qualitative research more inclusive, impactful, and efficient.

The AIDA project enhances qualitative research by integrating AI technologies to streamline transcription, coding, and analysis processes. This innovation enables researchers to analyse larger datasets with greater efficiency and accuracy, providing faster and more comprehensive insights. By reducing manual effort and human error, AIDA empowers organisations to make informed decisions and implement evidence-based policies more effectively. Its scalability supports diverse societal and industry applications, from healthcare to market research, fostering innovation and addressing complex challenges. Ultimately, AIDA contributes to improving research quality, accessibility, and societal relevance, driving advancements across multiple sectors.
Today, embedded devices such as banking/transportation cards, car keys, and mobile phones use cryptographic techniques to protect personal information and communication. Such devices are increasingly becoming the targets of attacks trying to capture the underlying secret information, e.g., cryptographic keys. Attacks that target not the cryptographic algorithm but its implementation are especially devastating; the best-known examples are so-called side-channel and fault injection attacks. Such attacks, often jointly referred to as physical (implementation) attacks, are difficult to preclude, and if the key (or other secret data) is recovered, the device is rendered useless. To mitigate such attacks, security evaluators use the same techniques as attackers and look for possible weaknesses in order to "fix" them before deployment. Unfortunately, the attackers' resourcefulness on the one hand, and the usually short amount of time available to security evaluators (as well as the human error factor) on the other, make this an unfair race. Consequently, researchers are looking into possible ways of making security evaluations more reliable and faster. To that end, machine learning techniques have proven to be a viable candidate, although the challenge is far from solved. Our project aims at the development of automatic frameworks able to assess various potential side-channel and fault injection threats coming from diverse sources. Such systems will give security evaluators, and above all companies producing chips for security applications, the option to find potential weaknesses early and to assess the trade-off between making the product more secure and making the product more implementation-friendly. To this end, we plan to use machine learning techniques coupled with novel techniques not explored before for side-channel and fault analysis. In addition, we will design new techniques specially tailored to improve the performance of this evaluation process. Our research fills the gap between what is known in academia on physical attacks and what is needed in industry to prevent such attacks. In the end, once our frameworks become operational, they could also be a useful tool for mitigating other types of threats such as ransomware or rootkits.
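As an illustration only, and not the project's framework, the following is a minimal sketch of a profiled, machine-learning-based side-channel evaluation of a single key byte: a classifier is trained on traces acquired with a known key to predict a leakage label, and key-byte guesses on the target are then ranked by accumulated log-likelihood. The simplified intermediate value (plaintext byte XOR key byte under a Hamming-weight leakage model) and all names are assumptions; a real evaluation would typically target, e.g., an AES S-box output.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Minimal sketch of a profiled ML-based side-channel evaluation (illustrative
# only). Inputs are hypothetical: power traces as 2-D arrays (one row per
# trace), plaintext bytes as uint8 arrays, and the known profiling key byte.

def hamming_weight(x):
    """Hamming weight of each byte in x."""
    return np.unpackbits(np.asarray(x, dtype=np.uint8)[..., None], axis=-1).sum(axis=-1)

def rank_key_guesses(profile_traces, profile_pt, profile_key_byte,
                     attack_traces, attack_pt):
    # Profiling phase: learn trace -> Hamming weight of the intermediate value
    labels = hamming_weight(profile_pt ^ profile_key_byte)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(profile_traces, labels)

    # Attack phase: accumulate log-likelihoods for every possible key byte
    probs = clf.predict_proba(attack_traces)          # (n_traces, n_classes)
    col = {c: i for i, c in enumerate(clf.classes_)}  # class value -> column index
    scores = np.zeros(256)
    for guess in range(256):
        lab = hamming_weight(attack_pt ^ guess)
        idx = np.array([col[v] for v in lab])
        scores[guess] = np.log(probs[np.arange(len(lab)), idx] + 1e-12).sum()
    return np.argsort(scores)[::-1]                   # most likely key byte first
```

In such an evaluation, the rank of the true key byte in the returned ordering (and how quickly it reaches rank 0 as traces accumulate) is a common way to quantify how exploitable the leakage is.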