The aim of the present study was to investigate whether anterior cruciate ligament (ACL) injury risk factors identified in the laboratory would reflect at-risk patterns in football-specific field data. Twenty-four female footballers (14.9 ± 0.9 years) performed unanticipated cutting maneuvers in a laboratory setting and on the football pitch during football-specific exercises (F-EX) and games (F-GAME). Knee joint moments were collected in the laboratory and grouped using hierarchical agglomerative clustering. The clusters were then used to investigate the kinematics collected on the field through wearable sensors. Three clusters emerged: Cluster 1 presented the lowest knee moments; Cluster 2 presented high knee extension but low knee abduction and rotation moments; Cluster 3 presented the highest knee abduction, extension, and external rotation moments. In F-EX, greater knee abduction angles were found in Clusters 2 and 3 compared to Cluster 1 (p = 0.007). Cluster 2 showed the lowest knee and hip flexion angles (p < 0.013). Cluster 3 showed the greatest hip external rotation angles (p = 0.006). In F-GAME, Cluster 3 presented the greatest knee external rotation and lowest knee flexion angles (p = 0.003). Clinically relevant differences towards ACL injury identified in the laboratory were reflected only in part in at-risk patterns when cutting on the field: on the field, low-risk players exhibited kinematic patterns similar to those of the high-risk players. Therefore, in-lab injury risk screening may lack ecological validity.
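The grouping step described above can be illustrated with a minimal sketch. The data below are synthetic stand-ins (not the study's measurements): hypothetical per-player peak knee moments with three columns for extension, abduction, and external rotation, clustered with Ward-linkage hierarchical agglomerative clustering via SciPy and cut into three clusters.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic per-player peak knee moments (arbitrary units):
# columns = extension, abduction, external rotation.
rng = np.random.default_rng(0)
moments = np.vstack([
    rng.normal([1.0, 0.3, 0.2], 0.05, (8, 3)),  # low moments overall
    rng.normal([2.5, 0.4, 0.3], 0.05, (8, 3)),  # high extension only
    rng.normal([2.6, 1.2, 0.9], 0.05, (8, 3)),  # high across all moments
])

# Standardize each moment, build a Ward-linkage tree,
# and cut it into three clusters.
z = (moments - moments.mean(axis=0)) / moments.std(axis=0)
labels = fcluster(linkage(z, method="ward"), t=3, criterion="maxclust")
print(sorted(np.bincount(labels)[1:].tolist()))  # → [8, 8, 8]
```

With well-separated synthetic groups the cut recovers the three intended clusters; on real joint-moment data the number of clusters would be chosen from the dendrogram rather than fixed in advance.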
Graphs are ubiquitous. Many graphs, including histograms, bar charts, and stacked dotplots, have proven tricky to interpret. Students' gaze data can indicate their interpretation strategies on these graphs. We therefore explore the question: In what way can machine learning quantify differences in students' gaze data when interpreting two near-identical histograms with graph tasks in between? Our work provides evidence that using machine learning in conjunction with gaze data can provide insight into how students analyze and interpret graphs. This approach also sheds light on the ways in which students may better understand a graph after first being presented with other graph types, including dotplots. We conclude with a model that can accurately differentiate between the first and second time a student solved near-identical histogram tasks.
Are professionals better than students at assessing the evidential strength of different types of forensic conclusions? In an online questionnaire, 96 crime investigation and law students and 269 crime investigation and legal professionals assessed three fingerprint examination reports. All reports were identical except for the conclusion, which was formulated as a categorical (CAT), verbal likelihood ratio (VLR), or numerical likelihood ratio (NLR) conclusion of high or low evidential strength. The results showed no significant difference between the students and the professionals in their assessment of the conclusions. Both groups overestimated the strength of the strong CAT conclusion compared to the other conclusion types and underestimated the strength of the weak CAT conclusion. Background (legal vs. crime investigation), however, did have a significant effect on understanding: whereas the legal professionals performed better than the crime investigators, the legal students performed worse than the crime investigation students.