Graphs are ubiquitous. Many graphs, including histograms, bar charts, and stacked dotplots, have proven tricky to interpret. Students’ gaze data can reveal the strategies they use to interpret these graphs. We therefore explore the question: in what way can machine learning quantify differences in students’ gaze data when they interpret two near-identical histograms, with other graph tasks in between? Our work provides evidence that using machine learning in conjunction with gaze data can provide insight into how students analyze and interpret graphs. This approach also sheds light on the ways in which students may better understand a graph after first being presented with other graph types, including dotplots. We conclude with a model that can accurately differentiate between the first and second time a student solved near-identical histogram tasks.
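As an illustration only (not the study’s actual pipeline), the sketch below shows how a classifier could be trained on gaze features to distinguish a student’s first from second attempt at a histogram task; the feature names, the random-forest model, and the synthetic data are all assumptions made for the example.

```python
# Hypothetical sketch: classify first vs. second histogram attempt from gaze features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder gaze-feature matrix: one row per attempt, columns such as fixation
# count, mean fixation duration, and dwell time on axis/bar regions (illustrative only).
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)  # 0 = first attempt, 1 = second attempt

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```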
This paper proposes an innovative method for factor analyzing data that potentially contains individual response bias. Past methods include the use of “ipsative” data or, relatedly, “ipsatized” data. Unfortunately, factor analysis, the main method used for analyzing the dimensionality of data, cannot be applied to ipsative data. In contrast, normalization of the data, an alternative method to filter out response bias, is not hampered by the technical statistical issues inherent in applying multivariate techniques to ipsative data. Using high-quality data from a survey in Nepal that makes use of – among others – the High-Performance Organizations (HPO) framework, this paper shows that the traditional approach of directly applying Confirmatory Factor Analysis (CFA), starting from an existing model or theory, is inferior to our approach. Even applying Exploratory Factor Analysis (EFA) to the raw (non-normalized) data before using CFA is unable to detect the optimal dimensionality, or structure, in the data. A better structure can be obtained by performing EFA on normalized data that corrects for response bias in the raw data. This paper convincingly shows that the newly identified structure is superior to the original structure suggested by the HPO framework. Applying a CFA with the newly detected structure to the raw data gives excellent goodness-of-fit statistics, with more items retained and no need for forced methods to improve the model fit. The findings suggest that existing models, and questionnaires based on these models, are not necessarily as valid and reliable as empirical studies using traditional analyses seem to suggest. When adopting existing instruments, researchers are advised to critically check their validity and reliability – especially for instruments vulnerable to response bias – and to apply the procedures laid out in this paper, both to enhance the quality of their own research and to inform or warn future researchers who consider using the same instruments about their potential shortcomings.
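As a hedged illustration of the normalization-then-EFA step described above (not the paper’s exact procedure), the sketch below standardizes each respondent’s answers to remove individual response bias and then runs an exploratory factor analysis on the normalized data; the per-respondent z-score normalization, the number of factors, and the synthetic Likert-style data are assumptions. A confirmatory factor analysis with the detected structure would then be fitted to the raw data using a dedicated SEM package.

```python
# Hedged sketch: per-respondent normalization followed by exploratory factor analysis.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
raw = rng.integers(1, 8, size=(300, 12)).astype(float)  # synthetic: 300 respondents, 12 Likert items

# Remove each respondent's mean level and scale usage
# (one common way to correct for response bias; may differ from the paper's procedure).
row_mean = raw.mean(axis=1, keepdims=True)
row_std = raw.std(axis=1, keepdims=True)
row_std[row_std == 0] = 1.0  # guard against respondents who give a constant answer
normalized = (raw - row_mean) / row_std

# EFA on the normalized data; the number of factors is an assumption for the example.
efa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
efa.fit(normalized)
print(efa.components_.round(2))  # factor loadings: inspect to group items into factors
```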
Are professionals better than students at assessing the evidential strength of different types of forensic conclusions? In an online questionnaire, 96 crime investigation and law students and 269 crime investigation and legal professionals assessed three fingerprint examination reports. All reports were similar except for the conclusion, which was stated as a categorical (CAT), verbal likelihood ratio (VLR), or numerical likelihood ratio (NLR) conclusion with high or low evidential strength. The results showed no significant difference between the students and the professionals in their assessment of the conclusions. Both groups overestimated the strength of the strong CAT conclusion compared to the other conclusion types and underestimated the strength of the weak CAT conclusion. Their background (legal vs. crime investigation) did, however, have a significant effect on their understanding: whereas the legal professionals performed better than the crime investigators, the legal students performed worse than the crime investigation students.