Academic cheating poses a significant challenge to conducting fair online assessments. One common form is collusion, where students unethically share answers during the assessment. While several researchers have proposed solutions, it is often unclear which of the different types of collusion they target. For collusion detection, researchers have typically applied statistical techniques to the basic attributes collected by assessment platforms. Only a few works have used machine learning, and those consider just two or three attributes; such limited feature sets reduce accuracy and increase the risk of false accusations. In this work, we focus on In-Parallel Collusion, where students simultaneously work together on an assessment. For data collection, we adapt a quiz tool to capture clickstream data at a finer level of granularity. We use feature engineering to derive seven features and build a machine learning model for collusion detection. The results show that: 1) Random Forest achieves the best accuracy (98.8%), and 2) in contrast to the smaller feature sets used in earlier works, the full feature set provides the best result, showing that considering multiple facets of similarity enhances model accuracy. The findings provide platform designers and teachers with insights for optimizing quiz platforms and creating cheat-proof assessments.
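A minimal sketch of the classification step described above, assuming scikit-learn. The abstract does not name the seven engineered features, so the pairwise-similarity feature names and the synthetic data below are illustrative assumptions rather than the authors' actual pipeline:

# Minimal sketch: Random Forest over engineered pairwise similarity features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row represents a pair of students; each column is one engineered
# similarity feature derived from clickstream data. The names below are
# hypothetical; the paper's seven features are not listed in the abstract.
feature_names = [
    "answer_similarity", "wrong_answer_overlap", "option_change_similarity",
    "submission_time_gap", "navigation_pattern_similarity",
    "per_question_time_correlation", "revisit_pattern_similarity",
]
X = rng.random((500, len(feature_names)))   # placeholder feature matrix
y = rng.integers(0, 2, size=500)            # 1 = colluding pair, 0 = independent

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Feature importances indicate which facets of similarity drive detection.
for name, importance in zip(feature_names, clf.feature_importances_):
    print(f"{name}: {importance:.3f}")

Inspecting the feature importances is one way to see the abstract's central claim in practice: if several facets of similarity carry non-trivial weight, dropping features can only degrade the classifier.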
DOCUMENT
The security of online assessments is a major concern due to widespread cheating. One common form of cheating is impersonation, where students invite unauthorized persons to take assessments on their behalf. Several techniques exist to handle impersonation. Some researchers recommend the use of an integrity policy, but communicating the policy effectively to students is a challenge. Others propose authentication methods such as passwords and fingerprints; these offer initial authentication but are vulnerable thereafter. Face recognition offers post-login authentication but requires additional hardware. Keystroke Dynamics (KD) has been used to provide post-login authentication without additional hardware, but its use has been limited to subjective assessments. In this work, we address impersonation in assessments with Multiple Choice Questions (MCQs). Our approach combines two key strategies: reinforcement of an integrity policy for prevention, and keystroke-based random authentication for detection of impersonation. To the best of our knowledge, this is the first attempt to use keystroke dynamics for post-login authentication in the context of MCQs. We extend an online quiz tool to collect data suited to our needs and use feature engineering to address the challenge of high-dimensional keystroke datasets. Using machine learning classifiers, we identify the best-performing model for authenticating students. The results indicate that the highest accuracy (83%) is achieved by the Isolation Forest classifier. Furthermore, to validate the results, the approach is applied to the Carnegie Mellon University (CMU) benchmark dataset, achieving an improved accuracy of 94%. Although we also evaluated mouse dynamics for authentication, its subpar performance led us to exclude it from our approach.
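As a hedged illustration of the detection step, the sketch below trains a one-class Isolation Forest on a single student's keystroke timing vectors and flags outlying samples as possible impersonation. The feature layout (per-key hold times and inter-key flight times) follows common keystroke-dynamics practice, and all numbers are synthetic assumptions, not values from the paper or the CMU dataset:

# Sketch: one-class keystroke authentication with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Placeholder data: rows are typing samples; columns are assumed per-key
# hold times and inter-key flight times in milliseconds.
genuine = rng.normal(loc=120, scale=15, size=(200, 20))   # enrolled student
impostor = rng.normal(loc=160, scale=30, size=(50, 20))   # someone else

model = IsolationForest(n_estimators=100, contamination=0.05, random_state=42)
model.fit(genuine)   # train only on the genuine student's samples

# predict() returns +1 for inliers (accepted as the student) and -1 for
# outliers (flagged as possible impersonation).
genuine_accepted = (model.predict(genuine) == 1).mean()
impostor_flagged = (model.predict(impostor) == -1).mean()
print(f"genuine accepted: {genuine_accepted:.0%}, impostors flagged: {impostor_flagged:.0%}")

In a deployment along the lines the abstract describes, one such model would be enrolled per student, and the "random authentication" strategy would score keystroke samples collected at unpredictable moments during the quiz.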
DOCUMENT
Digital innovation in education, as in any other sector, is not only about developing and implementing novel ideas, but also about having those ideas effectively used, widely accepted, and adopted, so that many students can benefit from innovations that improve education. Effectiveness, transferability and scalability cannot be added afterwards; they must be integrated from the start into the design, development and implementation processes, as proposed in the movement towards evidence-informed practice (EIP). Yet the impact an educational innovation has on the values of various stakeholders is often overlooked. Value Sensitive Design (VSD) is an approach to integrating values into technological design. In this paper we discuss how EIP and VSD may be combined into an integrated approach to digital innovation in education, which we call value-informed innovation. This approach considers not only educational effectiveness, but also the innovation's impact on human values and its scalability and transferability to other contexts. We illustrate the integrated approach with an example case of an educational innovation involving digital peer feedback.
DOCUMENT