Recent advances in information technology have led to the proliferation of online education worldwide. Much like traditional classroom education, assessments are an integral component of online education. Evaluating learning outcomes in online assessments is challenging, mainly because of academic dishonesty among students, which results in unfair evaluations and raises questions about the credibility of online assessments. Several types of dishonesty occur in online assessments, including exploiting the Internet to find solutions (Internet-as-a-Forbidden-Aid), illicit collaboration among students (Collusion), and third parties taking assessments on behalf of the genuine student (Impersonation). Several researchers have proposed solutions for addressing dishonesty in online assessments, including strategies for designing cheat-resistant assessments, implementing proctoring, and formulating integrity policies. While these methods can be effective, their implementation is often resource-intensive and laborious. Other studies propose the use of Machine Learning (ML) for automated dishonesty detection. However, these approaches often lack clarity in selecting appropriate features and classifiers, which affects the quality of the results, and the scarcity of training data further leads to poorly tuned models. There is therefore a need for robust ML models that detect different types of dishonesty in online assessments. In this thesis, we focus on Multiple Choice Question (MCQ)-based assessments. We consider three types of dishonesty prevalent in MCQ-based assessments: (1) Internet-as-a-Forbidden-Aid, (2) Collusion, and (3) Impersonation. We develop an individual ML model to detect students involved in each type of dishonesty during the assessment. The results also help in understanding students' test-taking patterns and inform recommendations for cheat-proof assessment design. Finally, we present an Academic Dishonesty Mitigation Plan (ADMP) that addresses the diverse forms of academic dishonesty and provides integrity solutions for mitigating dishonesty in online assessments.
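To make the overall design concrete, the sketch below shows one way the three per-type detectors described in this thesis could be combined into a single screening step for an assessment session. Everything here is illustrative: the attribute names, label conventions, and report structure are assumptions for the sketch, not the thesis's actual interface, and the thesis trains one model per dishonesty type rather than a unified classifier.

```python
# Illustrative sketch only: aggregating three separately trained detectors
# (one per dishonesty type) over a single assessment session.
# The session attributes and label conventions below are hypothetical.
from dataclasses import dataclass


@dataclass
class DishonestyReport:
    internet_aid: bool
    collusion: bool
    impersonation: bool


def screen_session(session, aid_model, collusion_model, impersonation_model):
    """Run each trained detector on its own view of the session data and
    aggregate the flags for the instructor.

    Assumptions: the binary classifiers label dishonest behaviour as 1,
    and the keystroke-based detector (an anomaly model such as Isolation
    Forest) labels impostor samples as -1.
    """
    return DishonestyReport(
        internet_aid=aid_model.predict([session.browsing_features])[0] == 1,
        collusion=collusion_model.predict([session.pairwise_features])[0] == 1,
        impersonation=impersonation_model.predict([session.keystroke_features])[0] == -1,
    )
```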
Academic cheating poses a significant challenge to conducting fair online assessments. One common form is collusion, where students unethically share answers during the assessment. While several researchers have proposed solutions, there is a lack of clarity about which of the different types of collusion they target. Researchers have used statistical techniques on the basic attributes collected by assessment platforms to detect collusion. Only a few works have used machine learning, and they consider only two or three attributes; this limited feature set reduces accuracy and increases the risk of false accusations. In this work, we focus on In-Parallel Collusion, where students work on an assessment together at the same time. For data collection, we adapt a quiz tool to capture clickstream data at a finer level of granularity. We use feature engineering to derive seven features and build a machine learning model for collusion detection. The results show that: 1) Random Forest achieves the best accuracy (98.8%), and 2) in contrast to the smaller feature sets used in earlier works, the full feature set gives the best result, showing that considering multiple facets of similarity enhances model accuracy. The findings provide platform designers and teachers with insights into optimizing quiz platforms and creating cheat-proof assessments.
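As a rough illustration of the pipeline described above, the following sketch trains a Random Forest on pairwise similarity features derived from clickstream data. The seven feature names are hypothetical placeholders; the abstract states only that seven features were engineered, not which ones, and the labelled pairs are assumed to come from a controlled data-collection setting.

```python
# Minimal sketch of a Random Forest collusion detector over student pairs.
# Feature names are placeholders; the study's actual seven features are not
# listed in the abstract.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

FEATURES = [
    "answer_similarity",        # fraction of identical option choices per pair
    "wrong_answer_overlap",     # agreement on incorrect options
    "response_time_corr",       # correlation of per-question answering times
    "answer_order_similarity",  # similarity of question-visiting order
    "revision_pattern_sim",     # similarity of answer-change (click) sequences
    "submission_gap",           # difference between submission timestamps
    "session_proximity",        # coarse indicator of temporal/network proximity
]


def train_collusion_detector(pairs: pd.DataFrame) -> RandomForestClassifier:
    """pairs: one row per student pair with the feature columns above and a
    binary 'colluded' label."""
    X, y = pairs[FEATURES], pairs["colluded"]
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=42
    )
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, model.predict(X_te)))
    return model
```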
The security of online assessments is a major concern due to widespread cheating. One common form of cheating is impersonation, where students have unauthorized persons take assessments on their behalf. Several techniques exist to handle impersonation. Some researchers recommend the use of an integrity policy, but communicating the policy effectively to students is a challenge. Others propose authentication methods such as passwords and fingerprints; these offer initial authentication but are vulnerable thereafter. Face recognition offers post-login authentication but requires additional hardware. Keystroke Dynamics (KD) has been used to provide post-login authentication without additional hardware, but its use has been limited to subjective assessments. In this work, we address impersonation in assessments with Multiple Choice Questions (MCQs). Our approach combines two key strategies: reinforcement of the integrity policy for prevention, and keystroke-based random authentication for detection of impersonation. To the best of our knowledge, this is the first attempt to use keystroke dynamics for post-login authentication in the context of MCQs. We adapt an online quiz tool to collect the data suited to our needs and use feature engineering to address the challenge of high-dimensional keystroke datasets. Using machine learning classifiers, we identify the best-performing model for authenticating students. The results indicate that the highest accuracy (83%) is achieved by the Isolation Forest classifier. To validate the approach, we further apply it to the Carnegie Mellon University (CMU) benchmark dataset, achieving an improved accuracy of 94%. Although we also evaluated mouse dynamics for authentication, its subpar performance led us to exclude it from our approach.
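The sketch below illustrates how keystroke-based post-login authentication of the kind described above might be set up with an Isolation Forest. It assumes timing features (such as key hold times and inter-key latencies) have already been extracted per typing sample; the actual features, dimensionality reduction, and model parameters used in the study are not given in the abstract.

```python
# Minimal sketch of per-student keystroke authentication with Isolation Forest.
# Assumes each row of the input is a vector of keystroke timing features
# extracted from one typing sample of the genuine student.
import numpy as np
from sklearn.ensemble import IsolationForest


def train_student_profile(enrollment_samples: np.ndarray) -> IsolationForest:
    """enrollment_samples: shape (n_samples, n_timing_features), collected
    from the genuine student, e.g., during earlier logins or quizzes."""
    model = IsolationForest(
        n_estimators=100, contamination=0.05, random_state=42
    )
    model.fit(enrollment_samples)
    return model


def is_genuine(model: IsolationForest, test_sample: np.ndarray) -> bool:
    """predict() returns 1 for inliers (consistent with the profile) and
    -1 for anomalies, i.e., possible impostors."""
    return model.predict(test_sample.reshape(1, -1))[0] == 1
```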
The wide diffusion of the "Entrapped Suitors" story-type has often been observed: examples are found in a remarkable number of literatures, ranging from English, French and Greek in the West, to Persian, Arabic and Kashmiri in the East. However, a text of this type that is often overlooked is the Middle Dutch play Een Speel Van Drie Minners ("A Play of Three Suitors"). This is despite the fact that it represents a highly idiosyncratic variation on the story, as it replaces the central moral with something more scabrous. We offer here a comprehensive discussion of this singular text and its narrative form, with an English verse-translation appended.
The purpose of this paper is to reflect on the experiences of safety and security management students enrolled in an undergraduate course in the Netherlands, and to present quantitative data from an online survey that explored the factors contributing to students’ satisfaction with, and engagement in, online classes during the COVID-19 pandemic. The main findings suggest an interesting paradox of technology that is worth further exploration in future research. Firstly, students with higher self-perceived technological skill levels tend to reject online education more often, as they see substantial shortcomings in how classes are administered compared with the vast opportunities available for real innovation. Secondly, rather than democratising education and allowing for custom-made, individualised schedules that help less-privileged students, online education can also lead to education being displaced by income-generating activities altogether. Lastly, as much as technology allowed universities to continue education during the COVID-19 pandemic, the transition to an environment defined by its highly interactive and engaging potential may in fact be a net contributor to feelings of social isolation, digital educational inequality, and tension around the commercialisation of higher education.