The assessment of workplace learning by educators at the workplace is a complex and inherently social process, as the workplace is a participatory learning environment. We therefore propose viewing assessment as a process of judgment embedded in a community of practice and, to this end, use the philosophy of inferentialism to unravel the judgment process of workplace educators, treating it as an interrelated system of judgments, actions, and reasons. Focusing on the unfolding of this process, we applied a longitudinal holistic case study design. Results show that educators are engaged in a constant judgment process during which they use multiple, adaptive frames of reference when forming their judgments about students. They construct an overarching image of each student that develops throughout the placement, and their judgments about students go hand in hand with their actions aimed at fostering independent practice.
The aim of the present investigation was to evaluate the effect of visual feedback on rating voice quality severity and on the reliability of voice quality judgments by inexperienced listeners. For this purpose, two training programs were created, each lasting 2 hours. In total, 37 undergraduate speech–language therapy students participated in the study and were divided into a visual plus auditory-perceptual feedback group (V + AF), an auditory-perceptual feedback group (AF), and a control group with no feedback (NF). All listeners completed two rating sessions, judging overall severity labeled as grade (G), roughness (R), and breathiness (B). The judged voice samples consisted of concatenated continuous speech and sustained phonation. No significant differences in rater reliability changes from pretest to posttest were found between the three groups for any GRB parameter (all p > 0.05). A training effect was seen in the significant improvement of rater reliability for roughness within the NF and AF groups (all p < 0.05) and for breathiness within the V + AF group (p < 0.01). The rated severity of roughness changed significantly after training in the AF and V + AF groups (p < 0.01), and the rated severity of breathiness changed significantly after training in the V + AF group (p < 0.01). V + AF and AF training may thus have only a minimal influence on the reliability of voice quality judgments, but it significantly influenced the rated severity of the GRB parameters. Therefore, the use of both visual and auditory anchors during rating, as well as longer training sessions, may be required to draw a firm conclusion.
Purpose: The aim of this study is to measure the concurrent validity of the Athletic Skills Track (AST) by examining whether its outcome score correlates with experts' holistic judgments of movement quality. Method: Video recordings of children performing the AST were shown to physical education teachers, who independently gave a holistic rating of each child's movement quality. Results: Both the intra- and interrater reliability of the teachers' ratings were moderate to good. The holistic judgments of movement quality were significantly correlated with AST time, showing that higher ratings were associated with less time needed to complete the track. Hierarchical stepwise regression further indicated that, in addition to the holistic rating, age, but not gender, explained part of the variance in AST time. Conclusion: The findings show that the AST has good concurrent validity and provides a fast, indirect indication of movement quality.
Chatbots are being used at an increasing rate, for instance for simple Q&A conversations, flight reservations, online shopping, and news aggregation. However, users expect to be served as effectively and reliably as they were by human-based systems and are unforgiving once the system fails to understand them, engage them, or show them human empathy. This problem is more prominent when the technology is used in domains such as health care, where empathy and the ability to give emotional support are essential during interactions. Empathy, however, is a uniquely human skill, and conversational agents such as chatbots cannot yet express empathy in ways nuanced enough to account for its complex nature and quality. This project focuses on designing emotionally supportive conversational agents within the mental health domain. We take a user-centered co-creation approach and focus on the mental health problems of sexual assault victims. This group was chosen specifically because of the high rate of sexual assault incidents, their lifelong destructive effects on victims, and the fact that, although early intervention and treatment are necessary to prevent future mental health problems, these incidents largely go unreported due to the stigma attached to sexual assault. At the same time, research shows that people feel more comfortable talking to chatbots about intimate topics because they feel no fear of judgment. We think an emotionally supportive and empathic chatbot specifically designed to encourage self-disclosure among sexual assault victims could help those who remain silent for fear of negative evaluation and empower them to better process their experience and take the necessary steps towards treatment early on.
In this project, we explore how healthcare providers and the creative industry can collaborate to develop effective digital mental health interventions, particularly for survivors of sexual assault. Sexual assault victims face significant barriers to seeking professional help, including shame, self-blame, and fear of judgment. With over 100,000 cases reported annually in the Netherlands, the need for accessible, stigma-free support is urgent. Digital interventions, such as chatbots, offer a promising solution by providing a safe, confidential, and cost-effective space for victims to share their experiences before seeking professional care. However, existing commercial AI chatbots remain unsuitable for complex mental health support. While widely used for general health inquiries and basic therapy, they lack the human qualities essential for empathetic conversations. Additionally, training AI for this sensitive context is challenging due to limited caregiver-patient conversation data. A key concern raised by professionals worldwide is the risk of AI-driven chatbots being misused as therapy substitutes. Without proper safeguards, they may offer inappropriate responses, potentially harming users. This highlights the urgent need for strict design guidelines, robust safety measures, and comprehensive oversight in AI-based mental health solutions. To address these challenges, this project brings together experts from the healthcare and design fields, especially conversation designers, to explore the power of design in developing a trustworthy, user-centered chatbot experience tailored to survivors' needs. Through an iterative process of research, co-creation, prototyping, and evaluation, we aim to integrate safe and effective digital support into mental healthcare. Our overarching goal is to bridge the gap between digital healthcare and the creative sector, fostering long-term collaboration. By combining clinical expertise with design innovation, we seek to develop personalized tools that ethically and effectively support individuals with mental health problems.