There is emerging evidence that the performance of risk assessment instruments is weaker when they are used for clinical decision-making than for research purposes. For instance, agreement between evaluators has been found to be lower when risk assessments are conducted during routine practice. We examined the field interrater reliability of the Short-Term Assessment of Risk and Treatability: Adolescent Version (START:AV). Clinicians in a Dutch secure youth care facility completed START:AV assessments as part of the treatment routine. Consistent with this earlier work, interrater reliability of the items and total scores was lower than that reported in non-field studies. Nevertheless, moderate to good interrater reliability was found for the final risk judgments on most adverse outcomes. Field studies provide insight into the actual performance of structured risk assessment in real-world settings and expose factors that affect reliability. This information is relevant for those who wish to implement structured risk assessment with a level of reliability that is defensible given the high stakes.
Developing large-scale, complex systems in student projects is uncommon because of constraints such as the available time, student team sizes, or the maximum feasible complexity. However, we succeeded in designing a project of high complexity that was comparable to real-world projects. Both the execution of the project and its results were successful in terms of quality, scope, and student and teacher satisfaction. In this experience report we describe how we combined a variety of principles and properties in the project design and how these contributed to the project's success. This may help other educators set up student projects of comparable complexity that resemble real-world projects.
Algorithmic affordances—interactive mechanisms that allow users to exercise tangible control over algorithms—play a crucial role in recommender systems. They can facilitate users’ sense of autonomy, transparency, and ultimately ownership over a recommender’s results, all qualities that are central to responsible AI. Designers, among others, are tasked with creating these interactions, yet state that they lack resources to do so effectively. At the same time, academic research into these interactions rarely crosses the research-practice gap. As a solution, designers call for a structured library of algorithmic affordances containing well-tested, well-founded, and up-to-date examples sourced from both real-world and experimental interfaces. Such a library should function as a boundary object, bridging academia and professional design practice. Academics could use it as a supplementary platform to disseminate their findings, while both practitioners and educators could draw upon it for inspiration and as a foundation for innovation. However, developing a library that accommodates multiple stakeholders presents several challenges, including the need to establish a common language for categorizing algorithmic affordances and devising a categorization of algorithmic affordances that is meaningful to all target groups. This research attempts to bring the designer perspective into this categorization.