A model for programmatic assessment in action is proposed that optimizes assessment for learning as well as decision making about learner progress. It is based on a set of assessment principles derived from empirical research. The model specifies cycles of training, assessment and learner-support activities that are concluded by intermediate and final evaluations of aggregated data-points. Essential to the model is that individual data-points are maximized for their learning and feedback value, whereas high-stakes decisions are based on the aggregation of many data-points. Expert judgment plays an important role in the program. The notions of sampling and bias reduction are fundamental to dealing with subjectivity; bias reduction is sought in procedural assessment strategies derived from qualitative research criteria. A number of challenges and opportunities associated with the proposed model are discussed. One of its virtues would be to move beyond the dominant psychometric discourse around individual instruments towards a systems approach to assessment design based on empirically grounded theory.
Artificial Intelligence (AI) offers organizations unprecedented opportunities. However, one of the risks of using AI is that its outcomes and inner workings are not intelligible. In industries where trust is critical, such as healthcare and finance, explainable AI (XAI) is a necessity. Yet implementing XAI is not straightforward, as it requires addressing both technical and social aspects. Previous studies on XAI primarily focused on either the technical or the social aspects and lacked a practical perspective. This study empirically examines the XAI-related aspects faced by developers, users, and managers of AI systems during the development process. To this end, a multiple case study was conducted in two Dutch financial services companies, covering four use cases. Our findings reveal a wide range of aspects that must be considered during XAI implementation, which we grouped and integrated into a conceptual model. This model helps practitioners make informed decisions when developing XAI. We argue that the diversity of aspects to consider necessitates an XAI “by design” approach, especially for high-risk use cases in high-stakes industries such as finance, public services, and healthcare. As such, the conceptual model offers a taxonomy for the method engineering of XAI-related methods, techniques, and tools.
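The study does not prescribe a particular explanation technique, but a small illustration may help situate the kind of artefact its conceptual model is concerned with. The sketch below is a minimal, hypothetical example that is not taken from the case studies: it trains a scikit-learn classifier on synthetic, invented credit-scoring data and uses SHAP, one widely used post-hoc explanation library, to produce per-feature contributions of the sort that might be shown to users or auditors.

```python
# Illustrative sketch only (assumed, not from the study): SHAP applied to a
# hypothetical credit-scoring model, showing the kind of per-prediction
# explanation that an XAI "by design" process would need to account for.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Synthetic, invented data: three numeric features and a binary outcome.
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer yields per-feature SHAP values (contributions) for each
# prediction; these form the raw material for user- or auditor-facing
# explanations.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values)
```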
The Best Practice Unit (BPU) model is a specific form of practice-based research. It is a variation on the Community of Practice (CoP) as developed by Wenger, McDermott and Snyder (2002), with the specific aim of innovating a professional practice by combining learning, development and research. We have applied the model over the past 10 years in the domain of care and social welfare in the Netherlands. Characteristics of the model are: the interaction between individual and collective learning processes, the development of (new or better) working methods, and the implementation of these methods in daily practice. Multiple knowledge sources are used: experiential knowledge, professional knowledge and scientific knowledge. Research serves diverse purposes: articulating tacit knowledge, documenting the learning and innovation process, systematically describing the ways of working that were revealed or developed, and evaluating the efficacy of the new methods. An analysis of 10 different research projects shows that the BPU is an effective model.