In 2015, the Object Management Group published the Decision Model and Notation (DMN) standard with the goal of structuring and connecting business processes, decisions, and underlying business logic. Practice shows that several vendors have adopted the DMN standard and (started to) integrate it with their tooling. However, practice also shows that some vendors (consciously) deviate from the DMN standard while still trying to achieve the goal DMN set out to reach. This research aims to 1) analyze and benchmark available tooling and the accompanying languages against the DMN standard and 2) understand the different approaches these vendor-specific languages take to modeling decisions and underlying business logic. We achieved this by analyzing secondary data. In total, 22 decision modeling tools, together with their languages, were analyzed. The results of this study reveal six propositions regarding the adoption of DMN within the sample of tools. These results could be used to improve both the tools and the DMN standard itself, and thereby its adoption. Possible future research directions include improving the generalizability of the results by covering more of the available tools and using different methods for data collection and analysis, as well as a deeper analysis of generating DMN directly from tool-native languages.
We present a novel hierarchical model for human activity recognition. In contrast to approaches that recognize actions and activities successively, our approach jointly models actions and activities in a unified framework and predicts their labels simultaneously. The model embeds a latent layer that captures a richer class of contextual information in both state-state and observation-state pairs. Although the model contains loops, it has an overall linear-chain structure in which exact inference is tractable; the model is therefore efficient in both inference and learning. The parameters of the graphical model are learned with a structured support vector machine. A data-driven approach is used to initialize the latent variables, so no manual labeling of the latent states is required. Experimental results on two benchmark datasets show that our model outperforms the state-of-the-art approach while being computationally more efficient.
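To make the tractability claim concrete, the following is a minimal sketch (not the authors' implementation) of exact MAP inference on a linear chain whose per-frame state jointly encodes an (activity, action, latent) triple. The state encoding, dimensions, and score matrices are illustrative assumptions; only the Viterbi-style dynamic program is what the linear-chain structure guarantees.

```python
# Minimal, hypothetical sketch: Viterbi decoding over a linear chain of joint
# (activity, action, latent) states, showing why exact inference is tractable.
import numpy as np

def viterbi(unary, pairwise):
    """unary: (T, S) per-frame scores over joint states; pairwise: (S, S) transition scores."""
    T, S = unary.shape
    score = unary[0].copy()                # best score ending in each state at frame 0
    backptr = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + pairwise   # (S, S): previous state -> current state
        backptr[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + unary[t]
    # Trace back the best joint labeling
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t][path[-1]]))
    return path[::-1]

# Toy usage: 3 activities x 4 actions x 2 latent values = 24 joint states per frame.
rng = np.random.default_rng(0)
T, S = 10, 3 * 4 * 2
labels = viterbi(rng.normal(size=(T, S)), rng.normal(size=(S, S)))
print(labels)  # one joint (activity, action, latent) index per frame
```

The key point the sketch illustrates is that the cost grows linearly in the sequence length and quadratically in the (joint) state count, which is what makes exact inference and structured-SVM learning over such a chain efficient.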
Artificial Intelligence (AI) offers organizations unprecedented opportunities. However, one of the risks of using AI is that its outcomes and inner workings are not intelligible. In industries where trust is critical, such as healthcare and finance, explainable AI (XAI) is a necessity. However, the implementation of XAI is not straightforward, as it requires addressing both technical and social aspects. Previous studies on XAI primarily focused on either technical or social aspects and lacked a practical perspective. This study aims to empirically examine the XAI-related aspects that developers, users, and managers of AI systems face during the development process. To this end, a multiple case study was conducted in two Dutch financial services companies using four use cases. Our findings reveal a wide range of aspects that must be considered during XAI implementation, which we grouped and integrated into a conceptual model. This model helps practitioners make informed decisions when developing XAI. We argue that the diversity of aspects to consider necessitates an XAI "by design" approach, especially for high-risk use cases in industries where the stakes are high, such as finance, public services, and healthcare. As such, the conceptual model offers a taxonomy for method engineering of XAI-related methods, techniques, and tools.