People with dementia are confronted with many decisions. However, they are often not involved in the decision-making process. Shared Decision-Making (SDM) enables the involvement of persons with dementia in decision-making. In our study, we develop a supportive IT application that aims to facilitate the decision-making process in care networks of people with dementia. A key feature in the development of this SDM tool is the participation of all network members in the design and development process, including the person with dementia. In this paper, we give insight into the first phases of this design and development process, in which we conducted extensive user studies and translated the wishes and needs of network members into user requirements.
Living labs are complex multi-stakeholder collaborations that often employ a user-centred and design-driven methodology to foster innovation. Conventional management tools fall short in evaluating them. However, some methods and tools dedicated to living labs' special characteristics and goals have already been developed, although most are still in their testing phase. These tools are not easily accessible and can only be found in extensive research reports, which are difficult to dissect. Therefore, this paper reviews seven evaluation methods and tools specially developed for living labs. Each section of the paper is structured in the same manner: (1) introduction of the tool, (2) who uses the tool, and (3) how it should be used. While the first set of tools, namely “ENoLL 20 Indicators”, “SISCODE Self-assessment”, and “SCIROCCO Exchange Tool”, assesses a living lab as an organisation and dives deeper into its organisational activities and complex context, the second set of methods and tools, “FormIT” and “Living Lab Markers”, evaluates living labs’ methodologies: the process they use to arrive at innovations. The paper's final section presents the “CheRRIes Monitoring and Evaluation Tool” and the “TALIA Indicator for Benchmarking Service for Regions”, which assess the regional impact made by living labs. As every living lab differs in its maturity (as an organisation and in its methodology) and in the scope of impact it wants to make, the most crucial decision when evaluating is to determine the focus of the assessment. This overview allows for a first orientation on worked-out methods and possible indicators to use. It also concludes that the existing tools are quite managerial in their method and aesthetics, and calls for designers and social scientists to develop more playful, engaging and (possibly) learning-oriented tools to evaluate living labs in the future.
Analyzing historical decision-related data can help support actual operational decision-making processes. Decision mining can be employed for such analysis. This paper proposes the Decision Discovery Framework (DDF), designed to develop, adapt, or select a decision discovery algorithm by outlining specific guidelines for input data usage, classifier handling, and decision model representation. The framework incorporates the use of Decision Model and Notation (DMN) for enhanced comprehensibility and normalization to simplify decision tables. The framework’s efficacy was tested by adapting the C4.5 algorithm into the DM45 algorithm. The proposed adaptations include (1) using a decision log as input, (2) ensuring an unpruned decision tree, (3) generating DMN output, and (4) normalizing the resulting decision table. Future research can focus on supporting practitioners in modeling decisions, ensuring their decision-making is compliant, and suggesting improvements to the modeled decisions. Another future research direction is to explore the ability to process unstructured data as input for the discovery of decisions.
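To make the four adaptations concrete, the sketch below illustrates them with a scikit-learn decision tree standing in as a proxy for the adapted C4.5 learner; the decision log and its column names are hypothetical, and the conversion to a normalized DMN decision table is only indicated in a comment, not implemented.

    # A minimal sketch of the DM45-style steps described above, assuming a
    # scikit-learn decision tree as a stand-in for the adapted C4.5 learner.
    # The decision log and its column names are hypothetical examples.
    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier, export_text

    # (1) Hypothetical decision log: one row per historical decision.
    log = pd.DataFrame({
        "amount":   [120, 950, 300, 1500, 80],
        "customer": ["new", "gold", "new", "gold", "new"],
        "decision": ["review", "approve", "review", "approve", "approve"],
    })

    X = pd.get_dummies(log[["amount", "customer"]])
    y = log["decision"]

    # (2) Leave the tree unpruned so every observed decision path is kept
    #     (scikit-learn trees are unpruned by default when ccp_alpha=0.0).
    tree = DecisionTreeClassifier(criterion="entropy", ccp_alpha=0.0).fit(X, y)

    # (3)/(4) Print the decision rules; in the framework these rules would be
    # converted into a normalized DMN decision table rather than plain text.
    print(export_text(tree, feature_names=list(X.columns)))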
Developing a framework that integrates Advanced Language Models into the qualitative research process.

Qualitative research, vital for understanding complex phenomena, is often limited by labour-intensive data collection, transcription, and analysis processes. This hinders scalability, accessibility, and efficiency in both academic and industry contexts. As a result, insights are often delayed or incomplete, impacting decision-making, policy development, and innovation. The lack of tools to enhance accuracy and reduce human error exacerbates these challenges, particularly for projects requiring large datasets or quick iterations. Addressing these inefficiencies through AI-driven solutions like AIDA can empower researchers, enhance outcomes, and make qualitative research more inclusive, impactful, and efficient.

The AIDA project enhances qualitative research by integrating AI technologies to streamline transcription, coding, and analysis processes. This innovation enables researchers to analyse larger datasets with greater efficiency and accuracy, providing faster and more comprehensive insights. By reducing manual effort and human error, AIDA empowers organisations to make informed decisions and implement evidence-based policies more effectively. Its scalability supports diverse societal and industry applications, from healthcare to market research, fostering innovation and addressing complex challenges. Ultimately, AIDA contributes to improving research quality, accessibility, and societal relevance, driving advancements across multiple sectors.
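As an illustration of the kind of coding step AIDA aims to automate, the sketch below maps interview fragments to a codebook. The abstract does not prescribe a specific model or API, so the assign_code helper, the codebook, and the naive keyword matching are purely hypothetical stand-ins for a language-model call.

    # Hypothetical sketch of AI-assisted qualitative coding. Any language
    # model or classifier could sit behind assign_code; the keyword match
    # below is only a placeholder so the example runs on its own.
    CODEBOOK = ["access to care", "cost", "trust in staff"]

    def assign_code(fragment: str) -> str:
        """Placeholder for a model call that maps an interview fragment
        to the closest code in CODEBOOK; here a naive keyword match."""
        keywords = {"wait": "access to care", "pay": "cost", "nurse": "trust in staff"}
        for word, code in keywords.items():
            if word in fragment.lower():
                return code
        return "uncoded"

    transcript = [
        "We had to wait three weeks for an appointment.",
        "I could not pay for the second treatment.",
    ]
    for fragment in transcript:
        print(assign_code(fragment), "<-", fragment)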
The focus of the research is 'Automated Analysis of Human Performance Data'. The three interconnected main components are (i) Human Performance, (ii) Monitoring Human Performance, and (iii) Automated Data Analysis. Human Performance is both the process and the result of the person interacting with context to engage in tasks, whereas the performance range is determined by the interaction between the person and the context. Cheap and reliable wearable sensors allow for gathering large amounts of data, which is very useful for understanding, and possibly predicting, the performance of the user. Given the amount of data generated by such sensors, manual analysis becomes infeasible; tools should be devised for performing automated analysis that looks for patterns, features, and anomalies. Such tools can help transform wearable sensors into reliable high-resolution devices, help experts analyse wearable sensor data in the context of human performance, and support its use for diagnosis and intervention purposes. Shyr and Spisic describe automated data analysis as a systematic process of inspecting, cleaning, transforming, and modelling data with the goal of discovering useful information, suggesting conclusions, and supporting decision-making for further analysis. Their philosophy is to do the tedious part of the work automatically and allow experts to focus on performing their research and applying their domain knowledge. However, automated data analysis means that the system has to teach itself to interpret interim results and do iterations. Knuth stated: 'Science is knowledge which we understand so well that we can teach it to a computer; and if we don't fully understand something, it is an art to deal with it' [Knuth, 1974]. The knowledge on Human Performance and its Monitoring is to be 'taught' to the system. To be able to construct automated analysis systems, an overview of the essential processes and components of these systems is needed. As Knuth also noted: 'Since the notion of an algorithm or a computer program provides us with an extremely useful test for the depth of our knowledge about any given subject, the process of going from an art to a science means that we learn how to automate something.'
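The sketch below illustrates the inspect/clean/transform/model cycle that Shyr and Spisic describe, applied to hypothetical wearable heart-rate data; the column names, plausibility bounds, rolling window, and 3-sigma anomaly rule are all assumptions for illustration, not part of the research described above.

    # A minimal sketch of automated analysis of wearable sensor data:
    # inspect, clean, transform, and model, flagging anomalies for experts.
    # The data, thresholds, and window size are hypothetical assumptions.
    import numpy as np
    import pandas as pd

    # Inspect: load a (hypothetical) heart-rate log, one reading per second,
    # with one artificial spike inserted in the middle.
    df = pd.DataFrame({
        "heart_rate": np.r_[np.random.normal(70, 3, 300), [180],
                            np.random.normal(72, 3, 300)]
    })

    # Clean: drop physiologically implausible readings.
    df = df[(df["heart_rate"] > 30) & (df["heart_rate"] < 220)]

    # Transform: smooth with a rolling window to suppress sensor noise.
    df["hr_smooth"] = df["heart_rate"].rolling(window=10, center=True).mean()

    # Model: flag points more than 3 standard deviations from the smoothed
    # baseline -- candidates for expert review, not a diagnosis.
    resid = df["heart_rate"] - df["hr_smooth"]
    df["anomaly"] = resid.abs() > 3 * resid.std()
    print(df[df["anomaly"]])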
Collaborative networks for sustainability are emerging rapidly to address urgent societal challenges. By bringing together organizations with different knowledge bases, resources and capabilities, collaborative networks enhance information exchange, knowledge sharing and learning opportunities to address these complex problems that cannot be solved by organizations individually. Nowhere is this more apparent than in the apparel sector, where examples of collaborative networks for sustainability are plentiful, for example the Sustainable Apparel Coalition, Zero Discharge of Hazardous Chemicals, and the Fair Wear Foundation. Companies like C&A and H&M, but also smaller players, join these networks to fulfil their social responsibility.

Collaborative networks are unlike traditional forms of organization; they are loosely structured collectives of different, often competing organizations, with dynamic membership, and they usually lack legal status. However, they do not emerge or organize on their own; they need network orchestrators who manage the network in terms of activities and participants. But network orchestrators face many challenges. They have to balance the interests of diverse companies and deal with tensions that often arise between them, for example over sharing innovative knowledge. Orchestrators also have to “sell” the value of the network to potential new participants, who decide which networks to join based on the benefits they expect from participating. Network orchestrators often do not know the best way to maintain engagement, commitment and enthusiasm, or how to ensure knowledge and resource sharing, especially when competitors are involved. Furthermore, collaborative networks receive funding from grants or subsidies, creating financial uncertainty about their continuity. Raising financing from the private sector is difficult, and network orchestrators increasingly compete for resources. When networks dissolve or dysfunction (due to a lack of value creation and capture for participants, a lack of financing, or a non-functioning business model), the collective value that has been created and accrued over time may be lost. This is problematic given that industrial transformations towards sustainability take many years and durable organizational forms are required to ensure ongoing support for this change.

Network orchestration is a new profession. There are no guidelines, handbooks or good practices for how to perform this role, nor is there professional education or a professional association that represents network orchestrators. These are urgently needed, as network orchestrators struggle with their role in governing networks so that they create and capture value for participants and ultimately ensure better network performance and survival. This project aims to foster the professionalization of the network orchestrator role by: (a) generating knowledge and developing and testing collaborative network governance models, facilitation tools and collaborative business modeling tools that enable network orchestrators to improve the performance of collaborative networks in terms of collective value creation (network level) and private value capture (network participant level); and (b) organizing platform activities for network orchestrators to exchange ideas and best practices and learn from each other, thereby facilitating the formation of a professional identity, standards and community of network orchestrators.