Business decisions and business logic are an important part of an organization's daily activities. In the past they were modelled as an integral part of business processes; in recent years, however, they have been managed as a separate entity. Still, decisions and the underlying business logic often remain a black box, and calls for transparency are increasing. Current theory does not provide a quantitative way to measure the transparency of business decisions. This paper extends the understanding of different views on transparency with regard to business decisions and underlying business logic, and presents a framework including Key Transparency Indicators (KTIs) to measure the transparency of business decisions and business logic. The framework is validated by means of an experiment using case study data. Results show that the framework and KTIs are useful for measuring transparency. Further research will focus on refining the measurements as well as further validating the current ones.
Business decisions and business logic are important organizational assets. As transparency becomes an increasingly important concern for organizations, business decisions and the underlying business logic, i.e., their business rules, must be implemented in information systems in such a way that transparency is guaranteed as much as possible. Building on previous research, this study aims to identify how current design principles for business rules management add value in terms of transparency. To do so, a recently published transparency framework is decomposed into criteria, which are evaluated against the current business rules management design principles. This evaluation revealed that eight of the twenty-two design principles do not add value in terms of transparency, which should be taken into account when an organization's goal is to increase transparency. Future research should focus on how to implement the design principles that do add to transparency.
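As an illustration of this evaluation step, the minimal Python sketch below cross-checks design principles against transparency criteria and flags principles that match no criterion. The criterion and principle names are hypothetical placeholders, not the paper's actual framework.

# Hypothetical sketch: check each business rules management (BRM) design
# principle against a set of transparency criteria; principles matching
# no criterion are flagged as adding no transparency value.
# All names below are illustrative, not taken from the paper.
criteria = {"traceability", "auditability", "explainability"}

principles = {
    "P1: version rules independently": {"traceability", "auditability"},
    "P2: separate rules from process flow": {"explainability"},
    "P3: optimize rule execution speed": set(),  # covers no criterion
}

# Flag principles whose criterion set does not intersect the framework.
no_value = [p for p, hits in principles.items() if not hits & criteria]
print("Principles not adding to transparency:", no_value)

In the paper's evaluation, this kind of cross-check is what identified eight of the twenty-two principles as adding no transparency value.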
Calls have been made for improving transparency in conducting and reporting research, improving work climates, and preventing detrimental research practices. To assess attitudes and practices regarding these topics, we sent a survey to authors, reviewers, and editors. We received 3,659 responses (4.9%) out of 74,749 delivered emails. We found no significant differences between authors', reviewers', and editors' attitudes towards transparency in conducting and reporting research, or in their perceptions of work climates. Undeserved authorship was perceived by all groups as the most prevalent detrimental research practice, while fabrication, falsification, plagiarism, and not citing prior relevant research were seen as more prevalent by editors than by authors or reviewers. Overall, 20% of respondents admitted sacrificing the quality of their publications for quantity, and 14% reported that funders interfered in their study design or reporting. While survey respondents came from 126 different countries, our results might not be generalizable due to the survey's low overall response rate. Nevertheless, the results indicate that greater involvement of all stakeholders is needed to align actual practices with current recommendations.
Objective: To automatically recognize self-acknowledged limitations in clinical research publications to support efforts in improving research transparency.
Methods: To develop our recognition methods, we used a set of 8,431 sentences from 1,197 PubMed Central articles. A subset of these sentences was manually annotated for training/testing, and inter-annotator agreement was calculated. We cast the recognition problem as a binary classification task, in which we determine whether a given sentence from a publication discusses self-acknowledged limitations or not. We experimented with three methods: a rule-based approach based on document structure, supervised machine learning, and a semi-supervised method that uses self-training to expand the training set in order to improve classification performance. The machine learning algorithms used were logistic regression (LR) and support vector machines (SVM).
Results: Annotators had good agreement in labeling limitation sentences (Krippendorff's α = 0.781). Of the three methods used, the rule-based method yielded the best performance with 91.5% accuracy (95% CI [90.1-92.9]), while self-training with SVM led to a small improvement over fully supervised learning (89.9%, 95% CI [88.4-91.4] vs 89.6%, 95% CI [88.1-91.1]).
Conclusions: The approach presented can be incorporated into the workflows of stakeholders focusing on research transparency to improve the reporting of limitations in clinical studies.
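To make the fully supervised baseline concrete, the sketch below trains TF-IDF-based LR and linear SVM classifiers to label sentences as limitation vs. non-limitation, using scikit-learn. It is a minimal sketch with a handful of hypothetical example sentences; the paper's actual corpus, feature set, and hyperparameters are not reproduced here.

# Minimal sketch of the supervised baseline: TF-IDF features with
# logistic regression (LR) and a linear SVM, as in the Methods above.
# The training data is hypothetical; 1 = self-acknowledged limitation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

sentences = [
    "A limitation of this study is the small sample size.",
    "Our findings may not generalize beyond a single center.",
    "Patients were randomized to treatment and control arms.",
    "Baseline characteristics are summarized in Table 1.",
]
labels = [1, 1, 0, 0]

X_train, X_test, y_train, y_test = train_test_split(
    sentences, labels, test_size=0.5, random_state=0, stratify=labels
)

# Fit and evaluate both classifiers on the held-out split.
for name, clf in [("LR", LogisticRegression()), ("SVM", LinearSVC())]:
    model = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2), lowercase=True)),
        ("clf", clf),
    ])
    model.fit(X_train, y_train)
    print(name, "accuracy:", accuracy_score(y_test, model.predict(X_test)))

The self-training variant described in the Methods would then repeatedly apply the fitted model to unlabeled sentences, add its most confident predictions to the training set, and refit.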
In December 2004, the Directorate General for Research and Technological Development (DG RTD) of the European Commission (EC) set up a High-Level Expert Group to propose a series of measures to stimulate the reporting of Intellectual Capital in research-intensive Small and Medium-sized Enterprises (SMEs). The Expert Group has focused on enterprises that either perform Research and Development (R&D) or use the results of R&D to innovate, and has also considered the implications for the specialist R&D units of larger enterprises, dedicated Research & Technology Organizations, and Universities. In this report the Expert Group presents its findings, leading to six recommendations to stimulate the reporting of Intellectual Capital in SMEs by raising awareness, improving reporting competencies, promoting the use of IC Reporting, and facilitating standardization.
This book offers a comprehensive, practice-based exploration of Systemic Co-Design (SCD) as applied to society's most complex and urgent transitions. Drawing on collaborative projects from the Expertisenetwork Systemic Co-Design (ESC), it portrays Systemic Co-Design not as a fixed framework but as a reflexive, evolving practice. The chapters present diverse collaborations and inquiries, ranging from inclusive design and digital accessibility to fostering safety cultures and urban co-creation, that illustrate Systemic Co-Design's capacity to build awareness, trust, and communities, as well as systemic capabilities. The book promotes mutual learning and generates knowledge products such as maps, canvases, cards, games, and embodied interactions that enable meaningful engagement. Key themes that run throughout include continuous reflection, the blending of action research and design experimentation, and collective sense-making across disciplines. The contributions demonstrate how new values, methods, and communities are co-developed with design practitioners, policymakers, educators, and citizens. Together, they show how Systemic Co-Design achieves practical outcomes while fostering the long-term capacities and cultural shifts necessary for systemic change.
As artificial intelligence (AI) reshapes hiring, organizations increasingly rely on AI-enhanced selection methods such as chatbot-led interviews and algorithmic resume screening. While AI offers efficiency and scalability, concerns persist regarding fairness, transparency, and trust. This qualitative study applies the Artificially Intelligent Device Use Acceptance (AIDUA) model to examine how job applicants perceive and respond to AI-driven hiring. Drawing on semi-structured interviews with 15 professionals, the study explores how social influence, anthropomorphism, and performance expectancy shape applicant acceptance, while concerns about transparency and fairness emerge as key barriers. Participants expressed a strong preference for hybrid AI-human hiring models, emphasizing the importance of explainability and human oversight. The study refines the AIDUA model in the recruitment context and offers practical recommendations for organizations seeking to implement AI ethically and effectively in selection processes.
Lecture in the PhD Programme Life Science Education Research, UMCU; course: Methods of Life Science Education Research. Utrecht, The Netherlands.

Abstract: Audit trail procedures are applied as a way to check the validity of qualitative research designs, qualitative analyses, and the claims that are made. Audit trail procedures can be conducted based on the three criteria of visibility, comprehensibility, and acceptability (Akkerman et al., 2008). During an audit trail procedure, all documents and materials resulting from the data gathering and the data analysis are assessed by an auditor. In this presentation, we presented a summative audit trail procedure (Agricola, Prins, Van der Schaaf & Van Tartwijk, 2021), whereas in a second study we used a formative one (Agricola, Van der Schaaf, Prins & Van Tartwijk, 2022). Different auditors were chosen for the two studies. For the study presented in Agricola et al. (2021), the auditor was one of the PhD supervisors, while in the study presented in Agricola et al. (2022) the auditor was a junior researcher not involved in the project. The first auditor had a high level of expertise in the study's topic and methodology and was therefore able to provide a professional and critical assessment report. Although the second auditor might be considered more objective than the first, as she was not involved in the project, more meetings were needed to explain the aim of the study and the aim of the audit trail procedure. There are many ideas about the criteria that qualitative studies should meet (De Kleijn & Van Leeuwen, 2018). I argue that procedures for checking inter-rater agreement and understanding, triangulation, and audit trail procedures can increase the internal validity of qualitative studies.

References:
Agricola, B. T., Prins, F. J., van der Schaaf, M. F., & van Tartwijk, J. (2021). Supervisor and student perspectives on undergraduate thesis supervision in higher education. Scandinavian Journal of Educational Research, 65(5), 877-897. https://doi.org/10.1080/00313831.2020.1775115
Agricola, B. T., van der Schaaf, M. F., Prins, F. J., & van Tartwijk, J. (2022). The development of research supervisors' pedagogical content knowledge in a lesson study project. Educational Action Research. https://doi.org/10.1080/09650792.2020.1832551
Akkerman, S., Admiraal, W., Brekelmans, M., & Oost, H. (2008). Auditing quality of research in social sciences. Quality & Quantity, 42(2), 257-274. https://doi.org/10.1007/s11135-006-9044-4
de Kleijn, R. A. M., & van Leeuwen, A. (2018). Reflections and review on the audit procedure: Guidelines for more transparency. International Journal of Qualitative Methods, 17(1), 1-8. https://doi.org/10.1177/1609406918763214
Valuation judgement bias has been a research topic for several years due to its proclaimed effect on valuation accuracy. However, little is known about the emphasis of the literature on judgement bias with regard to, for instance, research methodologies, research context, and the robustness of research evidence. A synthesis of the available research will establish consistency in the current knowledge base on valuer judgement, identify future research opportunities, and support policy decisions by educational and regulatory stakeholders on how to cope with judgement bias. This article therefore provides a systematic review of empirical research on real estate valuer judgement over the last 30 years. Based on a number of inclusion and exclusion criteria, we have systematically analysed 32 relevant papers on valuation judgement bias. Although we find some consistency in evidence, we also find the underlying research itself to be biased: the methodology adopted is dominated by a quantitative approach, the research context is skewed by timing and origination, and the research evidence seems fragmented and in need of replication. In order to obtain a deeper understanding of valuation judgement processes and thus extend the current knowledge base, we advocate greater use of qualitative research methods and encourage scholars to adopt an interpretative paradigm when studying judgement behaviour.