BACKGROUND: The concept of osteoarthritis (OA) heterogeneity is evolving and gaining renewed interest. According to this concept, distinct subtypes of OA need to be defined that will likely require recognition in research design and different approaches to clinical management. Although seemingly plausible, a wide range of views exists on how best to operationalize this concept. The current project aimed to provide consensus-based definitions and recommendations that together create a framework for conducting and reporting OA phenotype research.

METHODS: A panel of 25 members with expertise in OA phenotype research was convened. First, panel members participated in an online Delphi exercise to provide a number of basic definitions and statements relating to OA phenotypes and OA phenotype research. Second, panel members provided input on a set of recommendations for reporting on OA phenotype studies.

RESULTS: Four Delphi rounds were required to achieve sufficient agreement on 11 definitions and statements. OA phenotypes were defined as subtypes of OA that share distinct underlying pathobiological and pain mechanisms and their structural and functional consequences. Reporting recommendations pertaining to the study characteristics, study population, data collection, statistical analysis, and appraisal of OA phenotype studies were provided.

CONCLUSIONS: This study provides a number of consensus-based definitions and recommendations relating to OA phenotypes. The resulting framework is intended to facilitate research on OA phenotypes and to increase combined efforts to develop effective OA phenotype classification. Success in this endeavor will hopefully translate into more effective, differentiated OA management that will benefit the many patients living with OA.
Analyzing historical decision-related data can help support actual operational decision-making processes. Decision mining can be employed for such analysis. This paper proposes the Decision Discovery Framework (DDF), designed to develop, adapt, or select a decision discovery algorithm by outlining specific guidelines for input data usage, classifier handling, and decision model representation. The framework incorporates the Decision Model and Notation (DMN) standard for enhanced comprehensibility and uses normalization to simplify decision tables. The framework’s efficacy was tested by adapting the C4.5 algorithm into the DM45 algorithm. The proposed adaptations are (1) the use of a decision log as input, (2) the generation of an unpruned decision tree, (3) the generation of DMN output, and (4) the normalization of the resulting decision table. Future research can focus on supporting practitioners in modeling decisions, ensuring their decision-making is compliant, and suggesting improvements to the modeled decisions. Another future research direction is to explore the ability to process unstructured data as input for the discovery of decisions.
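The DM45 algorithm itself is not reproduced in this abstract, but the core idea it builds on — growing an unpruned decision tree over a decision log and flattening every root-to-leaf path into a decision-table rule — can be sketched as follows. All names here (the decision log, attributes, outcomes) are illustrative assumptions, and the split heuristic is deliberately simplistic; a real C4.5/DM45 implementation would choose splits by gain ratio.

```python
# Minimal sketch: decision log -> unpruned decision tree -> flat decision table.
# The log contents and attribute names are illustrative, not taken from DM45.
from collections import Counter

def grow_tree(rows, attrs, outcome="decision"):
    """Grow an unpruned tree: recurse until the outcome is pure or attrs run out."""
    outcomes = [r[outcome] for r in rows]
    if len(set(outcomes)) == 1 or not attrs:
        return Counter(outcomes).most_common(1)[0][0]  # leaf: majority outcome
    attr = attrs[0]  # simplistic split choice; C4.5 would use gain ratio
    return {
        (attr, value): grow_tree([r for r in rows if r[attr] == value],
                                 attrs[1:], outcome)
        for value in {r[attr] for r in rows}
    }

def to_decision_table(tree, conditions=()):
    """Flatten every root-to-leaf path into one decision-table rule."""
    if not isinstance(tree, dict):
        return [(dict(conditions), tree)]
    rules = []
    for cond, subtree in tree.items():
        rules.extend(to_decision_table(subtree, conditions + (cond,)))
    return rules

decision_log = [
    {"amount": "high", "customer": "new",      "decision": "review"},
    {"amount": "high", "customer": "existing", "decision": "approve"},
    {"amount": "low",  "customer": "new",      "decision": "approve"},
    {"amount": "low",  "customer": "existing", "decision": "approve"},
]
tree = grow_tree(decision_log, ["amount", "customer"])
for rule_conditions, rule_outcome in to_decision_table(tree):
    print(rule_conditions, "->", rule_outcome)
```

Note how the pure "low amount" branch collapses into a single rule with no condition on the customer attribute — this is the kind of redundancy the framework's normalization step then removes from the resulting DMN decision table.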
Developing a framework that integrates Advanced Language Models into the qualitative research process.

Qualitative research, vital for understanding complex phenomena, is often limited by labour-intensive data collection, transcription, and analysis processes. This hinders scalability, accessibility, and efficiency in both academic and industry contexts. As a result, insights are often delayed or incomplete, impacting decision-making, policy development, and innovation. The lack of tools to enhance accuracy and reduce human error exacerbates these challenges, particularly for projects requiring large datasets or quick iterations. Addressing these inefficiencies through AI-driven solutions like AIDA can empower researchers, enhance outcomes, and make qualitative research more inclusive, impactful, and efficient.

The AIDA project enhances qualitative research by integrating AI technologies to streamline transcription, coding, and analysis processes. This innovation enables researchers to analyse larger datasets with greater efficiency and accuracy, providing faster and more comprehensive insights. By reducing manual effort and human error, AIDA empowers organisations to make informed decisions and implement evidence-based policies more effectively. Its scalability supports diverse societal and industry applications, from healthcare to market research, fostering innovation and addressing complex challenges. Ultimately, AIDA contributes to improving research quality, accessibility, and societal relevance, driving advancements across multiple sectors.
The overall purpose of this consultancy was to support the activities under the Environmental Monitoring and Assessment Programme of the UN Economic Commission for Europe (UNECE) in developing the 7th pan-European environmental assessment, an indicator-based and thematic assessment, implemented jointly with the United Nations Environment Programme (UNEP) and in support of the 2030 Agenda for Sustainable Development. The series of environmental assessments of the pan-European region provides up-to-date and policy-relevant information on the interactions between the environment and society. This consultancy was to:
> Draft the input on drivers and developments to chapter 1.2 of the assessment related to the environmental theme “4.2 Applying principles of circular economy to sustainable tourism”.
> Suggest to UNECE and UNEP the most policy-relevant indicators for the environmental theme of sub-chapter 4.2, drawing on UNECE environmental indicators, SDG indicators, and other indicator frameworks such as those of the EEA or OECD.
> Assess the current state, trends, and recent developments and prepare the substantive part of sub-chapter 4.2 (summary, part I) and an annex (part II) with the detailed analysis and findings.
Today, embedded devices such as banking/transportation cards, car keys, and mobile phones use cryptographic techniques to protect personal information and communication. Such devices are increasingly becoming the targets of attacks that try to capture the underlying secret information, e.g., cryptographic keys. Attacks targeting not the cryptographic algorithm but its implementation are especially devastating; the best-known examples are so-called side-channel and fault injection attacks. Such attacks, often jointly termed physical (implementation) attacks, are difficult to preclude, and if the key (or other data) is recovered, the device is useless. To mitigate such attacks, security evaluators use the same techniques as attackers and look for possible weaknesses in order to “fix” them before deployment. Unfortunately, the attackers’ resourcefulness on the one hand, and the usually short amount of time available to security evaluators (plus the human error factor) on the other, make this an unfair race. Consequently, researchers are looking into ways of making security evaluations more reliable and faster. To that end, machine learning techniques have shown themselves to be a viable candidate, although the challenge is far from solved. Our project aims at the development of automatic frameworks able to assess various potential side-channel and fault injection threats coming from diverse sources. Such systems will give security evaluators, and above all companies producing chips for security applications, an option to find potential weaknesses early and to assess the trade-off between making the product more secure and making it more implementation-friendly. To this end, we plan to use machine learning techniques coupled with novel techniques not explored before for side-channel and fault analysis. In addition, we will design new techniques specially tailored to improve the performance of this evaluation process.
Our research fills the gap between what is known in academia on physical attacks and what is needed in the industry to prevent such attacks. In the end, once our frameworks become operational, they could also be a useful tool for mitigating other types of threats, such as ransomware or rootkits.
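To give a flavor of the kind of learning-based evaluation described above, the following is a minimal sketch of a profiled ("template"-style) side-channel key recovery on simulated traces. Everything here is an illustrative assumption rather than the project's actual framework: the 4-bit PRESENT S-box as the target operation, a Hamming-weight-plus-Gaussian-noise leakage model, and a simple nearest-template classifier standing in for more sophisticated machine learning.

```python
# Minimal sketch of a profiled side-channel attack on simulated traces.
# The leakage model, noise level, and S-box target are illustrative assumptions.
import random

# S-box of the PRESENT lightweight block cipher (4-bit, used here as a small target).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def hw(x):
    return bin(x).count("1")  # Hamming weight

def leak(plaintext, key, rng):
    # Assumed device model: leakage = Hamming weight of S-box output + noise.
    return hw(SBOX[plaintext ^ key]) + rng.gauss(0, 0.5)

rng = random.Random(42)
secret_key = 0xB

# Profiling phase (device under our control, key known):
# learn the mean leakage of each Hamming-weight class.
profile = {w: [] for w in range(5)}
for _ in range(2000):
    p = rng.randrange(16)
    profile[hw(SBOX[p ^ secret_key])].append(leak(p, secret_key, rng))
templates = {w: sum(v) / len(v) for w, v in profile.items() if v}

# Attack phase (key unknown): score each key guess by how well its
# predicted leakage class matches the observed traces.
traces = [(p, leak(p, secret_key, rng))
          for p in (rng.randrange(16) for _ in range(200))]

def score(guess):
    return sum((templates[hw(SBOX[p ^ guess])] - t) ** 2 for p, t in traces)

recovered = min(range(16), key=score)  # best-matching key guess
print(f"recovered key nibble: {recovered:#x}")
```

The same profile-then-classify structure underlies the machine-learning evaluations discussed above; an automated framework would replace the hand-built templates with trained models and run such analyses systematically across leakage points and fault scenarios.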