Because of both the shortcomings of existing risk assessment methodologies and the newly available tools for predicting hazard and risk with machine learning approaches, there has been an emerging emphasis on probabilistic risk assessment. Increasingly sophisticated AI models can be applied to a plethora of exposure and hazard data not only to obtain predictions for particular endpoints but also to estimate the uncertainty of the risk assessment outcome. This provides the basis for a shift from deterministic to more probabilistic approaches, but it comes at the cost of increased complexity, as the process requires more resources and human expertise. There are still challenges to overcome before a probabilistic paradigm is fully embraced by regulators. Building on an earlier white paper (Maertens et al., 2022), a workshop discussed the prospects, challenges and path forward for implementing such AI-based probabilistic hazard assessment. Moving forward, we will see the transition from categorical to probabilistic and dose-dependent hazard outcomes, the application of internal thresholds of toxicological concern for data-poor substances, the acceptance of user-friendly open-source software, a rise in the expertise required of toxicologists to understand and interpret artificial intelligence models, and honest communication of uncertainty in risk assessment to the public.
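To make the shift from a deterministic point estimate to a probabilistic hazard outcome concrete, the following is a minimal sketch (not from the paper; the model, data and numbers are illustrative assumptions): a bootstrap ensemble over toy dose-response data turns a single hazard threshold into a central estimate with an uncertainty interval.

```python
import random

def fit_threshold(sample):
    # "Fit" here simply means: the lowest dose with an observed effect
    # in this bootstrap sample (a deliberately crude stand-in for a model).
    actives = [dose for dose, effect in sample if effect]
    return min(actives) if actives else float("inf")

def probabilistic_hazard(data, n_boot=200, seed=0):
    """Bootstrap the data to obtain a distribution of hazard thresholds."""
    rng = random.Random(seed)
    thresholds = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]  # resample with replacement
        thresholds.append(fit_threshold(sample))
    thresholds.sort()
    # Central estimate plus an approximate 90% uncertainty interval
    mid = thresholds[len(thresholds) // 2]
    lo = thresholds[int(0.05 * len(thresholds))]
    hi = thresholds[int(0.95 * len(thresholds))]
    return lo, mid, hi

# Hypothetical (dose, effect-observed) observations
data = [(1, False), (3, False), (10, True), (30, True), (100, True)]
lo, mid, hi = probabilistic_hazard(data)
print(lo, mid, hi)
```

The same resampling idea applies unchanged when the inner "model fit" is a trained AI model rather than a minimum over observed doses.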
Editorial on the Research Topic "Leveraging artificial intelligence and open science for toxicological risk assessment"
The first Stakeholder Network Meeting of the EU Horizon 2020-funded ONTOX project was held on 13-14 March 2023, in Brussels, Belgium. The discussion centred on identifying specific challenges, barriers and drivers in relation to the implementation of non-animal new approach methodologies (NAMs) and probabilistic risk assessment (PRA), in order to help address the issues and rank them according to their associated level of difficulty. ONTOX aims to advance the assessment of chemical risk to humans, without the use of animal testing, by developing NAMs and PRA in line with 21st century toxicity testing principles. Stakeholder groups (regulatory authorities, companies, academia, non-governmental organisations) were identified and invited to participate in a meeting and a survey, through which their current positions on the implementation of NAMs and PRA were ascertained and specific challenges and drivers were highlighted. The survey analysis revealed areas of agreement and disagreement among stakeholders on topics such as capacity building, sustainability, regulatory acceptance, validation of adverse outcome pathways, acceptance of artificial intelligence (AI) in risk assessment, and guaranteeing consumer safety. The stakeholder network meeting resulted in the identification of barriers, drivers and specific challenges that need to be addressed. Breakout groups discussed topics such as hazard versus risk assessment, future reliance on AI and machine learning, regulatory requirements for industry, and the sustainability of the ONTOX Hub platform. The outputs from these discussions provided insights for overcoming barriers and leveraging drivers when implementing NAMs and PRA. It was concluded that there is a continued need for stakeholder engagement, including the organisation of a 'hackathon' to tackle challenges, to ensure the successful implementation of NAMs and PRA in chemical risk assessment.
The inefficiency of maintaining static, long-lasting safety zones in environments where actual risks are limited is likely to increase in the coming decades, as autonomous systems become more common and human workers fewer in number. Nevertheless, an uncompromising approach to safety remains paramount, requiring the introduction of novel methods that are simultaneously more flexible and capable of delivering the same level of protection against potentially hazardous situations. We present such a method to create dynamic safety zones, the boundaries of which can be redrawn in real time, taking into account explicit positioning data when available and using conservative extrapolation from the last known location when information is missing or unreliable. Simulation and statistical methods were used to investigate performance gains compared to static safety zones. The use of a more advanced probabilistic framework to further improve flexibility is also discussed, although its implementation would not offer the same level of protection and is currently not recommended.
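The conservative-extrapolation idea can be sketched in a few lines (this is an illustrative simplification, not the paper's implementation; all names and parameter values are assumptions): while positioning data is fresh, the zone is a tight circle around the last fix, and when data goes stale the radius grows at the agent's maximum possible speed, so the zone always contains every position the agent could have reached.

```python
import math

def dynamic_zone_radius(base_radius, max_speed, seconds_since_fix):
    """Radius of a circular safety zone around the last known position.

    Conservative extrapolation: since the last fix, the agent may have
    moved at most max_speed * elapsed time in any direction.
    """
    return base_radius + max_speed * seconds_since_fix

def inside_zone(point_xy, last_fix_xy, radius):
    """True if a point lies within the current safety zone."""
    dx = point_xy[0] - last_fix_xy[0]
    dy = point_xy[1] - last_fix_xy[1]
    return math.hypot(dx, dy) <= radius

# Fresh fix: tight 5 m zone; 10 s stale fix at max 2 m/s: zone grows to 25 m.
r_fresh = dynamic_zone_radius(5.0, 2.0, 0.0)
r_stale = dynamic_zone_radius(5.0, 2.0, 10.0)
print(r_fresh, r_stale)
```

Because the radius bound is worst-case rather than probabilistic, this simple scheme keeps the guarantee that the static zone offered, which is exactly the property the abstract argues a fully probabilistic extension would relax.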
For almost fifteen years, the availability and regulatory acceptance of new approach methodologies (NAMs) to assess absorption, distribution, metabolism and excretion (ADME/biokinetics) in chemical risk evaluations have been a bottleneck. To advance the field, a team of 24 experts from science, industry and regulatory bodies, including new-generation toxicologists, met at the Lorentz Centre in Leiden, The Netherlands. A range of possibilities for the use of NAMs for biokinetics in risk evaluations was formulated (for example, to define species differences and human variation, or to perform quantitative in vitro-in vivo extrapolations). To increase the regulatory use and acceptance of NAMs for biokinetics for these ADME considerations within risk evaluations, the development of test guidelines (protocols) and of overarching guidance documents is considered a critical step. To this end, the need for an expert group on biokinetics within the Organisation for Economic Co-operation and Development (OECD) to supervise this process was formulated. The workshop discussions revealed that method development is still required, particularly to adequately capture transporter-mediated processes and to obtain cell models that reflect the physiology and kinetic characteristics of relevant organs. Developments in the fields of stem cells, organoids and organ-on-a-chip models provide promising tools to meet these research needs in the future.
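A quantitative in vitro-in vivo extrapolation of the kind mentioned above can be illustrated with a deliberately minimal reverse-dosimetry sketch (the one-compartment steady-state model and every parameter value here are illustrative assumptions, not the workshop's method): an in vitro bioactive concentration is converted into the oral dose that would produce that concentration in plasma at steady state.

```python
def css_per_unit_dose(clearance_l_h, body_weight_kg, dose_mg_kg_day=1.0):
    """Steady-state plasma concentration (mg/L) for a given oral dose,
    assuming complete absorption and simple first-order clearance."""
    daily_dose_mg = dose_mg_kg_day * body_weight_kg
    return daily_dose_mg / (clearance_l_h * 24.0)

def oral_equivalent_dose(ac50_uM, mol_weight_g_mol, clearance_l_h, body_weight_kg):
    """Reverse dosimetry: the dose (mg/kg/day) whose steady-state plasma
    concentration equals the in vitro bioactive concentration (AC50)."""
    ac50_mg_l = ac50_uM * mol_weight_g_mol / 1000.0  # uM -> mg/L
    css_for_unit_dose = css_per_unit_dose(clearance_l_h, body_weight_kg)
    return ac50_mg_l / css_for_unit_dose

# Hypothetical compound: AC50 10 uM, MW 300 g/mol, clearance 6 L/h, 70 kg adult
oed = oral_equivalent_dose(10.0, 300.0, clearance_l_h=6.0, body_weight_kg=70.0)
print(round(oed, 2))
```

Replacing the single clearance value with distributions over human variability is precisely where the probabilistic methods discussed elsewhere in this collection come in.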
Since 1990, natural hazards have led to over 1.6 million fatalities globally, and economic losses are estimated at an average of around USD 260–310 billion per year. The scientific and policy communities recognise the need to reduce these risks. As a result, the last decade has seen a rapid development of global models for assessing natural hazard risk. In this paper, we review the scientific literature on natural hazard risk assessments at the global scale, and we specifically examine whether and how they have examined future projections of hazard, exposure, and/or vulnerability. In doing so, we examine similarities and differences between the approaches taken across the different hazards, and we identify potential ways in which different hazard communities can learn from each other. For example, there are a number of global risk studies focusing on hydrological, climatological, and meteorological hazards that have included future projections and disaster risk reduction measures (in the case of floods), whereas fewer exist in the peer-reviewed literature for global studies related to geological hazards. On the other hand, studies of earthquake and tsunami risk are now using stochastic modelling approaches to allow for a fully probabilistic assessment of risk, which could benefit the modelling of risk from other hazards. Finally, we discuss opportunities for learning from methods and approaches being developed and applied to assess natural hazard risks at more continental or regional scales. Through this paper, we hope to encourage further dialogue on knowledge sharing between disciplines and communities working on different hazards and risks and at different spatial scales.
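The stochastic modelling approach mentioned for earthquake and tsunami risk can be sketched as follows (an illustrative toy, not any specific model; the event rate and loss distribution are hypothetical): simulate many event years, convert events to losses, and read a loss exceedance probability off the simulated distribution.

```python
import random

def simulate_annual_loss(rng, event_rate=0.3, mean_loss=100.0):
    """One simulated year: a random number of hazard events, each with
    an exponentially distributed loss (arbitrary monetary units)."""
    # Approximate a Poisson event count with Bernoulli trials per sub-interval
    events = sum(rng.random() < event_rate / 10 for _ in range(10))
    loss = 0.0
    for _ in range(events):
        loss += rng.expovariate(1.0 / mean_loss)
    return loss

def exceedance_probability(losses, threshold):
    """Fraction of simulated years whose total loss exceeds the threshold."""
    return sum(l > threshold for l in losses) / len(losses)

rng = random.Random(42)
annual_losses = [simulate_annual_loss(rng) for _ in range(10000)]
p = exceedance_probability(annual_losses, 200.0)
print(round(p, 3))
```

Sweeping the threshold yields the full loss exceedance curve, which is the probabilistic output that deterministic single-scenario assessments cannot provide.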
To study the ways in which compounds can induce adverse effects, toxicologists have been constructing Adverse Outcome Pathways (AOPs). An AOP can be considered a pragmatic tool to capture and visualize the mechanisms underlying different types of toxicity inflicted by any kind of stressor, and it describes the interactions between key entities that lead to the adverse outcome on multiple biological levels of organization. The construction or optimization of an AOP is a labor-intensive process, which currently depends on the manual search, collection, review and synthesis of the available scientific literature. This process could, however, be largely facilitated by using Natural Language Processing (NLP) to extract the information contained in scientific literature in a systematic, objective and rapid manner, leading to greater accuracy and reproducibility. It would allow researchers to invest their expertise in the substantive assessment of the AOPs by replacing the time spent on evidence gathering with a critical review of the data extracted by NLP. As case examples, we selected two frequent adversities observed in the liver: cholestasis and steatosis, denoting the accumulation of bile and lipid, respectively. We used deep learning language models to recognize entities of interest in text and to establish causal relationships between them. We demonstrate how an NLP pipeline combining Named Entity Recognition and a simple rule-based relationship extraction model helps not only to screen the literature for compounds related to liver adversities but also to extract mechanistic information on how such adversities develop, from the molecular to the organismal level. Finally, we provide some perspectives opened by the recent progress in Large Language Models and how these could be used in the future.
We propose that this work makes two main contributions: 1) a proof of concept that NLP can support the extraction of information from text for modern toxicology and 2) a template open-source model for the recognition of toxicological entities and the extraction of their relationships. All resources are openly accessible via GitHub (https://github.com/ontox-project/en-tox).
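The two-stage pipeline can be illustrated with a highly simplified stand-in (the actual work uses deep learning language models; this dictionary/regex version, with illustrative entity lists and trigger verbs, only shows the shape of the two stages): first recognize entities, then apply a sentence-level rule to extract a compound-adversity relationship.

```python
import re

# Illustrative vocabularies and trigger verbs (assumptions, not en-tox's)
COMPOUNDS = {"valproic acid", "amiodarone"}
ADVERSITIES = {"steatosis", "cholestasis"}
TRIGGERS = r"\b(induces|causes|leads to)\b"

def find_entities(sentence):
    """Stage 1: dictionary-based stand-in for Named Entity Recognition."""
    found = []
    for label, vocab in (("COMPOUND", COMPOUNDS), ("ADVERSITY", ADVERSITIES)):
        for term in vocab:
            if re.search(re.escape(term), sentence, re.IGNORECASE):
                found.append((term, label))
    return found

def extract_relations(sentence):
    """Stage 2: rule-based extraction — a compound and an adversity
    co-occurring with a trigger verb in one sentence yield a relation."""
    entities = find_entities(sentence)
    compounds = [t for t, l in entities if l == "COMPOUND"]
    adversities = [t for t, l in entities if l == "ADVERSITY"]
    if compounds and adversities and re.search(TRIGGERS, sentence, re.IGNORECASE):
        return [(c, "induces", a) for c in compounds for a in adversities]
    return []

rels = extract_relations("Amiodarone induces steatosis in hepatocytes.")
print(rels)
```

Swapping the dictionary lookup for a trained NER model and the co-occurrence rule for a learned relation classifier gives the architecture the abstract describes, without changing this overall two-stage structure.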
The aim of this paper is to show the benefits of enhancing classic Risk Based Inspection (without fatigue monitoring data) with an Advisory Hull Monitoring System (AHMS) that monitors and justifies lifetime consumption, providing more thorough grounds for operational, inspection, repair and maintenance decisions while demonstrating regulatory compliance.
In summarizing the research on collaborative learning, the quest for the holy grail of effective collaborative learning has not yet ended. The use of the GLAID framework tool for the design of collaborative learning in higher education may contribute to better aligned designs and thereby to more effective collaborative learning. The GLAID framework may help monitor, evaluate and redesign projects and group assignments. We know that the perceived quality of the task, and the extent to which students feel engaged, influences students' perception of how much they learn from a GLA. However, perceptions alone are only an indication of what is learned. A next step is to study exactly what those learning outcomes are. This leads to a more difficult question: how can we measure the learning outcomes? Although a variety of research underlines the large potential of collaboration for learning outcomes, the exact learning outcomes of team learning can only be partly foretold. During collaborative learning, students could partly achieve the same or similar learning outcomes, but as each individual learner internalizes what is learned from the collaborative learning through his or her prior experiences and knowledge, the learning outcomes of collaborative learning are probabilistic (Strijbos, 2011), and therefore attaining specific learning outcomes is likely but not guaranteed. If learning outcomes differ per individual and are probabilistic, how can we measure them? Wenger, Trayner, & De Laat (2011) regard the outcomes of learning communities as value creations that have an individual outcome and a group outcome. This value creation induced by collaborative learning consists, for example, of changed behaviour in the working environment as well as the production of useful products or artefacts.
Tillema (2006) also describes that communities of inquiry can lead to the design of conceptual artefacts: products that are useful for a professional working environment.