Because of both the shortcomings of existing risk assessment methodologies and newly available machine-learning tools to predict hazard and risk, there has been an emerging emphasis on probabilistic risk assessment. Increasingly sophisticated AI models can be applied to a plethora of exposure and hazard data not only to obtain predictions for particular endpoints but also to estimate the uncertainty of the risk assessment outcome. This provides the basis for a shift from deterministic to more probabilistic approaches, but comes at the cost of increased complexity, as the process requires more resources and human expertise. Challenges remain to be overcome before a probabilistic paradigm is fully embraced by regulators. Building on an earlier white paper (Maertens et al., 2022), a workshop discussed the prospects, challenges and path forward for implementing such AI-based probabilistic hazard assessment. Moving forward, we will see the transition from categorical to probabilistic and dose-dependent hazard outcomes, the application of internal thresholds of toxicological concern for data-poor substances, the adoption of user-friendly open-source software, a rise in the expertise required of toxicologists to understand and interpret artificial intelligence models, and honest communication of uncertainty in risk assessment to the public.
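The core of the deterministic-to-probabilistic shift can be illustrated with a toy Monte Carlo propagation of uncertainty around a point-of-departure estimate. This is a minimal sketch under assumed numbers: the function name, the log-normal uncertainty model, and all values are illustrative, not drawn from the workshop or any regulatory guidance.

```python
import random

def probabilistic_pod(point_estimate_mg_kg, sigma, n_draws=10_000, seed=42):
    """Toy sketch: propagate uncertainty in a point-of-departure (PoD)
    estimate into a distribution, instead of reporting one deterministic
    value. Assumes log-normal uncertainty (an illustrative choice)."""
    rng = random.Random(seed)
    draws = sorted(point_estimate_mg_kg * rng.lognormvariate(0.0, sigma)
                   for _ in range(n_draws))
    return {
        "median": draws[n_draws // 2],
        "p05": draws[int(0.05 * n_draws)],  # protective lower bound
        "p95": draws[int(0.95 * n_draws)],  # upper bound of the interval
    }

summary = probabilistic_pod(point_estimate_mg_kg=10.0, sigma=0.5)
print(summary)
```

A deterministic assessment would report only the 10 mg/kg point value; the probabilistic version exposes the 5th-95th percentile interval, which is the quantity a risk manager can communicate honestly.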
Microbes play an important role in the production of fermented foods. Optimization of fermentation processes or starter culture production has traditionally been a trial-and-error approach guided by expert knowledge of the fermentation process. Current developments in high-throughput 'omics' technologies allow more rational approaches to improving fermentation processes, both from the food functionality and the food safety perspective. Here, the authors thematically review typical bioinformatics techniques and approaches to improve various aspects of the microbial production of fermented food products and food safety.
Editorial on the Research Topic "Leveraging artificial intelligence and open science for toxicological risk assessment"
Standard SARS-CoV-2 testing protocols using nasopharyngeal/throat (NP/T) swabs are invasive and require trained medical staff for reliable sampling. In addition, it has been shown that PCR is more sensitive than antigen-based tests. Here we describe the analytical and clinical evaluation of our in-house RNA extraction-free saliva-based molecular assay for the detection of SARS-CoV-2. Analytical sensitivity of the test was equal to the sensitivity obtained in other Dutch diagnostic laboratories that process NP/T swabs. In this study, 955 individuals participated and provided NP/T swabs for routine molecular analysis (with RNA extraction) and saliva for comparison. Our RT-qPCR resulted in a sensitivity of 82.86% and a specificity of 98.94% compared to the gold standard. A false-negative ratio of 1.9% was found. The SARS-CoV-2 detection workflow described here enables easy, economical, and reliable saliva processing, useful for repeated testing of individuals.
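Sensitivity and specificity follow directly from a 2x2 confusion table against the gold standard. The sketch below shows the arithmetic; the counts are hypothetical, chosen only to reproduce percentages of the same order as those reported (the paper's actual 2x2 table is not given here).

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard test-evaluation metrics from a 2x2 confusion table
    (saliva assay vs. NP/T gold standard)."""
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return round(sensitivity * 100, 2), round(specificity * 100, 2)

# Hypothetical counts for illustration only.
sens, spec = diagnostic_metrics(tp=87, fp=9, tn=841, fn=18)
print(sens, spec)  # values near the reported 82.86 / 98.94
```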
Over the last decades, philosophers and cognitive scientists have argued that the brain constitutes only one of several contributing factors to cognition, the other factors being the body and the world. This position we refer to as Embodied Embedded Cognition (EEC). The main purpose of this paper is to consider what EEC implies for the task interpretation of the control system. We argue that the traditional view of the control system as involved in planning and decision making based on beliefs about the world runs into the problem of computational intractability. EEC views the control system as relying heavily on the naturally evolved fit between organism and environment. A ‘lazy’ control structure could be ‘ignorantly successful’ in a ‘user friendly’ world, by facilitating the transitory creation of a flexible and integrated set of behavioral layers that are constitutive of ongoing behavior. We close by discussing the types of questions this could imply for empirical research in cognitive neuroscience and robotics.
The exploitation of the metagenome for novel biocatalysts by functional screening is determined by the ability to express the respective genes in a surrogate host. The probability of recovering a certain gene thereby depends on its abundance in the environmental DNA used for library construction, the chosen insert size, the length of the target gene, and the presence of expression signals that are functional in the host organism. In this paper, we present a set of formulas that describe the chance of isolating a gene by random expression cloning, taking into account the three different modes of heterologous gene expression: independent expression, expression as a transcriptional fusion and expression as a translational fusion. Genes of the last category are shown to be virtually inaccessible by shotgun cloning because of the low frequency of functional constructs. To evaluate which part of the metagenome might in this way evade exploitation, 32 complete genome sequences of prokaryotic organisms were analysed for the presence of expression signals functional in E. coli hosts, using bioinformatics tools. Our study reveals significant differences in the predicted expression modes between distinct taxonomic groups of organisms and suggests that about 40% of the enzymatic activities may be readily recovered by random cloning in E. coli.
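The dependence on abundance, insert size, and gene length can be sketched with a Clarke-Carbon-style calculation. This is a minimal illustration in the spirit of the paper's formulas, not the exact expressions from the article; the function name, the uniform-insert assumption, and the example numbers are all assumptions.

```python
def hit_probability(genome_size, gene_len, insert_len, n_clones, p_expression=1.0):
    """Probability of recovering at least one functional clone from a
    shotgun library, assuming random insert start positions.
    p_expression models the chance the gene is expressed in the host
    (e.g. much lower when a translational fusion is required)."""
    if insert_len <= gene_len:
        return 0.0  # an insert can never contain the full gene
    # Fraction of random inserts that fully contain the target gene.
    f = (insert_len - gene_len) / genome_size
    # At least one of n_clones carries and expresses the gene.
    return 1.0 - (1.0 - f * p_expression) ** n_clones

# Illustrative numbers: 4 Mb of environmental DNA, 1.5 kb gene, 5 kb inserts.
print(hit_probability(4_000_000, 1_500, 5_000, 50_000))        # independent expression
print(hit_probability(4_000_000, 1_500, 5_000, 50_000, 0.05))  # fusion-dependent expression
```

The second call shows why fusion-dependent genes are so much harder to recover: the same library that virtually guarantees a hit under independent expression leaves a substantial chance of missing the gene when only a small fraction of constructs is functional.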
The scientific literature represents a rich source for retrieval of knowledge on associations between biomedical concepts such as genes, diseases and cellular processes. A commonly used method to establish relationships between biomedical concepts from literature is co-occurrence. Apart from its use in knowledge retrieval, the co-occurrence method is also well-suited to discover new, hidden relationships between biomedical concepts following a simple ABC-principle, in which A and C have no direct relationship but are connected via shared B-intermediates. In this paper we describe CoPub Discovery, a tool that mines the literature for new relationships between biomedical concepts. Statistical analysis using ROC curves showed that CoPub Discovery performed well over a wide range of settings and keyword thesauri. We subsequently used CoPub Discovery to search for new relationships between genes, drugs, pathways and diseases. Several of the newly found relationships were validated using independent literature sources. In addition, new predicted relationships between compounds and cell proliferation were validated and confirmed experimentally in an in vitro cell proliferation assay. The results show that CoPub Discovery is able to identify novel associations between genes, drugs, pathways and diseases that have a high probability of being biologically valid. This makes CoPub Discovery a useful tool to unravel the mechanisms behind disease, to find novel drug targets, or to find novel applications for existing drugs. © 2010 Frijters et al.
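The ABC-principle can be sketched as a small graph operation: propose a hidden A-C link whenever A and C never co-occur directly but share B-intermediates. This toy version scores candidates by counting shared Bs; the actual tool works over Medline with keyword thesauri and statistical scoring, so everything below (names, data, scoring) is illustrative.

```python
from collections import defaultdict

def abc_discovery(cooccurrences):
    """Find hidden A-C relationships: pairs that never co-occur directly
    but are connected via shared B-intermediates.
    `cooccurrences` is an iterable of (concept1, concept2) pairs."""
    neighbors = defaultdict(set)
    for a, b in cooccurrences:
        neighbors[a].add(b)
        neighbors[b].add(a)
    hidden = {}
    concepts = list(neighbors)
    for i, a in enumerate(concepts):
        for c in concepts[i + 1:]:
            if c in neighbors[a]:
                continue  # direct A-C relationship already known
            shared_b = neighbors[a] & neighbors[c]
            if shared_b:
                hidden[(a, c)] = sorted(shared_b)
    return hidden

pairs = [("geneA", "pathwayB"), ("pathwayB", "diseaseC"),
         ("geneA", "processB2"), ("processB2", "diseaseC")]
print(abc_discovery(pairs))
```

Here geneA and diseaseC never co-occur, yet both connect to pathwayB and processB2, so the pair is flagged as a candidate hidden relationship for validation.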
The methodology of biomimicry design thinking is based on and builds upon the overarching patterns that all life abides by. "Cultivating cooperative relationships" within an ecosystem is one such pattern we as humans can learn from to nurture our own mutualistic and symbiotic relationships. While form and process translations from biology to design have proven accessible to students learning biomimicry, translating biological functions in a systematic approach has proven more difficult. This study examines how higher education students can approach the gap that many companies in transition are struggling with today: thinking within the closed loops of their own ecosystem, to do good without damaging the system itself. Design students should be able to assess and advise on product design choices within such systems after graduation. We know that when tackling a design challenge, teams have difficulty sifting through the mass of information they encounter, and students and their professional clients face many obstacles when trying to implement systems thinking in their design process. While biomimicry offers guidelines and methodology, there is insufficient research on the complex, systems-level problem solving that systems thinking biomimicry requires. Through student surveys and interviews, this study looks at factors in course exercises that helped (novice) professionals initiate systems thinking methods as part of their strategy. The steps found in this research pair characteristics from student responses with matching educational steps that enabled students to develop their own approach to challenges in a systems thinking manner.
Experiences from the 2022 cohort of the semester "Design with Nature" within the Industrial Design Engineering program at The Hague University of Applied Sciences in the Netherlands have shown that the mixing and matching of connected biological design strategies to understand integrating functions and relationships within a human system is a promising first step.
Stevens LL, Whitehead C, Singhal A. Cultivating Cooperative Relationships: Identifying Learning Gaps When Teaching Students Systems Thinking Biomimicry. Biomimetics. 2022; 7(4):184. https://doi.org/10.3390/biomimetics7040184
Adverse Outcome Pathways (AOPs) are conceptual frameworks that tie an initial perturbation (molecular initiating event) to a phenotypic toxicological manifestation (adverse outcome) through a series of steps (key events). They therefore provide a standardized way to map and organize toxicological mechanistic information. As such, AOPs inform on key events underlying toxicity, thus supporting the development of New Approach Methodologies (NAMs), which aim to reduce the use of animal testing for toxicology purposes. However, the establishment of a novel AOP relies on the gathering of multiple streams of evidence and information, from available literature to knowledge databases. Often, this information is in the form of free text, also called unstructured text, which is not immediately digestible by a computer. This information is thus both tedious and increasingly time-consuming to process manually with the growing volume of data available. The advancement of machine learning provides alternative solutions to this challenge. To extract and organize information from relevant sources, it seems valuable to employ deep learning Natural Language Processing techniques. We review here some of the recent progress in the NLP field, and show how these techniques have already demonstrated value in the biomedical and toxicology areas. We also propose an approach to efficiently and reliably extract and combine relevant toxicological information from text. This data can be used to map underlying mechanisms that lead to toxicological effects and start building quantitative models, in particular AOPs, ultimately allowing animal-free human-based hazard and risk assessment.
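The simplest baseline for pulling candidate key-event relations out of free text is sentence-level co-mention of known entities. The sketch below shows that baseline only; real AOP pipelines use trained named-entity recognition and relation-classification models, and the entity lists and example sentence here are invented for illustration.

```python
import re

# Hypothetical entity dictionaries; a real system would use curated
# ontologies and a trained NER model instead of fixed string lists.
UPSTREAM_EVENTS = {"oxidative stress", "receptor activation"}
ADVERSE_OUTCOMES = {"apoptosis", "liver fibrosis"}

def comention_relations(text):
    """Return (upstream event, outcome, sentence) triples for sentences
    mentioning both an upstream key event and an adverse outcome."""
    relations = []
    # Naive sentence split on end punctuation; adequate for the toy example.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        low = sentence.lower()
        for up in sorted(UPSTREAM_EVENTS):
            for out in sorted(ADVERSE_OUTCOMES):
                if up in low and out in low:
                    relations.append((up, out, sentence.strip()))
    return relations

doc = ("Oxidative stress was followed by apoptosis in hepatocytes. "
       "No change in body weight was observed.")
for rel in comention_relations(doc):
    print(rel[:2])
```

Co-mention over-generates (any two entities in one sentence are linked, with no direction or causality), which is exactly the gap that the deep-learning relation-extraction techniques reviewed in the paper aim to close.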
Summary: Xpaths is a collection of algorithms that allow prediction of compound-induced molecular mechanisms of action by integrating phenotypic endpoints across different species, and that propose follow-up tests in model organisms to validate these pathway predictions. The Xpaths algorithms are applied to predict developmental and reproductive toxicity (DART) and implemented in an in silico platform called DARTpaths.