Genomics has multiplied the number of targets for new therapeutic interventions, but this has not yet led to a marked increase in pharma pipeline output. The complexity of protein function in higher-order biological systems is often underestimated, and translation from in vitro and in vivo results to the human setting frequently fails because of unforeseen toxicity and efficacy issues. Biosimulation addresses these issues by capturing the complex dynamics of interacting molecules and cells in mechanistic, predictive models. A central concept is that of the virtual patient: an encapsulation of a specific pathophysiological behaviour in a biosimulation model. The authors describe how virtual patients are being used in target identification, target validation and clinical development, and discuss challenges for the acceptance of biosimulation methods.
DOCUMENT
The exploitation of the metagenome for novel biocatalysts by functional screening is determined by the ability to express the respective genes in a surrogate host. The probability of recovering a certain gene thereby depends on its abundance in the environmental DNA used for library construction, the chosen insert size, the length of the target gene, and the presence of expression signals that are functional in the host organism. In this paper, we present a set of formulas that describe the chance of isolating a gene by random expression cloning, taking into account the three different modes of heterologous gene expression: independent expression, expression as a transcriptional fusion and expression as a translational fusion. Genes of the last category are shown to be virtually inaccessible by shotgun cloning because of the low frequency of functional constructs. To evaluate which part of the metagenome might in this way evade exploitation, 32 complete genome sequences of prokaryotic organisms were analysed for the presence of expression signals functional in E. coli hosts, using bioinformatics tools. Our study reveals significant differences in the predicted expression modes between distinct taxonomic groups of organisms and suggests that about 40% of the enzymatic activities may be readily recovered by random cloning in E. coli.
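The paper's own formulas for the three expression modes are not reproduced in this abstract, but the underlying library-coverage reasoning can be sketched with the classic Clarke-Carbon relation. A minimal Python sketch, assuming the target gene occurs at a known fraction of the environmental DNA and that clones are drawn independently (function names are illustrative, not from the paper):

```python
import math

def recovery_prob(target_fraction: float, n_clones: int) -> float:
    """Probability that at least one of n_clones carries the target,
    when each clone carries it independently with probability target_fraction."""
    return 1.0 - (1.0 - target_fraction) ** n_clones

def clones_required(target_fraction: float, desired_prob: float) -> int:
    """Clarke-Carbon estimate: number of random clones needed to recover a
    target present at target_fraction of the library with desired_prob."""
    return math.ceil(math.log(1.0 - desired_prob) /
                     math.log(1.0 - target_fraction))
```

A gene-length or expression-signal penalty, as discussed in the paper for transcriptional and translational fusions, would enter this sketch as a reduced effective `target_fraction`.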
DOCUMENT
How do we encourage students to choose a future in agrifood? Not the way we always have. The labor market shows a growing shortage, while the agrifood sector plays a significant role in achieving global food security and environmental sustainability. Secondary school students are hardly aware of what they could contribute to these social, ecological and economic issues. The sector needs to expand the range of career opportunities in the agriculture-food-nutrition-environment nexus. Most importantly, it must create incentives that encourage young people to see agrifood as one of the best options for a career. We developed inspiring learning materials to raise this awareness in secondary schools in the Netherlands. A Genomics Cookbook that uses food metaphors to explain biological principles is highly appreciated by both teachers and students. It is a way to increase the inflow into green colleges and universities, and thereby the outflow of graduates into the agrifood sector.
MULTIFILE
Publication accompanying the inaugural lecture delivered on 20 May 2015 by Dr. C.M. Kreike on accepting the post of lector in Green Biotechnology at Hogeschool Inholland, Amsterdam.
DOCUMENT
What options are open for people (citizens, politicians, and other nonscientists) to become actively involved in and anticipate new directions in the life sciences? In addressing this question, this article focuses on the start of the Human Genome Project (1985-1990). By contrasting various models of democracy (liberal, republican, deliberative), I examine the democratic potential the models provide for citizens' involvement in setting priorities and funding patterns related to big science projects. To enhance the democratizing of big science projects and give citizens opportunities to reflect, anticipate, and negotiate on new directions in science and technology at a global level, liberal democracy with its national scope and representative structure does not suffice. Although republican (communicative) and deliberative (associative) democracy models meet the need for greater citizen involvement, the ways to achieve the ideal at a global level still remain to be developed.
DOCUMENT
Summary: Xpaths is a collection of algorithms that predict compound-induced molecular mechanisms of action by integrating phenotypic endpoints across different species, and that propose follow-up tests in model organisms to validate these pathway predictions. The Xpaths algorithms are applied to predict developmental and reproductive toxicity (DART) and are implemented in an in silico platform called DARTpaths.
DOCUMENT
BACKGROUND: In many genomics projects, numerous lists of biological identifiers are produced. It is often useful to see the overlap between different lists, enabling researchers to quickly observe similarities and differences between the data sets they are analyzing. One of the most popular ways to visualize the overlap and differences between data sets is the Venn diagram: a diagram consisting of two or more circles in which each circle corresponds to a data set and the overlap between the circles corresponds to the overlap between the data sets. Venn diagrams are especially useful when they are 'area-proportional', i.e. the sizes of the circles and of the overlaps correspond to the sizes of the data sets. Currently no programs are available that can create area-proportional Venn diagrams connected to a wide range of biological databases.
RESULTS: We designed a web application named BioVenn to summarize the overlap between two or three lists of identifiers, using area-proportional Venn diagrams. The user only needs to enter these lists of identifiers in the text boxes and push the submit button. Parameters such as colors and text size can be adjusted easily through the web interface, and the position of the text can be adjusted by drag-and-drop. The output Venn diagram can be shown as an SVG or PNG image embedded in the web application, or as a standalone SVG or PNG image; the latter option is useful for batch queries. Besides the Venn diagram, BioVenn outputs lists of identifiers for each of the resulting subsets. If an identifier is recognized as belonging to one of the supported biological databases, the output is linked to that database. Finally, BioVenn can map Affymetrix and EntrezGene identifiers to Ensembl genes.
CONCLUSION: BioVenn is an easy-to-use web application for generating area-proportional Venn diagrams from lists of biological identifiers. It supports a wide range of identifiers from the most-used biological databases currently available. Its implementation on the World Wide Web makes it available on any computer with an internet connection, independent of operating system and without the need to install programs locally. BioVenn is freely accessible at http://www.cmbi.ru.nl/cdd/biovenn/.
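BioVenn's layout algorithm is not described in this abstract, but the core geometric step of any area-proportional two-circle Venn diagram is to pick radii proportional to the square roots of the set sizes and then solve for the centre distance whose lens-shaped intersection matches the overlap. A sketch of that step (not BioVenn's actual implementation), assuming inputs `size_a`, `size_b` and an overlap `size_ab` smaller than either set:

```python
import math

def lens_area(r1: float, r2: float, d: float) -> float:
    """Area of the intersection of two circles with radii r1, r2
    whose centres are a distance d apart."""
    if d >= r1 + r2:
        return 0.0                              # disjoint circles
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2       # smaller circle contained
    a1 = r1 ** 2 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = r2 ** 2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2) *
                          (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - tri

def venn_layout(size_a: float, size_b: float, size_ab: float):
    """Radii and centre distance for an area-proportional two-set Venn,
    assuming 0 < size_ab < min(size_a, size_b)."""
    r1 = math.sqrt(size_a / math.pi)            # circle area == set size
    r2 = math.sqrt(size_b / math.pi)
    lo, hi = abs(r1 - r2), r1 + r2              # lens_area falls monotonically
    for _ in range(100):                        # bisect for lens_area == size_ab
        mid = (lo + hi) / 2
        if lens_area(r1, r2, mid) > size_ab:
            lo = mid
        else:
            hi = mid
    return r1, r2, (lo + hi) / 2
```

Because the lens area decreases monotonically as the centres move apart, a plain bisection is enough; 100 iterations drive the distance error far below drawing precision.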
DOCUMENT
International Innovation is the leading global resource for the wider science, technology and research communities, dedicated to disseminating the latest science, research and technological innovations worldwide. More information and a complimentary subscription offer can be found at: www.researchmedia.eu
DOCUMENT
The relationship between race and biology is complex. In contemporary medical science, race is a social construct that is measured via self-identification of study participants. But even though race has no biological essence, it is often used as a variable in medical guidelines (e.g., treatment recommendations specific to Black people with hypertension). Such recommendations are based on clinical trials in which there was a significant correlation between self-identified race and actual, but often unmeasured, health-related factors such as (pharmaco)genetics, diet, sun exposure, etc. Many teachers are insufficiently aware of this complexity. In their classes, they (unintentionally) portray self-reported race as having a biological essence. This may cause students to see people of shared race as biologically or genetically homogeneous, and to believe that race-based recommendations are true for all individuals (rather than reflecting the average of a heterogeneous group). This medicalizes race and reinforces already existing healthcare disparities. Moreover, students may fail to learn that the relation between race and health is easily biased by factors such as socioeconomic status, racism, ancestry, and environment, and that this limits the generalizability of race-based recommendations. We observed that the clinical case vignettes we use in our teaching contain many stereotypes and biases, and do not generally reflect the diversity of actual patients. This guide, written by clinical pharmacology and therapeutics teachers, aims to help our colleagues and teachers in other health professions to reflect on and improve our teaching on race-based medical guidelines and to make our clinical case vignettes more inclusive and diverse.
MULTIFILE
Because of both the shortcomings of existing risk assessment methodologies and the newly available machine learning tools to predict hazard and risk, there has been an emerging emphasis on probabilistic risk assessment. Increasingly sophisticated AI models can be applied to a plethora of exposure and hazard data not only to obtain predictions for particular endpoints but also to estimate the uncertainty of the risk assessment outcome. This provides the basis for a shift from deterministic to more probabilistic approaches, but comes at the cost of increased complexity, as the process requires more resources and human expertise. Challenges remain to be overcome before a probabilistic paradigm is fully embraced by regulators. Building on an earlier white paper (Maertens et al., 2022), a workshop discussed the prospects, challenges and path forward for implementing such AI-based probabilistic hazard assessment. Moving forward, we will see the transition from categorical to probabilistic, dose-dependent hazard outcomes, the application of internal thresholds of toxicological concern for data-poor substances, the acknowledgement of user-friendly open-source software, a rise in the expertise required of toxicologists to understand and interpret artificial intelligence models, and honest communication of uncertainty in risk assessment to the public.
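As an illustration of the deterministic-to-probabilistic shift described above, the sketch below (hypothetical models and distributions, not any regulatory tool or the workshop's method) propagates two sources of uncertainty, disagreement between an ensemble of dose-response models and variability in exposure, into a distribution of predicted hazard, reported as a median with a 5th-95th percentile interval rather than a single point estimate:

```python
import math
import random

def probabilistic_hazard(dose, models, n_draws=5000, seed=0):
    """Monte Carlo sketch: sample over model choice (structural uncertainty)
    and exposure variability, returning percentiles of predicted hazard."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        model = rng.choice(models)                # structural (model) uncertainty
        d = dose * rng.lognormvariate(0.0, 0.1)   # exposure variability
        draws.append(model(d))
    draws.sort()
    return {"p05": draws[int(0.05 * n_draws)],
            "median": draws[n_draws // 2],
            "p95": draws[int(0.95 * n_draws)]}

# Three hypothetical logistic dose-response curves standing in for an AI ensemble.
models = [lambda d, k=k: 1.0 / (1.0 + math.exp(-k * (d - 10.0)))
          for k in (0.5, 0.8, 1.2)]
result = probabilistic_hazard(10.0, models)
```

The width of the reported interval is what a deterministic assessment discards: a wide 5th-95th percentile band signals honest uncertainty that can be communicated to regulators and the public alongside the point estimate.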
DOCUMENT