Computational thinking (CT) skills are crucial for every modern profession in which large amounts of data are processed. In K-12 curricula, CT skills are often taught in separate programming courses. However, without specific instruction, CT skills developed while learning to program in such a separate course do not automatically transfer to other domains in the curriculum. In modern professions, CT is typically applied in the context of a specific domain. Learning CT skills within domains other than computer science could therefore be of great value. CT and domain-specific subjects can be combined in different ways. In the CT literature, a distinction can be made among CT applications that substitute, augment, modify or redefine the original subject. At the substitution level, CT replaces existing exercises, but CT is not necessary for reaching the learning outcomes. At the redefinition level, CT changes the questions that can be posed within the subject, and learning objectives and assessment are integrated. In this short paper, we present examples of how CT can be combined with history, mathematics, biology and language subjects at all four levels. These examples, and the framework on which they are based, provide a guideline for design-based research on the integration of CT and subject teaching.
DOCUMENT
Small RNAs are important regulators of genome function, yet predicting them in genomes remains a major computational challenge. Statistical analyses of pre-miRNA sequences have indicated that, in contrast to other classes of non-coding RNA, their secondary (2D) structure tends to have a minimum free energy (MFE) significantly lower than the MFE values of equivalently randomized sequences with the same nucleotide composition. Computing large numbers of MFEs is, however, too intensive to allow genome-wide screening.
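The comparison described above can be sketched as a simple randomization test: compute the MFE of a candidate pre-miRNA, then compare it with the MFEs of shuffled versions of the same sequence that preserve nucleotide composition. The sketch below assumes the ViennaRNA Python bindings (RNA.fold) are installed; the function names and the number of shuffles are illustrative choices, not part of the original work.

```python
# Minimal sketch, assuming the ViennaRNA Python bindings are available.
import random
import statistics

import RNA  # ViennaRNA package (assumed installed)


def mfe(seq: str) -> float:
    """Minimum free energy (kcal/mol) of the predicted secondary structure."""
    _structure, energy = RNA.fold(seq)
    return energy


def shuffled_mfes(seq: str, n: int = 100) -> list[float]:
    """MFEs of n mononucleotide-shuffled sequences (same composition)."""
    letters = list(seq)
    energies = []
    for _ in range(n):
        random.shuffle(letters)
        energies.append(mfe("".join(letters)))
    return energies


def mfe_zscore(seq: str, n: int = 100) -> float:
    """Z-score of the candidate's MFE relative to the shuffled background."""
    background = shuffled_mfes(seq, n)
    mu = statistics.mean(background)
    sd = statistics.stdev(background)
    return (mfe(seq) - mu) / sd


# A strongly negative z-score indicates an MFE lower than expected by chance,
# the signature reported for pre-miRNA sequences.
```

Running such a test per candidate sequence illustrates why genome-wide application is costly: each z-score requires on the order of a hundred folding computations.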
DOCUMENT
Because of the shortcomings of existing risk assessment methodologies, and because of newly available machine learning tools for predicting hazard and risk, there has been an emerging emphasis on probabilistic risk assessment. Increasingly sophisticated AI models can be applied to a plethora of exposure and hazard data to obtain not only predictions for particular endpoints but also estimates of the uncertainty of the risk assessment outcome. This provides the basis for a shift from deterministic to more probabilistic approaches, but it comes at the cost of increased complexity, as the process requires more resources and human expertise. Several challenges still need to be overcome before a probabilistic paradigm is fully embraced by regulators. Building on an earlier white paper (Maertens et al., 2022), a workshop discussed the prospects, challenges and path forward for implementing such AI-based probabilistic hazard assessment. Moving forward, we expect a transition from categorical to probabilistic, dose-dependent hazard outcomes, the application of internal thresholds of toxicological concern for data-poor substances, the recognition of user-friendly open-source software, a rise in the AI expertise required of toxicologists to understand and interpret such models, and honest communication of uncertainty in risk assessment to the public.
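To illustrate the shift from a deterministic to a probabilistic hazard call, the sketch below uses a bootstrap ensemble to produce a distribution of predicted hazard probabilities rather than a single yes/no label. This is not the workshop's method; the data, descriptors and endpoint are hypothetical, and scikit-learn is assumed to be available.

```python
# Illustrative sketch only: bootstrap ensemble giving a hazard probability
# with an uncertainty estimate, on synthetic (hypothetical) data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                          # hypothetical chemical descriptors
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)   # hypothetical binary endpoint


def probabilistic_hazard(x_new: np.ndarray, n_boot: int = 50) -> tuple[float, float]:
    """Mean predicted hazard probability and its spread across bootstrap models."""
    probs = []
    for seed in range(n_boot):
        Xb, yb = resample(X, y, random_state=seed)     # bootstrap resample
        model = RandomForestClassifier(n_estimators=100, random_state=seed)
        model.fit(Xb, yb)
        probs.append(model.predict_proba(x_new.reshape(1, -1))[0, 1])
    probs = np.asarray(probs)
    return probs.mean(), probs.std()                   # point estimate + uncertainty


p, u = probabilistic_hazard(rng.normal(size=8))
print(f"hazard probability ~ {p:.2f} +/- {u:.2f}")
```

Reporting the spread alongside the point estimate is one simple way to communicate the uncertainty of a model-based hazard prediction rather than a categorical outcome.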
DOCUMENT