New approach methodologies for predicting human cardiotoxicity are of interest to support or even replace in vivo-based drug safety testing. This study presents an in vitro–in silico approach to predict the effect of inter-individual and inter-ethnic kinetic variations on the cardiotoxicity of R- and S-methadone in the Caucasian and Chinese populations. In vitro cardiotoxicity data and metabolic data, obtained via two approaches using either individual human liver microsomes or recombinant cytochrome P450 enzymes (rCYPs), were integrated with physiologically based kinetic (PBK) models and Monte Carlo simulations to predict inter-individual and inter-ethnic variations in methadone-induced cardiotoxicity. Chemical-specific adjustment factors were defined and used to derive dose–response curves for sensitive individuals. Our simulations indicated that the Chinese population is more sensitive to methadone-induced cardiotoxicity, with Margin of Safety values for both methadone enantiomers generally two-fold lower than those for Caucasians. Individual PBK models using microsomes and PBK models using rCYPs, each combined with Monte Carlo simulations, predicted similar inter-individual and inter-ethnic variations in methadone-induced cardiotoxicity. The present study illustrates how inter-individual and inter-ethnic variations in cardiotoxicity can be predicted by combining in vitro toxicity and metabolic data, PBK modelling and Monte Carlo simulations. This novel methodology can be used to enhance cardiac safety evaluations and the risk assessment of chemicals.
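A minimal sketch of the Monte Carlo step described above, assuming a toy one-compartment stand-in for the PBK model and an arbitrary lognormal spread on hepatic clearance; the dose, volume of distribution and variability below are illustrative placeholders, not the study's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

def peak_plasma_conc(dose_mg, clearance_l_h, v_d_l=300.0):
    """Toy one-compartment stand-in for a PBK model: plasma concentration
    ~1 h after a single oral dose, assuming complete absorption."""
    ke = clearance_l_h / v_d_l                  # elimination rate constant (1/h)
    return dose_mg / v_d_l * np.exp(-ke * 1.0)

# Monte Carlo over inter-individual variation in hepatic clearance
# (the lognormal spread of ~40% CV is an assumption for illustration).
n = 10_000
cl = rng.lognormal(mean=np.log(20.0), sigma=0.4, size=n)   # L/h
cmax = peak_plasma_conc(dose_mg=60.0, clearance_l_h=cl)

# Chemical-specific adjustment factor for kinetics: internal dose of a
# sensitive (99th) percentile individual relative to the median individual.
csaf = np.percentile(cmax, 99) / np.median(cmax)
print(f"CSAF (99th percentile / median Cmax): {csaf:.2f}")
```

The adjustment factor obtained this way is what shifts the dose–response curve from the average to the sensitive individual.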
Epidemiological miner cohort data used to estimate lung cancer risks related to occupational radon exposure often lack cohort-wide information on exposure to tobacco smoke, a potential confounder and important effect modifier. We have developed a method to project data on smoking habits from a case-control study onto an entire cohort by means of a Monte Carlo resampling technique. As a proof of principle, this method is tested on a subcohort of 35,084 former uranium miners employed at the WISMUT company (Germany), with 461 lung cancer deaths in the follow-up period 1955–1998. After applying the proposed imputation technique, a biologically based carcinogenesis model is employed to analyse the cohort's lung cancer mortality data. A sensitivity analysis based on a set of 200 independent projections with subsequent model analyses yields narrow distributions of the free model parameters, indicating that parameter values are relatively stable and largely independent of the individual projection. This technique thus offers a way to account for unknown smoking habits, enabling us to disentangle the risks related to radon, to smoking, and to their combination.
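A sketch of the resampling idea, assuming (purely for illustration) that smoking status is drawn within birth-year strata from the distribution observed in the case-control sample; the strata, categories and counts below are hypothetical:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical case-control records: smoking category per birth-year band.
cc = pd.DataFrame({
    "birth_band": ["<1930", "<1930", "1930-45", "1930-45", ">1945"],
    "smoking":    ["never", "current", "current", "former", "never"],
})

# Empirical smoking distribution within each stratum.
probs = cc.groupby("birth_band")["smoking"].value_counts(normalize=True)

def project_smoking(cohort, n_projections=200):
    """Draw one imputed smoking status per miner and projection from the
    stratum-specific distribution (a sketch of the resampling step)."""
    out = []
    for _ in range(n_projections):
        draws = [rng.choice(probs[band].index, p=probs[band].values)
                 for band in cohort["birth_band"]]
        out.append(draws)
    return np.array(out)          # shape: (n_projections, n_miners)

cohort = pd.DataFrame({"birth_band": ["<1930", "1930-45", ">1945"]})
print(project_smoking(cohort, n_projections=3))
```

Refitting the carcinogenesis model on each projection then yields the parameter distributions whose narrowness the sensitivity analysis reports.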
PURPOSE: Advanced radiotherapy treatments require appropriate quality assurance (QA) to verify 3D dose distributions. Moreover, increasing patient numbers demand efficient QA methods. In this study, a time-efficient method that combines model-based QA and measurement-based QA was developed, i.e., the hybrid-QA. The purpose of this study was to determine the reliability of the model-based QA and to evaluate the time efficiency of the hybrid-QA method. METHODS: The accuracy of the model-based QA was determined by comparison of COMPASS-calculated dose with Monte Carlo calculations for heterogeneous media. In total, 330 intensity-modulated radiation therapy (IMRT) treatment plans were evaluated based on the mean gamma index (GI) with criteria of 3%/3 mm and a classification of PASS (GI ≤ 0.4), EVAL (0.4 < GI < 0.6), and FAIL (GI ≥ 0.6). Agreement between model-based QA and measurement-based QA was determined for 48 treatment plans, and linac stability was verified over 15 months. Finally, the time efficiency improvement of the hybrid-QA was quantified for four representative treatment plans. RESULTS: COMPASS-calculated dose was in agreement with Monte Carlo dose, with a maximum error of 3.2% in heterogeneous media of high density (2.4 g/cm³). Hybrid-QA results for IMRT treatment plans showed an excellent PASS rate of 98% for all cases. Model-based QA was in agreement with measurement-based QA, as shown by a minimal difference in GI of 0.03 ± 0.08. Linac stability was high, with an average GI of 0.28 ± 0.04. The hybrid-QA method resulted in a time efficiency improvement of 15 min per treatment plan QA compared to measurement-based QA. CONCLUSIONS: The hybrid-QA method is adequate for efficient and accurate 3D dose verification. It combines the time efficiency of model-based QA with the reliability of measurement-based QA and is suitable for implementation within any radiotherapy department.
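The three-way classification lends itself to a one-line routing rule; a sketch below, where the mapping of EVAL/FAIL outcomes to follow-up actions is our reading of the hybrid workflow rather than a quoted protocol:

```python
def classify_plan(mean_gamma: float) -> str:
    """Classify a plan by its mean gamma index (3%/3 mm criteria), using
    the PASS/EVAL/FAIL thresholds reported in the study."""
    if mean_gamma <= 0.4:
        return "PASS"   # model-based QA suffices (assumed routing)
    if mean_gamma < 0.6:
        return "EVAL"   # inspect further before release (assumed routing)
    return "FAIL"       # fall back to measurement-based QA (assumed routing)

for gi in (0.28, 0.45, 0.70):
    print(f"mean GI {gi:.2f} -> {classify_plan(gi)}")
```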
We propose a novel deception detection system based on Rapid Serial Visual Presentation (RSVP). One motivation for the new method is to present stimuli on the fringe of awareness, such that it is more difficult for deceivers to confound the deception test using countermeasures. The proposed system is able to detect identity deception (by using the first names of participants) with a 100% hit rate (at an alpha level of 0.05). To achieve this, we extended the classic Event-Related Potential (ERP) techniques (such as peak-to-peak) by applying Randomisation, a form of Monte Carlo resampling, which we used to detect deception at an individual level. In order to make the deployment of the system simple and rapid, we utilised data from three electrodes only: Fz, Cz and Pz. We then combined data from the three electrodes using Fisher's method so that each participant was assigned a single p-value, which represents the combined probability that a specific participant was being deceptive. We also present subliminal salience search as a general method to determine what participants find salient by detecting breakthrough into conscious awareness using EEG.
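The per-participant statistic combines two standard ingredients named above: a randomisation (permutation) test per electrode and Fisher's method across the three electrodes. A minimal sketch, with made-up amplitudes and placeholder p-values standing in for real ERP data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def permutation_pvalue(probe_amp, irrelevant_amp, n_perm=5000):
    """Randomisation test: is the mean peak-to-peak amplitude for the probe
    stimulus larger than for irrelevant stimuli, within one participant?"""
    observed = probe_amp.mean() - irrelevant_amp.mean()
    pooled = np.concatenate([probe_amp, irrelevant_amp])
    n = len(probe_amp)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if pooled[:n].mean() - pooled[n:].mean() >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Illustrative peak-to-peak amplitudes (µV) for one electrode.
probe = rng.normal(8.0, 2.0, size=30)
irrelevant = rng.normal(6.5, 2.0, size=90)
p_fz = permutation_pvalue(probe, irrelevant)

# Combine per-electrode p-values (Fz, Cz, Pz) with Fisher's method.
p_cz, p_pz = 0.08, 0.04                     # placeholders for the other electrodes
_, p_combined = stats.combine_pvalues([p_fz, p_cz, p_pz], method="fisher")
print(f"combined p = {p_combined:.4f}")     # deception called at alpha = 0.05
```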
Spontaneous speech is an important source of information for aphasia research. It is essential to collect the right amount of data: enough for distinctions in the data to become meaningful, but not so much that the data collection becomes too expensive or places an undue burden on participants. The latter issue is an ethical consideration when working with participants who find speaking difficult, such as speakers with aphasia. So, how much speech data is enough to draw meaningful conclusions? How does the uncertainty around the estimation of model parameters in a predictive model vary as a function of the length of the texts used for training?
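One way to make the second question concrete is a bootstrap over truncated transcripts: estimate a parameter from the first N tokens and track how the confidence interval narrows as N grows. The sketch below uses a synthetic binary feature with an arbitrary rate of 0.35 purely as a stand-in for real speech data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "transcript": 1 marks an utterance containing a target feature
# (e.g., a verb), 0 its absence; the rate 0.35 is an arbitrary assumption.
transcript = rng.binomial(1, 0.35, size=2000)

def ci_width(sample, n_boot=2000):
    """Width of the 95% bootstrap CI for the estimated feature rate."""
    boots = [rng.choice(sample, size=len(sample), replace=True).mean()
             for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return hi - lo

for n_tokens in (50, 200, 800):
    print(n_tokens, round(ci_width(transcript[:n_tokens]), 3))
```

The shrinking interval width as a function of N is exactly the uncertainty-versus-text-length relationship the question asks about.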
This thesis concerns the application of agent technology in manufacturing and product support. In this context, an agent is an autonomously operating software entity built to achieve a certain goal, which communicates with its environment and can carry out actions independently. Modern manufacturing systems aim to keep the time from design to production as short as possible and to tailor production to the wishes of the individual end user. Especially the latter goal does not fit the concept of mass production. A method must be found to manufacture small quantities, or even unique products, at a low cost price. To achieve this, dedicated low-cost production platforms were developed for this research. We call these reconfigurable production machines equiplets. A collection of these equiplets placed in a grid arrangement and coupled by a fast network connection is able to produce a number of different products simultaneously. We call this flexible parallel production. Agent technology was applied for the software infrastructure, in which two types of agents play a leading role. A product agent is responsible for the realisation of a single product, while the production machines are represented by so-called equiplet agents. The product agent knows what has to be done to make a product, whereas the equiplet agent knows how one or more production steps are to be carried out. The concept proposed here differs in many respects from standard mass production: every product under construction follows its own, possibly unique, path along the equiplets; production is scheduled per product rather than per batch; and there is no production line. This thesis presents the software architecture and describes solutions for route planning in which the number of transfers between equiplets is minimised, a scheduling approach based on scheduling schemes as applied in real-time operating systems, and a transport system based on autonomous vehicles. The product agent plays an important role in all these solutions. (from the summary of the thesis) SIKS Dissertation Series No. 2014-31. The research reported in this thesis has been carried out under the auspices of SIKS, the Dutch Research School for Information and Knowledge Systems.
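A minimal sketch of the two agent roles and the transfer-minimising route choice; the class names and the greedy selection rule are illustrative, not the thesis's actual implementation:

```python
class EquipletAgent:
    """Knows HOW to perform one or more production steps."""
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)

    def can_perform(self, step):
        return step in self.capabilities


class ProductAgent:
    """Knows WHAT steps a single product needs; plans a route along equiplets."""
    def __init__(self, steps):
        self.steps = steps

    def plan_route(self, equiplets):
        route = []
        for step in self.steps:
            candidates = [e for e in equiplets if e.can_perform(step)]
            if not candidates:
                raise RuntimeError(f"no equiplet offers step {step!r}")
            # Prefer staying on the current equiplet to minimise transfers.
            current = route[-1] if route else None
            route.append(current if current in candidates else candidates[0])
        return [(step, eq.name) for step, eq in zip(self.steps, route)]


grid = [EquipletAgent("EQ1", {"drill", "glue"}),
        EquipletAgent("EQ2", {"glue", "test"})]
print(ProductAgent(["drill", "glue", "test"]).plan_route(grid))
```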
Aim: To optimise a set of exposure factors, at the lowest effective dose, to delineate spinal curvature with the modified Cobb method in a full-spine examination using computed radiography (CR) for a 5-year-old paediatric anthropomorphic phantom. Methods: Images were acquired by varying a set of parameters: position (antero-posterior (AP), postero-anterior (PA) and lateral), kilovoltage peak (kVp) (66-90), source-to-image distance (SID) (150-200 cm), broad focus and the use of a grid (grid in/out), to analyse the impact on effective dose (E) and image quality (IQ). IQ was analysed using two approaches: objective (contrast-to-noise ratio, CNR) and perceptual, using 5 observers. Monte Carlo modelling was used for dose estimation. Cohen's kappa coefficient was used to calculate inter-observer variability. The angle was measured using Cobb's method on lateral projections under different imaging conditions. Results: PA yielded the lowest effective dose (0.013 mSv) compared to AP (0.048 mSv) and lateral (0.025 mSv). The exposure parameters that allowed the lowest dose were 200 cm SID, 90 kVp, broad focus and grid out for paediatrics using an Agfa CR system. Thirty-seven images were assessed for IQ, and thirty-two were classified as adequate. Cobb angle measurements varied between 16° ± 2.9° and 19.9° ± 0.9°. Conclusion: Cobb angle measurements can be performed at the lowest dose, despite a low contrast-to-noise ratio. The variation in measurements was ±2.9°, which is within the range of acceptable clinical error and without impact on clinical diagnosis. Further work is recommended on improving the sample size and on a more robust perceptual IQ assessment protocol for observers.
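The objective IQ measure is a standard region-of-interest calculation; below is a sketch with a synthetic image, using one common CNR definition (mean signal minus mean background, normalised by background noise); the ROI placement and pixel values are illustrative only:

```python
import numpy as np

def cnr(image, signal_mask, background_mask):
    """Contrast-to-noise ratio from signal/background ROIs (one common
    definition; ROI placement is chosen by the user)."""
    signal = image[signal_mask].mean()
    background = image[background_mask].mean()
    noise = image[background_mask].std()
    return abs(signal - background) / noise

# Synthetic 'image' with a brighter central region as the signal ROI.
rng = np.random.default_rng(3)
img = rng.normal(100, 5, size=(64, 64))
img[24:40, 24:40] += 20
sig = np.zeros_like(img, dtype=bool); sig[28:36, 28:36] = True
bg = np.zeros_like(img, dtype=bool);  bg[:10, :10] = True
print(f"CNR = {cnr(img, sig, bg):.1f}")
```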
This paper describes a concept in which products are equipped with agents that assist in recycling and repairing the product. These so-called product agents represent the product in cyberspace and are capable of negotiating with other products in the event of recycling or repair. Some product agents of broken products will offer spare parts, while other agents will look for spare parts to repair a broken product. On average this will extend the lifetime of a product and in some cases prevent the waste of resources. Apart from the reuse of spare parts, these agents will also help to locate rare elements in a device, so that these elements can be recycled more easily.
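The offer/request matching the paper envisages can be pictured with a toy exchange; the product names, parts and first-match rule below are invented for illustration:

```python
# Broken products offer their still-working parts; repairable products
# search for the part they are missing (matching logic is illustrative).
offers = {                       # product id -> parts its agent offers
    "printer-A": {"motor", "display"},
    "printer-B": {"power-supply"},
}
requests = {                     # product id -> part its agent needs
    "printer-C": "display",
    "printer-D": "gearbox",
}

for needy, part in requests.items():
    donor = next((pid for pid, parts in offers.items() if part in parts), None)
    if donor:
        offers[donor].discard(part)
        print(f"{needy}: obtain {part!r} from {donor}")
    else:
        print(f"{needy}: no donor found for {part!r}; recycle or buy new")
```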
Background: Confounding bias is a common concern in epidemiological research. Its presence is often determined by comparing exposure effects between univariable and multivariable regression models, using an arbitrary threshold of a 10% difference to indicate confounding bias. However, many clinical researchers are not aware that the use of this change-in-estimate criterion may lead to wrong conclusions when applied to logistic regression coefficients. This is due to a statistical phenomenon called noncollapsibility, which manifests itself in logistic regression models. This paper aims to clarify the role of noncollapsibility in logistic regression and to provide guidance in determining the presence of confounding bias. Methods: A Monte Carlo simulation study was designed to uncover patterns of confounding bias and noncollapsibility effects in logistic regression. An empirical data example was used to illustrate the inability of the change-in-estimate criterion to distinguish confounding bias from noncollapsibility effects. Results: The simulation study showed that, depending on the sign and magnitude of the confounding bias and the noncollapsibility effect, the difference between the effect estimates from univariable and multivariable regression models may underestimate or overestimate the magnitude of the confounding bias. Because of the noncollapsibility effect, multivariable regression analysis and inverse probability weighting provided different but valid estimates of the confounder-adjusted exposure effect. In our data example, confounding bias was underestimated by the change in estimate due to the presence of a noncollapsibility effect. Conclusion: In logistic regression, the difference between the univariable and multivariable effect estimates might reflect not only confounding bias but also a noncollapsibility effect. Ideally, the set of confounders is determined at the study design phase and based on subject-matter knowledge. To quantify confounding bias, one could compare the unadjusted exposure effect estimate with the estimate from an inverse probability weighted model.
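Noncollapsibility is easy to reproduce in a few lines: simulate a strong risk factor that is independent of the exposure, so that there is no confounding by construction, and compare the univariable and covariate-adjusted logistic coefficients. A sketch with arbitrary effect sizes:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2024)
n = 200_000

# Z is a strong risk factor but INDEPENDENT of exposure X: no confounding.
z = rng.binomial(1, 0.5, n)
x = rng.binomial(1, 0.5, n)
p = 1 / (1 + np.exp(-(-2.0 + 1.0 * x + 2.0 * z)))   # true conditional log-OR for X is 1.0
y = rng.binomial(1, p)

uni = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
multi = sm.Logit(y, sm.add_constant(np.column_stack([x, z]))).fit(disp=0)

print(f"univariable   log-OR: {uni.params[1]:.3f}")   # attenuated: noncollapsibility
print(f"multivariable log-OR: {multi.params[1]:.3f}") # ~1.0, the conditional effect
# The gap between the two is NOT confounding bias: Z and X are independent.
```

Because an inverse probability weighted model targets the marginal effect, comparing it with the unadjusted estimate isolates confounding bias from the noncollapsibility effect, which is the comparison the conclusion recommends.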