Using fMRI, we studied cerebral activations in 24 classically trained keyboard performers and 12 musically unskilled control subjects. Two groups of musicians were recruited: improvising (n=12) and score-dependent (non-improvising) musicians (n=12). While listening to both familiar and unfamiliar music, subjects either (covertly) appraised the presented music performance or imagined they were playing the music themselves. We hypothesized that improvising musicians would exhibit enhanced efficiency of audiomotor transformation, reflected by stronger ventral premotor activation. Statistical Parametric Mapping revealed that, while virtually 'playing along' with the music, improvising musicians exhibited activation of a right-hemisphere distribution of cerebral areas including posterior-superior parietal and dorsal premotor cortex. Involvement of these right-hemisphere dorsal stream areas suggests that improvising musicians recruited an amodal spatial processing system subserving pitch-to-space transformations to facilitate their virtual motor performance. Score-dependent musicians recruited a primarily left-hemisphere pattern of motor areas together with the posterior part of the right superior temporal sulcus, suggesting a relationship between aural discrimination and symbolic representation. Activations in bilateral auditory cortex were significantly larger for improvising musicians than for score-dependent musicians, suggesting enhanced top-down effects on aural perception. Our results suggest that learning to play a musical instrument primarily from notation predisposes musicians toward aural identification and discrimination, whereas learning by improvisation involves audio-spatial-motor transformations, not only during performance but also during perception.
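As a hedged illustration of the group comparison reported above, the following sketch sets up a second-level (between-group) analysis of per-subject 'imagined playing' contrast maps using nilearn rather than the SPM software named in the abstract; the file names, smoothing kernel, and design matrix are assumptions for illustration only.

```python
# Minimal sketch of a two-group second-level comparison; not the authors' SPM pipeline.
import pandas as pd
from nilearn.glm.second_level import SecondLevelModel

# One first-level contrast image per subject for the "imagined playing" condition
# (hypothetical file names).
improvisers = [f"sub-improv{i:02d}_play_contrast.nii.gz" for i in range(1, 13)]
score_dep = [f"sub-score{i:02d}_play_contrast.nii.gz" for i in range(1, 13)]
maps = improvisers + score_dep

# Design matrix coding group membership (improvising = 1, score-dependent = 0)
# plus an intercept; rows follow the order of `maps`.
design = pd.DataFrame({
    "improvising": [1] * 12 + [0] * 12,
    "intercept": [1] * 24,
})

model = SecondLevelModel(smoothing_fwhm=8.0).fit(maps, design_matrix=design)
# Positive values: areas more active in improvising than in score-dependent musicians.
z_map = model.compute_contrast("improvising", output_type="z_score")
```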
Background and purpose: Automated approaches are widely implemented for dose optimization in radiotherapy treatment planning. This study systematically investigates how to configure automatic planning in order to create the best possible plans. Materials and methods: Automatic plans were generated using protocol-based automatic iterative optimization. Starting from a simple automation protocol consisting of the constraints for targets and organs at risk (OARs), the performance of the automatic approach was evaluated in terms of target coverage, OAR sparing, conformity, beam complexity, and plan quality. More complex protocols were systematically explored to improve the quality of the automatic plans. The protocols could be improved by adding a dose goal on the outer 2 mm of the PTV, by setting goals on strategically chosen subparts of OARs, by adding goals for conformity, and by limiting the leaf motion. For prostate plans, an automated post-optimization procedure had to be developed to achieve precise control over the dose distribution. Automatic and manually optimized plans were compared for 20 head and neck (H&N), 20 prostate, and 20 rectum cancer patients. Results: Based on simple automation protocols, the automatic optimizer was not always able to generate adequate treatment plans. With the improved final configurations for the three sites, the dose in the automatic plans was lower than in the manual plans for 12 of the 13 evaluated OARs. In blind tests, the automatic plans were preferred in 80% of cases. Conclusions: With adequate, advanced protocols, the automatic planning approach is able to create high-quality treatment plans.
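To make the notion of an automation protocol concrete, the sketch below shows one way the simple and improved protocols described above could be represented as configuration objects; the class names, structure names, priorities, and all dose values are illustrative assumptions, not the clinical protocols used in this study.

```python
# Illustrative protocol structures only; values and names are placeholders.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DoseGoal:
    structure: str     # target, OAR, or auxiliary structure
    goal_type: str     # e.g. "min_dose", "max_dose", "mean_dose"
    dose_gy: float
    priority: int      # relative weight in the iterative optimization

@dataclass
class AutoPlanProtocol:
    prescription_gy: float
    goals: List[DoseGoal] = field(default_factory=list)
    max_leaf_motion_cm_per_deg: Optional[float] = None  # limits beam complexity

# A simple protocol: only constraints for the target and the OARs.
simple = AutoPlanProtocol(
    prescription_gy=70.0,
    goals=[
        DoseGoal("PTV", "min_dose", 66.5, priority=100),
        DoseGoal("parotid_left", "mean_dose", 26.0, priority=50),
        DoseGoal("spinal_cord", "max_dose", 50.0, priority=90),
    ],
)

# The improved protocol adds a goal on the outer 2 mm of the PTV, goals on
# strategically chosen OAR subparts, a conformity goal, and a leaf-motion limit.
improved = AutoPlanProtocol(
    prescription_gy=70.0,
    goals=simple.goals + [
        DoseGoal("PTV_outer_2mm", "min_dose", 65.0, priority=80),
        DoseGoal("parotid_left_near_PTV", "max_dose", 40.0, priority=60),
        DoseGoal("conformity_ring", "max_dose", 45.0, priority=40),
    ],
    max_leaf_motion_cm_per_deg=0.3,
)
```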
Functional Magnetic Resonance Imaging (fMRI) was used to study the activation of cerebral motor networks during auditory perception of music in professional keyboard musicians (n=12). In the activation paradigm, subjects listened to two-part polyphonic music while either critically appraising the performance or imagining they were performing it themselves. Two-part polyphonic audition and bimanual motor imagery circumvented a hemisphere bias associated with the convention of playing the melody with the right hand. Both tasks activated ventral premotor and auditory cortices, bilaterally, and the right anterior parietal cortex, when contrasted with 12 musically unskilled controls. Although left ventral premotor activation was increased during imagery (compared to judgment), bilateral dorsal premotor and right posterior-superior parietal activations were specific to motor imagery. The latter suggests that musicians not only recruited their manual motor repertoire but also performed a spatial transformation from the vertically perceived pitch axis (high and low sound) to the horizontal axis of the keyboard. Imagery-specific activations in controls were seen in left dorsal parietal-premotor and supplementary motor cortices. Although these activations were weaker than in musicians, this overlapping distribution indicated the recruitment of a general 'mirror-neuron' circuitry. These two levels of sensorimotor transformations point towards common principles by which the brain organizes audition-driven music performance and visually guided task performance.
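As a hedged illustration of the task contrast described above (imagined performance versus performance appraisal), the sketch below specifies a within-subject first-level model in nilearn; the block onsets, file name, repetition time, and design are assumed for illustration and do not reproduce the study's SPM pipeline.

```python
# Minimal within-subject contrast sketch; timing and file names are hypothetical.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Hypothetical block design: alternating imagery and judgment listening blocks.
events = pd.DataFrame({
    "onset":      [0.0, 30.0, 60.0, 90.0],     # seconds
    "duration":   [24.0, 24.0, 24.0, 24.0],
    "trial_type": ["imagery", "judgment", "imagery", "judgment"],
})

model = FirstLevelModel(t_r=3.0, hrf_model="spm")
model = model.fit("sub-01_task-music_bold.nii.gz", events=events)

# Positive values: stronger activation while imagining playing than while appraising.
imagery_vs_judgment = model.compute_contrast("imagery - judgment")
```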
In this paper we propose a head detection method using range data from a stereo camera. The method is based on a technique that was originally introduced for voxel data. For use with stereo cameras, the technique is extended (1) to be applicable to stereo data, and (2) to be robust to noise and variation in environmental settings. The method consists of foreground selection, head detection, and blob separation and, to recover from misdetections, incorporates people tracking. It is tested in experiments with actual stereo data gathered from three distinct real-life scenarios. Experimental results show that the proposed method performs well in terms of both precision and recall. In addition, it performs well in highly crowded situations. From our results, we conclude that the proposed method provides a strong basis for head detection in applications that utilise stereo cameras.
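The sketch below illustrates, under stated assumptions, the first and last stages named above (foreground selection and blob separation) on a single depth map using OpenCV; the distance band, morphological filtering, and size threshold are illustrative and are not the parameters or full method of the paper.

```python
# Simplified foreground selection + blob separation on a depth map; not the paper's method.
import cv2
import numpy as np

def detect_head_candidates(depth_m: np.ndarray,
                           near: float = 0.5, far: float = 4.0,
                           min_area_px: int = 400):
    """Return centroids of foreground blobs that could correspond to heads."""
    # Foreground selection: keep pixels within a plausible distance band (metres).
    foreground = ((depth_m > near) & (depth_m < far)).astype(np.uint8) * 255
    # Suppress isolated noise pixels before blob separation.
    foreground = cv2.morphologyEx(foreground, cv2.MORPH_OPEN,
                                  np.ones((5, 5), np.uint8))
    # Blob separation via connected components; filter blobs on size.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(foreground)
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area_px]
```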
Background: Physical exercise in cancer patients is a promising intervention to improve cognition and increase brain volume, including hippocampal volume. We investigated whether a 6-month exercise intervention primarily impacts total hippocampal volume and additionally hippocampal subfield volumes, cortical thickness and grey matter volume in previously physically inactive breast cancer patients. Furthermore, we evaluated associations with verbal memory. Methods: Chemotherapy-exposed breast cancer patients (stage I-III, 2-4 years post diagnosis) with cognitive problems were included and randomized to an exercise intervention (n = 70, age = 52.5 ± 9.0 years) or control group (n = 72, age = 53.2 ± 8.6 years). The intervention consisted of 2x1 hours/week of supervised aerobic and strength training and 2x1 hours/week of Nordic or power walking. At baseline and at 6-month follow-up, volumetric brain measures were derived from 3D T1-weighted 3T magnetic resonance imaging scans, including hippocampal (subfield) volume (FreeSurfer), cortical thickness (CAT12), and grey matter volume (voxel-based morphometry, CAT12). Physical fitness was measured with a cardiopulmonary exercise test. Memory functioning was measured with the Hopkins Verbal Learning Test-Revised (HVLT-R total recall) and the Wordlist Learning test of an online cognitive test battery, the Amsterdam Cognition Scan (ACS Wordlist Learning). An explorative analysis was conducted in highly fatigued patients (score of ≥ 39 on the 'fatigue' symptom scale of the European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire), as previous research in this dataset has shown that the intervention improved cognition only in these patients. Results: Multiple regression analyses and voxel-based morphometry revealed no significant intervention effects on brain volume, although at baseline higher physical fitness was significantly related to larger brain volume (e.g., total hippocampal volume: R = 0.32, B = 21.7 mm³, 95% CI = 3.0 to 40.4). Subgroup analyses showed an intervention effect in highly fatigued patients. Unexpectedly, these patients had significant reductions in hippocampal volume compared to the control group (e.g., total hippocampal volume: B = −52.3 mm³, 95% CI = −100.3 to −4.4), which were related to improved memory functioning (HVLT-R total recall: B = −0.022, 95% CI = −0.039 to −0.005; ACS Wordlist Learning: B = −0.039, 95% CI = −0.062 to −0.015). Conclusions: No exercise intervention effects were found on hippocampal volume, hippocampal subfield volumes, cortical thickness or grey matter volume for the entire intervention group. Contrary to what we expected, in highly fatigued patients a reduction in hippocampal volume was found after the intervention, which was related to improved memory functioning. These results suggest that physical fitness may benefit cognition in specific groups and stress the importance of further research into the biological basis of this finding.
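As a hedged illustration of the regression analysis described above, the sketch below fits an ordinary least squares model of follow-up hippocampal volume on group, adjusted for baseline volume; the file and column names are hypothetical and the model is a simplification of the study's analyses.

```python
# Simplified intervention-effect regression; column names and file are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# One row per patient: hypothetical FreeSurfer total hippocampal volumes (mm^3)
# at baseline and 6 months, and group coded 1 = exercise intervention, 0 = control.
df = pd.read_csv("hippocampal_volumes.csv")

model = smf.ols("hippocampus_6mo ~ group + hippocampus_baseline", data=df).fit()

# The 'group' coefficient (B, in mm^3) and its 95% confidence interval estimate
# the intervention effect, analogous to the B and CI values quoted above.
print(model.params["group"])
print(model.conf_int().loc["group"])
```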
Being able to classify experienced emotions by identifying distinct neural responses has tremendous value both in fundamental research (e.g. positive psychology, emotion regulation theory) and in applied settings (clinical, healthcare, commercial). We aimed to decode the neural representation of the experience of two discrete emotions, sadness and disgust, in the absence of differences in valence and arousal. In a passive viewing paradigm, we showed emotion-evoking images from the International Affective Picture System to participants while recording their EEG. We then selected a subset of those images that were distinct in evoking either sadness or disgust (20 for each), yet were indistinguishable on normative valence and arousal. Event-related potential analysis of 69 participants showed differential responses in the N1 and EPN components, and a support-vector machine classifier was able to classify whole-brain EEG patterns of sadness and disgust experiences with above-chance accuracy (58%). These results support and expand on earlier findings that discrete emotions do have differential neural responses that are not caused by differences in valence or arousal.
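The sketch below gives a minimal, hedged example of the classification step described above: a linear support-vector machine with cross-validation applied to whole-brain EEG feature vectors; the array shapes and placeholder data are assumptions and do not reproduce the authors' pipeline.

```python
# Cross-validated SVM on EEG feature vectors; placeholder data, so ~chance accuracy.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# X: trials x (channels * time points) EEG amplitudes; y: 0 = sadness, 1 = disgust.
rng = np.random.default_rng(0)
X = rng.normal(size=(160, 64 * 128))
y = np.repeat([0, 1], 80)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {accuracy:.2f}")
```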
This study presents an automated method for detecting and measuring the apex head thickness of tomato plants, a critical phenotypic trait associated with plant health, fruit development, and yield forecasting. Because the apex is sensitive to physical contact, non-invasive monitoring is essential. This paper addresses the demand among Dutch growers for automated, contactless systems. Our approach integrates deep learning models (YOLO and Faster R-CNN) with RGB-D camera imaging to enable accurate, scalable, and non-invasive measurement in greenhouse environments. A dataset of 600 RGB-D images, captured in a controlled greenhouse, was fully preprocessed, annotated, and augmented for optimal training. Experimental results show that YOLOv8n achieved superior performance with a precision of 91.2 %, recall of 86.7 %, and an Intersection over Union (IoU) score of 89.4 %. Other models, such as YOLOv9t, YOLOv10n, YOLOv11n, and Faster R-CNN, demonstrated lower precision scores of 83.6 %, 74.6 %, 75.4 %, and 78 %, respectively. Their IoU scores were also lower, indicating less reliable detection. This research establishes a robust, real-time method for precision agriculture through automated apex head thickness measurement.
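For reference, the function below computes the standard Intersection over Union (IoU) measure used above to score agreement between predicted and ground-truth bounding boxes; the box format and example values are illustrative and not taken from the study.

```python
# Standard IoU between two axis-aligned boxes given as (x_min, y_min, x_max, y_max) in pixels.
def iou(box_a, box_b) -> float:
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1)
             - inter)
    return inter / union if union > 0 else 0.0

print(iou((10, 10, 50, 60), (20, 15, 55, 65)))  # partially overlapping example boxes
```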
The colour-word Stroop task and the picture-word interference (PWI) task have been used extensively to study the functional processes underlying spoken word production. One of the consistent behavioural effects in both tasks is the Stroop-like effect: the reaction time (RT) is longer on incongruent trials than on congruent trials. The effect in the Stroop task is usually linked to word planning, whereas the effect in the PWI task is associated with either word planning or perceptual encoding. To adjudicate between the word planning and perceptual encoding accounts of the effect in PWI, we conducted an EEG experiment consisting of three tasks: a standard colour-word Stroop task (three colours), a standard PWI task (39 pictures), and a Stroop-like version of the PWI task (three pictures). Participants overtly named the colours and pictures while their EEG was recorded. A Stroop-like effect in RTs was observed in all three tasks. In the Stroop, Stroop-like PWI, and standard PWI tasks, ERPs at centro-parietal sensors started to deflect negatively for incongruent relative to congruent stimuli around 350 ms after stimulus onset: an N400 effect. No early differences were found in the PWI tasks. The onset of the Stroop-like effect at about 350 ms in all three tasks links the effect to word planning rather than perceptual encoding, which has been estimated in the literature to be completed around 200-250 ms after stimulus onset. We conclude that the Stroop-like effect arises during word planning in both Stroop and PWI.
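As a hedged illustration of the ERP comparison described above, the sketch below computes the incongruent-minus-congruent difference wave at a centro-parietal sensor with MNE-Python; the epochs file, condition labels, and channel are assumptions, and the study's own preprocessing is not reproduced.

```python
# Difference-wave sketch for the congruency effect; file and labels are hypothetical.
import mne

# epochs: an mne.Epochs object with events labelled "congruent" and "incongruent",
# assumed to have been created from the naming-task EEG.
epochs = mne.read_epochs("stroop_pwi-epo.fif")

incongruent = epochs["incongruent"].average()
congruent = epochs["congruent"].average()

# Incongruent minus congruent; a sustained negativity from roughly 350 ms onward
# at centro-parietal sites would correspond to the N400 effect described above.
n400_effect = mne.combine_evoked([incongruent, congruent], weights=[1, -1])
n400_effect.plot(picks="Pz")
```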
We propose a novel deception detection system based on Rapid Serial Visual Presentation (RSVP). One motivation for the new method is to present stimuli on the fringe of awareness, such that it is more difficult for deceivers to confound the deception test using countermeasures. The proposed system is able to detect identity deception (using the first names of participants) with a 100% hit rate (at an alpha level of 0.05). To achieve this, we extended classic Event-Related Potential (ERP) techniques (such as peak-to-peak analysis) by applying randomisation, a form of Monte Carlo resampling, which we used to detect deception at the individual level. To make deployment of the system simple and rapid, we utilised data from only three electrodes: Fz, Cz and Pz. We then combined data from the three electrodes using Fisher's method, so that each participant was assigned a single p-value representing the combined probability that that participant was being deceptive. We also present subliminal salience search as a general method to determine what participants find salient, by detecting breakthrough into conscious awareness using EEG.
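The sketch below illustrates, under stated assumptions, the statistical core described above: a randomisation (Monte Carlo resampling) test on a per-electrode ERP measure, followed by Fisher's method (via SciPy) to combine the Fz, Cz and Pz p-values into a single per-participant p-value; the data are placeholders and the ERP measure itself is not implemented here.

```python
# Randomisation test per electrode + Fisher's method across electrodes; placeholder data.
import numpy as np
from scipy.stats import combine_pvalues

def randomisation_p(probe, irrelevant, n_perm=10_000, seed=0):
    """One-sided p-value that the ERP measure (e.g. peak-to-peak amplitude) is larger
    for probe trials than for irrelevant trials, estimated by label shuffling."""
    rng = np.random.default_rng(seed)
    observed = probe.mean() - irrelevant.mean()
    pooled = np.concatenate([probe, irrelevant])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = perm[:len(probe)].mean() - perm[len(probe):].mean()
        count += diff >= observed
    return (count + 1) / (n_perm + 1)

# Placeholder per-electrode amplitudes for one participant (Fz, Cz, Pz).
rng = np.random.default_rng(1)
p_values = [randomisation_p(rng.normal(0.5, 1.0, 30), rng.normal(0.0, 1.0, 30))
            for _ in ("Fz", "Cz", "Pz")]

# Fisher's method combines the three electrode-wise p-values into a single
# per-participant probability, compared against alpha = 0.05.
statistic, p_combined = combine_pvalues(p_values, method="fisher")
print(p_combined)
```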