The historically developed practice of learning to play a musical instrument from notes instead of by imitation or improvisation makes it possible to contrast two types of skilled musicians characterized not only by dissimilar performance practices, but also by disparate methods of audiomotor learning. In a recent fMRI study comparing these two groups of musicians while they either imagined playing along with a recording or covertly assessed the quality of the performance, we observed activation of a right-hemisphere network of posterior superior parietal and dorsal premotor cortices in improvising musicians, indicating more efficient audiomotor transformation. In the present study, we investigated the detailed performance characteristics underlying the ability of both groups of musicians to replicate music on the basis of aural perception alone. Twenty-two classically trained improvising and score-dependent musicians listened to short, unfamiliar two-part excerpts presented over headphones. They played along with the excerpts or replicated them by ear on a digital piano, either with or without aural feedback. In addition, they were asked to harmonize some of the excerpts or to transpose them, either to a different key or to the relative minor. MIDI recordings of their performances were compared with recordings of the aural model. Concordance was expressed as an audiomotor alignment score computed with the help of music information retrieval algorithms. Alignment scores differed significantly between groups, voices, and tasks. The present study demonstrates the superior ability of improvising musicians to replicate both the pitch and rhythm of aurally perceived music at the keyboard, not only in the original key, but also in other tonalities. Taken together with the enhanced activation of the right dorsal frontoparietal network found in our previous fMRI study, these results underscore the conclusion that the practice of improvising music can be associated with enhanced audiomotor transformation in response to aurally perceived music.
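The abstract does not detail how the audiomotor alignment score was computed. As a rough illustration only, the sketch below shows one way such a score could be derived with a standard music information retrieval technique: dynamic time warping (DTW) over (onset, pitch) sequences extracted from the model and performance MIDI files. The file names, cost function, onset weight, and normalization are illustrative assumptions, not the study's actual algorithm.

```python
import numpy as np
import pretty_midi

ONSET_WEIGHT = 2.0  # assumed trade-off between rhythm and pitch errors

def note_events(path):
    """Sorted (onset_time, pitch) pairs from the first instrument track."""
    midi = pretty_midi.PrettyMIDI(path)
    notes = sorted(midi.instruments[0].notes, key=lambda n: n.start)
    return [(n.start, n.pitch) for n in notes]

def alignment_score(model_path, performance_path):
    """DTW distance between two note sequences, mapped to a 0..1 score."""
    a, b = note_events(model_path), note_events(performance_path)
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i, (ta, pa) in enumerate(a, start=1):
        for j, (tb, pb) in enumerate(b, start=1):
            # Local cost: pitch mismatch plus weighted onset mismatch.
            # (For transposition tasks, intervals between successive
            # notes would be compared instead of absolute pitches.)
            cost = abs(pa - pb) + ONSET_WEIGHT * abs(ta - tb)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Normalize by sequence lengths so excerpts of different sizes compare.
    distance = D[len(a), len(b)] / (len(a) + len(b))
    return 1.0 / (1.0 + distance)  # 1.0 = perfect replication

# Example: score = alignment_score("model.mid", "performance.mid")
```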
Communication between healthcare professionals and deaf patients has been particularly challenging during the COVID-19 pandemic. We explored the possibility of automatically translating phrases that are frequently used in the diagnosis and treatment of hospital patients, in particular phrases related to COVID-19, from Dutch or English into Dutch Sign Language (NGT). The prototype system we developed displays translations either by means of pre-recorded videos featuring a deaf human signer (for a limited number of sentences) or by means of animations featuring a computer-generated signing avatar (for a larger, though still restricted, number of sentences). We evaluated the comprehensibility of the signing avatar compared to the human signer. We found that, while individual signs produced by the avatar are recognized correctly almost as frequently as those produced by a human, sentence comprehension rates and clarity scores for the avatar are substantially lower than for the human signer. We identify several concrete limitations of the JASigning avatar engine that underlies our system: it currently does not offer sufficient control over mouth shapes, over the relative speed and intensity of signs within a sentence (prosody), or over the transitions between signs. These limitations need to be overcome in future work for the engine to become usable in practice.
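As a rough illustration of the two-tier output described above, the sketch below shows one plausible dispatch strategy, assuming the system prefers the human-signer video when one is available and falls back to the avatar animation otherwise. All identifiers, phrases, and file paths here are hypothetical placeholders; the prototype's actual interface is not described in the abstract.

```python
from typing import Optional

VIDEO_CLIPS = {           # phrases covered by pre-recorded human videos
    "Do you have a fever?": "videos/fever_question.mp4",
}
AVATAR_SCRIPTS = {        # larger set covered by avatar animation scripts
    "Do you have a fever?": "sigml/fever_question.sigml",
    "You will be tested for COVID-19.": "sigml/covid_test.sigml",
}

def translate(phrase: str) -> Optional[str]:
    """Return the preferred rendering for a phrase, or None if uncovered."""
    if phrase in VIDEO_CLIPS:
        # Human-signer video: higher comprehensibility in the evaluation.
        return "video:" + VIDEO_CLIPS[phrase]
    if phrase in AVATAR_SCRIPTS:
        # Avatar fallback: broader coverage, lower sentence comprehension.
        return "avatar:" + AVATAR_SCRIPTS[phrase]
    return None  # phrase not covered by the system
```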