Background: Children with speech sound disorders (SSD) are at higher risk of communication breakdown, but the impact of having an SSD may vary from child to child. Determining the severity of SSD helps speech-language therapists (SLTs) to recognise the extent of the problem and to identify and prioritise children who require intervention. Aims: This study aimed to identify severity factors for SSD in order to develop a multifactorial Speech Sound Disorder Severity Construct (SSDSC), using SLTs’ views and the International Classification of Functioning, Disability and Health (ICF). Method: An explorative, five-stage qualitative study addressed the research question: ‘How do SLTs determine the severity of SSD in children?’. A total of 91 SLTs from the Netherlands participated in data collection and analysis. The iterative process combined three qualitative research methodologies (thematic analysis [TA], constructivist grounded theory [CGT] and content analysis [CA]) to validate the results through method triangulation. Results: SLTs considered nine themes: intelligibility, speech accuracy, persistence, the child's perception, impact, communicative participation, concomitant factors, professional point of view, and environmental factors. The themes were summarised in three main severity factors: (I) speech accuracy, (II) the child's perception of the impact of their speech, and (III) intelligibility in communication. Other severity factors were concomitant factors and impact. Expertise and support were identified as facilitators or barriers that may worsen or relieve the severity of SSD. Conclusions: This study highlights the need for SLTs to rethink severity as more than a simplistic construct reflecting only speech accuracy. A broader, holistic approach to measuring severity is recommended.
The interplay between sound and vision is a key determinant of human perception. With the development of Virtual Reality (VR) technologies and their commercial applications, there is an emerging need to better understand how audio-visual signals manipulated in virtual environments influence perception and human behaviour. The current study addresses this challenge in simulated VR environments mirroring real-life scenarios. In particular, we investigated the parameters that might enhance perception, and thus VR experiences, when sound and vision are manipulated. A VR museum was created mimicking a real art gallery featuring Japanese paintings. Participants viewed the gallery through a Samsung Gear VR head-mounted display and could walk around freely. Half of the participants heard newly composed music clips during the VR gallery visit; the other half were exposed to the same environment with no music (control condition). The results showed that the music altered the way people engaged with, perceived and experienced the VR art gallery. Contrary to our expectation, the VR experience was liked more when no music was played. Perceived naturalness and presence were relatively high and did not differ significantly depending on whether music was played. Regression modelling further explored the relationships between the parameters hypothesised to influence the VR experience. The findings are summarised in a theoretical model. The study outcomes could inform the development of effective VR applications for art and entertainment.
INTRODUCTION: Delirium in critically ill patients is a common multifactorial disorder associated with various negative outcomes. It is assumed that sleep disturbances can increase the risk of delirium. This study hypothesized that implementing a protocol that reduces overall nocturnal sound levels improves quality of sleep and reduces the incidence of delirium in Intensive Care Unit (ICU) patients. METHODS: This interrupted time series study was performed in an adult mixed medical and surgical 24-bed ICU. A pre-intervention group of 211 patients was compared with a post-intervention group of 210 patients after implementation of a nocturnal sound-reduction protocol. Primary outcome measures were incidence of delirium, measured by the Intensive Care Delirium Screening Checklist (ICDSC), and quality of sleep, measured by the Richards-Campbell Sleep Questionnaire (RCSQ). Secondary outcome measures were use of sleep-inducing medication, delirium treatment medication, and patient-perceived nocturnal noise. RESULTS: A significant difference in slope in the percentage of patients with delirium was observed between the pre- and post-intervention periods (-3.7% per time period, p=0.02). Quality of sleep was unaffected (0.3 per time period, p=0.85). The post-intervention group used significantly less sleep-inducing medication (p<0.001). Nocturnal noise ratings improved after the intervention (median: 65, IQR: 50-80 versus 70, IQR: 60-80, p=0.02). CONCLUSIONS: The incidence of delirium in ICU patients was significantly reduced after implementation of a nocturnal sound-reduction protocol. However, reported sleep quality did not improve.
A substantial number of studies have addressed the influence of sound on human performance. In many of these, however, the large acoustic differences between experimental conditions prevent a direct translation of the results to realistic effects of room acoustic interventions. This review identifies those studies which can, in principle, be translated to (changes in) room acoustic parameters and adds to the knowledge about the influence of the indoor sound environment on people. The review procedure is based on the effect room acoustics can have on the relevant quantifiers of the sound environment in a room or space. A total of 272 papers containing empirical findings on the influence of sound or noise on some measure of human performance were found; of these, only 12 complied with this review's criteria. A conceptual framework is suggested based on the analysis of results, positioning the role of room acoustics in the influence of sound on task performance. Furthermore, valuable insights are presented that can be used in future studies on this topic. While the influence of the sound environment on performance is clearly an issue in many situations, evidence regarding the effectiveness of strategies to control the sound environment through room acoustic design is lacking and should be a focus area in future studies.
What did the landscape of your childhood sound like? What sounds do you hear outside? Which song reminds you of a loved one? Music connects, moves, broadens and stimulates. We cherish silence, or the murmur of the sea. Music guides us to our experiences, memories and desires, and it marks the special and everyday moments in our lives. We are intimately interwoven with sound and music. How do people in rural areas of the northern Netherlands, consciously or unconsciously, give music and sound a place in their lives and surroundings? And how can these experiences be translated into the work of musicians in the region? Walk along, in your mind, through the musical landscape and soundscape of people who could be your colleague, your neighbour or your music teacher.
In this paper, several meaningful audio icons from classic arcade games such as Pong, Donkey Kong, Mario World and Pac-Man are analyzed using the PRAAT speech-analysis software and music theory. The analysis results are used to describe how these examples of best-practice sound design obtain their meaning in the player's perception. Some aspects can be related to the use of tonal hierarchy (e.g. Donkey Kong and Mario World), an aspect of musical meaning rooted in Western culture. Other aspects are related to universal expressions of meaning, such as the theory of misattribution, prosody, vocalization and cross-modal perceptions such as brightness and the uncanny valley hypothesis. Recent studies in the field of cognitive neuroscience support the universal and meaningful potential of all these aspects. The relationship between music and language-related prosody, vocalization and phonology seems to be an especially successful design principle for universally meaningful music icons in game sound design.
Children with speech sound disorders (SSD) have difficulties producing speech due to problems with articulation, phonology, execution (e.g. dysarthria), planning (e.g. apraxia), orofacial anomalies (e.g. cleft palate) or hearing impairment (ASHA). How do children with speech sound disorders perform on language and motor (experimental) tests compared to typically developing children?
Predation risk is a major driver of the distribution of prey animals, which typically show strong responses to cues of predator presence. An unresolved question is whether naïve individuals respond to mimicked cues, and whether such cues can be used to deter prey. We investigated whether playback of wolf sounds induces fear responses in naïve ungulates in a human-dominated landscape from which wolves have been absent since their eradication in 1879. We conducted a playback experiment in mixed coniferous and broadleaved forest that harboured three cervid species and one suid species. At 36 locations, we played wolf sounds, sounds of local sheep or no sounds, consecutively and in random order, and recorded visit rate and group size using camera traps. Visit rates of cervids and wild boar showed a clear initial reduction in response to playback of both wolf and sheep sounds, but the type of response differed between sound, forest type and species. For naïve wild boar in particular, responses to predator cues depended on forest type. Effects on visit rate disappeared within 21 days. Group sizes in all species were unaffected by the sound treatment. Our findings suggest that the responses of naïve ungulates to wolf sound are species-specific, depend on forest type and wear off over time, indicating habituation. Before we can successfully deter ungulates using predator sound, we should further investigate how different forest types affect naïve ungulates' perception of these sounds, as responses to predator sound may depend on habitat characteristics.