There is a growing literature investigating the relationship between oscillatory neural dynamics, measured using electroencephalography (EEG) and/or magnetoencephalography (MEG), and sentence-level language comprehension. Recent proposals have suggested a strong link between predictive coding accounts of the hierarchical flow of information in the brain and oscillatory neural dynamics in the beta and gamma frequency ranges. We propose that findings relating beta and gamma oscillations to sentence-level language comprehension might be unified under such a predictive coding account. Our suggestion is that oscillatory activity in the beta frequency range may reflect both the active maintenance of the current network configuration responsible for representing the sentence-level meaning under construction, and the top-down propagation of predictions to hierarchically lower processing levels based on that representation. In addition, we suggest that oscillatory activity in the low- and middle-gamma range reflects the matching of top-down predictions with bottom-up linguistic input, while evoked high-gamma activity might reflect the propagation of bottom-up prediction errors to higher levels of the processing hierarchy. We also discuss some of the implications of this predictive coding framework, and we outline ideas for how these might be tested experimentally.
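To make the mapping proposed in this abstract concrete, the hierarchy can be written in the form of a standard predictive coding scheme. The equations below follow the widely used Rao and Ballard (1999) formulation; the notation is an illustrative sketch, not a formalism taken from the paper itself.

```latex
% Standard hierarchical predictive coding sketch (after Rao & Ballard, 1999).
% r_l: representation at level l; W_l: top-down weights; f: nonlinearity.
\epsilon_l = r_l - f(W_l \, r_{l+1})                      % prediction error at level l
E = \textstyle\sum_l \lVert \epsilon_l \rVert^2           % total prediction error
\dot{r}_{l+1} \propto -\,\partial E / \partial r_{l+1}    % higher levels update to reduce error
```

On the proposed mapping, the top-down prediction f(W_l r_{l+1}) would be conveyed in the beta band, the comparison of that prediction with the bottom-up input in the low- and middle-gamma range, and the feedforward routing of large prediction errors ε_l in high gamma.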
The relationship between the evoked responses (ERPs/ERFs) and the event-related changes in EEG/MEG power that can be observed during sentence-level language comprehension is as yet unclear. This study addresses a possible relationship between MEG power changes and the N400m component of the event-related field. Whole-head MEG was recorded while subjects listened to spoken sentences with incongruent (IC) or congruent (C) sentence endings. A clear N400m was observed over the left hemisphere and was larger for the IC sentences than for the C sentences. A time-frequency analysis of power revealed a decrease in alpha and beta power over the left hemisphere in roughly the same time range as the N400m for the IC relative to the C condition. A linear regression analysis revealed a positive linear relationship between N400m and beta power for the IC condition, but not for the C condition. No such linear relation was found between N400m and alpha power for either condition. The sources of the beta decrease were localized to the left inferior frontal gyrus (LIFG), a region known to be involved in semantic unification operations. One source of the N400m was estimated in the left superior temporal region, which has been related to lexical retrieval. We interpret our data within a framework in which beta oscillations are inversely related to the engagement of task-relevant brain networks. The source reconstructions of the beta power suppression and the N400m effect support the notion of dynamic communication between the LIFG and the left superior temporal region during language comprehension.
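A minimal sketch of the kind of time-frequency analysis this abstract describes, written with MNE-Python. The file name, condition labels, and parameter choices are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
import mne
from mne.time_frequency import tfr_morlet

# Epochs time-locked to the sentence-final word (hypothetical file and labels)
epochs = mne.read_epochs("sentence_endings-epo.fif")

freqs = np.arange(8.0, 31.0)   # covers alpha (8-12 Hz) and beta (13-30 Hz)
n_cycles = freqs / 2.0         # wavelet length scales with frequency

# Trial-averaged Morlet-wavelet power per condition
power_ic = tfr_morlet(epochs["IC"], freqs=freqs, n_cycles=n_cycles, return_itc=False)
power_c = tfr_morlet(epochs["C"], freqs=freqs, n_cycles=n_cycles, return_itc=False)

# IC-vs-C beta-band power difference (channels x times)
beta = (freqs >= 13) & (freqs <= 30)
beta_diff = (power_ic.data[:, beta, :] - power_c.data[:, beta, :]).mean(axis=1)

# The regression step would then relate one (beta power, N400m amplitude)
# pair per subject, e.g. with scipy.stats.linregress.
```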
"Speak the Future" presents a novel test case at the intersection of scientific innovation and public engagement. Leveraging the power of real-time AI image generation, the project empowers festival participants to verbally describe their visions for a sustainable and regenerative future. These descriptions are instantly transformed into captivating imagery using SDXL Turbo, fostering collective engagement and tangible visualisation of abstract sustainability concepts. This unique interplay of speech recognition, AI, and projection technology breaks new ground in public engagement methods. The project offers valuable insights into public perceptions and aspirations for sustainability, as well as understanding the effectiveness of AI-powered visualisation and regenerative applications of AI. Ultimately, this will serve as a springboard for PhD research that will aim to understand How AI can serve as a vehicle for crafting regenerative futures? By employing real-time AI image generation, the project directly tests its effectiveness in fostering public engagement with sustainable futures. Analysing participant interaction and feedback sheds light on how AI-powered visualisation tools can enhance comprehension and engagement. Furthermore, the project fosters public understanding and appreciation of research. The interactive and accessible nature of "Speak the Future" demystifies the research process, showcasing its relevance and impact on everyday life. Moreover, by directly involving the public in co-creating visual representations of their aspirations, the project builds an emotional connection and sense of ownership, potentially leading to continued engagement and action beyond the festival setting. "Speak the Future" promises to be a groundbreaking initiative, bridging the gap between scientific innovation and public engagement in sustainability discourse. By harnessing the power of AI for collective visualisation, the project not only gathers valuable data for researchers but also empowers the public to envision and work towards a brighter, more sustainable future.