Associative acquisition of word meaning by trial-and-error learning
Embodied cognition theory implies that speech is largely grounded in the body's motor and sensory experience. A question crucial for understanding the origin of language is how the brain transforms sensory-motor experience into access to word semantic representations. We developed an auditory-motor experimental procedure that allowed us to investigate the neural underpinnings of word meaning acquisition through an associative "trial-and-error" learning paradigm that mimics basic aspects of natural language learning. Participants were presented with eight pseudowords; during learning blocks, four of them were assigned to specific body part movements – participants performed actions with one of their left or right extremities and received feedback. The other four pseudowords did not require actions and were used as controls. The magnetoencephalogram was recorded during passive listening to the pseudowords before and after the learning blocks. The cortical sources of the magnetic evoked responses were reconstructed using distributed source modeling. Learning novel word meanings through word-action associations selectively increased neural specificity for these words in the auditory parabelt areas responsible for spectrotemporal analysis, as well as in articulatory areas, both located in the left hemisphere. The extent of neural change was linked to the degree of language learning, specifically implicating the left perisylvian cortex in speech learning success.
The Abstract book contains the abstracts of the poster presentations of the participants of the Methodological School "Methods of Data Processing in EEG and MEG", Moscow, 16–30 April 2013. The School was devoted to the theoretical and practical aspects of contemporary methods for the dynamic mapping of brain activity through the analysis of multichannel MEG and EEG.
We present Visbrain, an open-source Python package that offers a comprehensive visualization suite for neuroimaging and electrophysiological brain data. Visbrain consists of two levels of abstraction: (1) objects, which represent highly configurable neuro-oriented visual primitives (3D brain, source connectivity, etc.), and (2) graphical user interfaces for higher-level interactions. The object level offers flexible and modular tools to produce and automate the production of figures, using an approach similar to that of Matplotlib with subplots. The second level visually connects these objects by controlling properties and interactions through graphical interfaces. The current release of Visbrain (version 0.4.2) contains 14 different objects and three responsive graphical user interfaces built with PyQt: Signal, for the inspection of time series and spectral properties; Brain, for any type of visualization involving a 3D brain; and Sleep, for polysomnographic data visualization and sleep analysis. Each module has been developed in tight collaboration with end-users, i.e., primarily neuroscientists and domain experts, who bring their experience to make Visbrain as transparent as possible to the recording modalities (e.g., intracranial EEG, scalp EEG, MEG, anatomical and functional MRI). Visbrain is developed on top of VisPy, a Python package providing high-performance 2D and 3D visualization by leveraging the computational power of the graphics card. Visbrain is available on GitHub and comes with documentation, examples, and datasets (http://visbrain.org).
Neuronal oscillations are ubiquitous in the human brain and are implicated in virtually all brain functions. Although they can be described by a prominent peak in the power spectrum, their waveform is not necessarily sinusoidal and often shows a rather complex morphology. Both frequency- and time-domain descriptions of such non-sinusoidal neuronal oscillations can be utilized. However, in non-invasive EEG/MEG recordings the waveform of oscillations often takes a sinusoidal shape, which in turn leads to a rather oversimplified view of oscillatory processes. In this study, we show in simulations how spatial synchronization can mask non-sinusoidal features of the underlying rhythmic neuronal processes. Consequently, the degree of non-sinusoidality can serve as a measure of spatial synchronization. To confirm this empirically, we show that a mixture of EEG components is indeed associated with more sinusoidal oscillations compared to the waveform of oscillations in each constituent component. Using simulations, we also show that spatial mixing of non-sinusoidal neuronal signals strongly affects the amplitude ratio of the spectral harmonics constituting the waveform. Finally, our simulations show how spatial mixing can affect the strength and even the direction of the amplitude coupling between constituent neuronal harmonics at different frequencies. Validating these simulations, we also demonstrate these effects in real EEG recordings. Our findings have far-reaching implications for the neurophysiological interpretation of spectral profiles and cross-frequency interactions, as well as for the unequivocal determination of oscillatory phase.
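The masking effect can be sketched in a few lines of numpy (a toy simulation with illustrative parameters, not the study's actual code). Each simulated source has a 10 Hz fundamental that is phase-synchronized across sources plus a 20 Hz harmonic whose phase varies from source to source; in the mixture the fundamentals add coherently while the harmonics add incoherently, so the harmonic-to-fundamental ratio shrinks and the summed signal looks more sinusoidal:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur, n_src = 250.0, 4.0, 50
t = np.arange(int(fs * dur)) / fs

# Each source: a 10 Hz fundamental (synchronized across sources, phase 0)
# plus a 20 Hz harmonic whose phase offset differs from source to source.
harm_phase = rng.uniform(0, 2 * np.pi, n_src)
sources = np.cos(2*np.pi*10*t) + 0.5*np.cos(2*np.pi*20*t + harm_phase[:, None])

def harmonic_ratio(x):
    """Spectral amplitude at 20 Hz divided by amplitude at 10 Hz."""
    spec = np.abs(np.fft.rfft(x))
    f = np.fft.rfftfreq(x.size, 1/fs)
    return spec[np.argmin(np.abs(f - 20))] / spec[np.argmin(np.abs(f - 10))]

ratio_each = np.mean([harmonic_ratio(s) for s in sources])  # 0.5 per source
ratio_mix = harmonic_ratio(sources.sum(axis=0))             # much smaller
```

The harmonic-to-fundamental ratio of the mixture is markedly lower than that of any constituent source, i.e., spatial mixing has made the compound signal more sinusoidal, mirroring the mechanism described above.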
Speech is largely grounded in the body's motor and sensory experience. A question crucial for understanding the brain mechanisms of human language is how the brain transforms sensory-motor experience into word meaning. Multiple lines of evidence suggest that natural language acquisition involves biological mechanisms of associative learning. The ability to quickly acquire word–picture associations has been shown to depend on reorganization in neocortical networks including the left temporal area, especially the left temporal pole, as well as temporoparietal, premotor, and prefrontal regions.
We developed an auditory-motor experimental procedure that allowed us to investigate the neural underpinnings of word meaning acquisition through associative "trial-and-error" learning that mimics important aspects of natural word learning. Participants were presented with eight pseudowords; during the course of learning, four of them were assigned to specific body part movements – participants performed actions with one of their left or right extremities and received feedback. The other four pseudowords did not require actions and were used as controls. The magnetoencephalogram was recorded during passive listening to the pseudowords before and after learning. The cortical sources of the magnetic evoked responses were reconstructed using distributed source modeling.
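The logic of such a trial-and-error association paradigm can be sketched with a toy simulated learner (this is purely illustrative: the word labels, the five response options, and the simple win-stay rule are assumptions for the sketch, not the actual experimental protocol or code):

```python
import numpy as np

rng = np.random.default_rng(0)
actions = ['left hand', 'right hand', 'left foot', 'right foot', 'none']
# hypothetical mapping: four "action" pseudowords, four controls ('none')
truth = {f'word{i}': actions[i] for i in range(4)}
truth.update({f'word{i}': 'none' for i in range(4, 8)})

learned = {}        # word -> action, filled in once feedback confirms a guess
history = []
for trial in range(400):
    word = rng.choice(list(truth))
    guess = learned.get(word, rng.choice(actions))  # explore if still unknown
    correct = guess == truth[word]
    if correct:
        learned[word] = guess   # win-stay: keep the rewarded association
    history.append(correct)

early_acc = np.mean(history[:50])    # near chance while exploring
late_acc = np.mean(history[-100:])   # approaches 1.0 once all pairs are learned
```

The accuracy trajectory, from chance-level guessing to stable correct responding, corresponds to the distinction drawn below between the initial learning stage and the stable-performance stage.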
We found a significant effect in the middle part of the STS/STG, which mostly comprises the auditory parabelt areas responsible for spectrotemporal analysis and the initial steps of word recognition. Processing of new words also activated the posterior opercular part of the inferior frontal gyrus, which is involved in subvocal rehearsal and articulatory coding of perceived speech sounds – a finding that emphasizes the role of articulatory sensory-motor experience in the acquisition of word meaning. Our analysis did not reveal significant effects in the temporal pole or in the temporoparietal regions.
Juxtaposing our findings with the current body of literature suggests that rooting word meaning in one's sensory-motor experience is an initial stage – a prerequisite for, but not sufficient for, its embedding into the full associative structure of semantic memory.
Taken together, our findings show that learning novel word meanings through word-action associations selectively increased neural specificity for these words in the auditory areas responsible for spectrotemporal analysis, as well as in articulatory areas, both located in the left hemisphere. The extent of neural change was linked to the degree of language learning, specifically implicating the left perisylvian cortex in learning success.
Recent theories of cognitive control place strong emphasis on theta oscillations in relation to action monitoring. Multiple EEG studies of cognitive control have revealed increased power of theta oscillations restricted to midfrontal areas, while a substantial body of functional connectivity data demonstrates that theta oscillations may be a carrier of informational exchange across multiple cortical regions. fMRI studies have revealed extensive distributed networks involved in cognitive control. Paradoxically, MEG has been considered almost insensitive to theta oscillations in such an experimental context. The functional role of such theta oscillations also remains debatable. An influential line of evidence links feedback-related theta oscillations to two types of prediction errors (unsigned and signed), but this distinction has not been tested during trial-and-error learning with theta activity measured beyond the midfrontal cortex.
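The signed/unsigned distinction can be made concrete with a toy Rescorla-Wagner learner (all values are illustrative; this is a generic textbook construction, not the model used in the study):

```python
import numpy as np

rng = np.random.default_rng(3)
V, alpha = 0.5, 0.1               # value estimate and learning rate (illustrative)
signed_pe, unsigned_pe = [], []
for trial in range(500):
    r = 1.0 if rng.random() < 0.8 else 0.0   # feedback: correct on ~80% of trials
    d = r - V                     # signed prediction error (negative after errors)
    signed_pe.append(d)
    unsigned_pe.append(abs(d))    # unsigned component: "surprise" regardless of sign
    V += alpha * d                # Rescorla-Wagner value update

# after learning, V hovers near the reward rate (~0.8): error feedback then
# yields strongly negative signed PEs, while correct feedback yields small
# positive ones; the unsigned PE tracks overall surprise for both outcomes
```

A non-differential theta response (similar for correct and error feedback) would mirror the unsigned component, whereas a response larger after errors than after correct trials would mirror the signed component.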
We recorded MEG while participants were engaged in trial-and-error learning within a novel multiple-choice behavioral task with complex stimulus-to-response mapping. Three conditions were analyzed: correct and erroneous trials during the initial stage of learning, as well as correct trials during stable performance. Sources of MEG activity were analyzed using the minimum-norm estimation method within the 4–6 Hz frequency range.
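The minimum-norm estimation step can be sketched in a few lines of numpy (a toy random leadfield rather than real MEG geometry; the channel counts and regularization choice are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n_sens, n_src = 64, 200
L = rng.normal(size=(n_sens, n_src))        # toy leadfield (sensors x sources)
j = np.zeros(n_src); j[100] = 1.0           # a single active cortical source
b = L @ j + 0.05*rng.normal(size=n_sens)    # simulated sensor measurement

lam = 0.1 * np.trace(L @ L.T) / n_sens      # illustrative regularization strength
# minimum-norm inverse operator: W = L^T (L L^T + lambda*I)^{-1}
W = L.T @ np.linalg.inv(L @ L.T + lam*np.eye(n_sens))
j_hat = W @ b                               # source estimate peaks at index 100
```

The estimate `j_hat` recovers the location of the simulated source; in the actual analysis the leadfield comes from a head model and the operator is applied to theta-band MEG data.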
We revealed a number of bilateral cortical areas that displayed theta-band responses to the feedback signal: in addition to the "classical" medial frontal areas (the anterior part of the medial cingulate cortex and the pre-supplementary motor area), this network included the insula and the auditory cortex, the frontal operculum and posterior inferior frontal gyrus, the premotor cortex, the paracentral lobule, and the posterior part of the medial cingulate cortex. Granger causality analysis revealed overall communication directed from lateral to medial sites. During the initial stage of trial-and-error learning, we observed a strong non-differential response to the feedback signal that reflected the unsigned component of the prediction error. The signed component of the prediction error was observed later – with greater theta activation after errors than after correct responses.
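The directionality analysis can be illustrated with a minimal Granger-causality sketch on synthetic signals (order-1 autoregressive models; this is a generic textbook construction in numpy, not the study's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x[t] = 0.5*x[t-1] + rng.normal()
    y[t] = 0.5*y[t-1] + 0.6*x[t-1] + rng.normal()   # x drives y at lag 1

def granger(src, dst):
    """Order-1 Granger causality: log ratio of restricted to full residual RSS."""
    target = dst[1:]
    # restricted model: dst predicted from its own past only
    rss_r = np.linalg.lstsq(dst[:-1, None], target, rcond=None)[1][0]
    # full model: dst predicted from its own past plus src's past
    full = np.column_stack([dst[:-1], src[:-1]])
    rss_f = np.linalg.lstsq(full, target, rcond=None)[1][0]
    return np.log(rss_r / rss_f)

gc_xy = granger(x, y)   # clearly positive: x's past improves prediction of y
gc_yx = granger(y, x)   # near zero: y's past adds nothing for x
```

An asymmetry of this kind, computed between theta-band source time courses, is what supports the lateral-to-medial direction of communication reported above.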
Thus, using MEG, we were able to reveal a distributed network of brain areas involved in feedback processing that included not only medial frontal areas, but also auditory areas, the insula, and lateral frontal and medial parietal areas. The data obtained confirm the existence of two components of the prediction error, and this distinction was evident across the entire network revealed.
The study was implemented in the framework of the Basic Research Program at the National Research University Higher School of Economics (HSE) in 2018.
Synchronization between oscillatory signals is considered to be one of the main mechanisms through which neuronal populations interact with each other. It is conventionally studied with mass-bivariate measures utilizing either sensor-to-sensor or voxel-to-voxel signals. However, none of these approaches aims at maximizing synchronization, especially when two multichannel datasets are present. Examples include cortico-muscular coherence (CMC), cortico-subcortical interactions, and hyperscanning (where electroencephalographic (EEG) or magnetoencephalographic (MEG) activity is recorded simultaneously from two or more subjects). For all of these cases, a method that could find two spatial projections maximizing the strength of synchronization would be desirable. Here we present such a method for the maximization of coherence between two sets of EEG/MEG/EMG (electromyographic)/LFP (local field potential) recordings. We refer to it as canonical coherence (caCOH). caCOH maximizes the absolute value of coherence between the two multivariate spaces in the frequency domain. This allows very fast optimization for many frequency bins. Apart from presenting the details of the caCOH algorithm, we test its efficacy with simulations using realistic head modelling and focus on the application of caCOH to the detection of cortico-muscular coherence. For this, we used diverse multichannel EEG and EMG recordings and demonstrate the ability of caCOH to extract complex patterns of CMC distributed across the spatial and frequency domains. Finally, we indicate other scenarios where caCOH can be used for the extraction of neuronal interactions.
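The core idea of maximizing coherence between two multichannel spaces can be sketched as canonical correlation on cross-spectral matrices (a toy numpy illustration with a single shared 10 Hz source and illustrative parameters throughout; this is not the published caCOH implementation). Whitening both spaces and taking the SVD of the whitened cross-spectrum yields the maximal attainable coherence as its largest singular value:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n_seg, seg_len = 200, 200, 200          # 1 s segments -> 1 Hz resolution
nx, ny, f0 = 4, 4, 10                       # channels per set, target frequency

t = np.arange(seg_len) / fs
ax, by = rng.normal(size=nx), rng.normal(size=ny)   # spatial mixing patterns

X = np.empty((n_seg, nx, seg_len))
Y = np.empty((n_seg, ny, seg_len))
for k in range(n_seg):
    s = np.sin(2*np.pi*f0*t + rng.uniform(0, 2*np.pi))  # shared 10 Hz source
    X[k] = ax[:, None]*s + 0.5*rng.normal(size=(nx, seg_len))
    Y[k] = by[:, None]*s + 0.5*rng.normal(size=(ny, seg_len))

# cross-spectral matrices at f0, averaged over segments
fx = np.fft.rfft(X)[:, :, f0]
fy = np.fft.rfft(Y)[:, :, f0]
Cxx = np.einsum('ki,kj->ij', fx, fx.conj()) / n_seg
Cyy = np.einsum('ki,kj->ij', fy, fy.conj()) / n_seg
Cxy = np.einsum('ki,kj->ij', fx, fy.conj()) / n_seg

def inv_sqrt(C):
    """Inverse matrix square root of a Hermitian positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return (V / np.sqrt(w)) @ V.conj().T

# whiten both spaces; the top singular value is the maximal coherence,
# and the singular vectors give the two spatial projections
M = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
coh_max = np.linalg.svd(M, compute_uv=False)[0]

# best single sensor pair, for comparison with the mass-bivariate approach
pair = np.abs(Cxy) / np.sqrt(np.outer(np.diag(Cxx).real, np.diag(Cyy).real))
```

By construction, the projected coherence `coh_max` is at least as large as that of any single sensor pair, which is precisely the advantage over mass-bivariate measures that the abstract highlights.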