As the EEG inverse problem does not have a unique solution, the sources reconstructed from EEG and their connectivity properties depend on forward and inverse modeling parameters such as the choice of an anatomical template and electrical model, prior assumptions on the sources, and further implementational details. For source connectivity analysis to serve as a reliable research tool, results need to be stable across the range of standard estimation routines. Using resting-state EEG recordings of N=65 participants acquired within two studies, we present the first comprehensive assessment of the consistency of EEG source localization and functional/effective connectivity metrics across two anatomical templates (ICBM152 and Colin27), three electrical models (BEM, FEM and spherical harmonics expansions), three inverse methods (WMNE, eLORETA and LCMV), and three software implementations (Brainstorm, Fieldtrip and our own toolbox). Source localizations were more stable across reconstruction pipelines than subsequent estimates of functional connectivity, while effective connectivity estimates were the least consistent. All results were relatively unaffected by the choice of the electrical head model, whereas the choice of the inverse method and source imaging package induced considerable variability. In particular, a relatively strong difference was found between LCMV beamformer solutions on the one hand and eLORETA/WMNE distributed inverse solutions on the other. We also observed a gradual decrease of consistency when results were compared between studies, within individual participants, and between individual participants. To provide reliable findings in the face of the observed variability, additional simulations involving interacting brain sources are required. Meanwhile, we encourage verification of the obtained results using more than one source imaging procedure.
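To illustrate the kind of distributed inverse solution the abstract refers to, a regularised weighted minimum-norm estimate (WMNE) can be sketched in a few lines of NumPy. This is a minimal didactic sketch, not any of the pipelines compared in the study: the leadfield is random, the depth weighting is simple column-norm scaling, and the regularisation value is chosen ad hoc.

```python
import numpy as np

rng = np.random.default_rng(0)
n_chan, n_src = 32, 200

# Hypothetical leadfield (forward model): sensors x candidate sources.
L = rng.standard_normal((n_chan, n_src))

# Depth weighting: scale each source column to unit norm (a simple WMNE weighting).
w = np.linalg.norm(L, axis=0)
Lw = L / w

# Simulate sensor data from a single active source plus a little noise.
j_true = np.zeros(n_src)
j_true[17] = 1.0
m = L @ j_true + 0.01 * rng.standard_normal(n_chan)

# Tikhonov-regularised minimum-norm inverse operator.
lam = 0.1
K = Lw.T @ np.linalg.inv(Lw @ Lw.T + lam * np.eye(n_chan))
j_hat = (K @ m) / w  # undo the depth weighting

print(np.abs(j_hat).argmax())  # index of the strongest reconstructed source
```

Because the problem is underdetermined (32 channels, 200 sources), the estimate spreads energy over many sources; the true source only dominates, it is not recovered exactly. That non-uniqueness is precisely why the choice of prior (weighting, regularisation) matters, as the abstract argues.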
To help us live in the three-dimensional world, our brain integrates incoming spatial information into reference frames, which are based either on our own body (egocentric) or independent of it (allocentric). Such frames, however, may be crucial not only when interacting with the visual world, but also in language comprehension, since even the simplest utterance can be understood from different perspectives. While significant progress has been made in elucidating how linguistic factors, such as pronouns, influence reference frame adoption, the neural underpinnings of this ability are largely unknown. Building on the neural reuse framework, this study tested the hypothesis that reference frame processing in language comprehension involves mechanisms used in navigation and spatial cognition. We recorded EEG activity in 28 healthy volunteers to identify spatiotemporal dynamics in (1) spatial navigation and (2) a language comprehension task (sentence-picture matching). By decomposing the EEG signal into a set of maximally independent activity patterns, we localised and identified a subset of components that best characterised perspective-taking in both domains. Remarkably, we found individual co-variability across these tasks: people's strategies in spatial navigation are also reflected in their construction of sentential perspective. Furthermore, a distributed network of cortical generators of such strategy-dependent activity responded not only in navigation but also in sentence comprehension. We thus report, for the first time, evidence for shared brain mechanisms across these two domains, advancing our understanding of how language interacts with other cognitive systems and of the individual differences shaping comprehension. © 2017 Elsevier Inc.
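The decomposition step, separating multichannel EEG into maximally independent activity patterns, is typically done with independent component analysis (ICA). The following is a minimal sketch on synthetic data, assuming scikit-learn's FastICA; the source waveforms, mixing matrix, and channel count are illustrative, not the study's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Two hypothetical source time courses: a sinusoid and a sawtooth.
s1 = np.sin(2 * np.pi * 1.3 * t)
s2 = 2 * (0.7 * t % 1.0) - 1.0
S = np.column_stack([s1, s2])

# Mix the sources into four simulated "electrodes" and add sensor noise.
A = rng.standard_normal((2, 4))
X = S @ A + 0.02 * rng.standard_normal((2000, 4))

# Unmix into maximally independent components.
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)  # shape: (n_samples, n_components)

# Each recovered component should match one true source up to sign and scale.
corr = np.corrcoef(S.T, S_hat.T)[:2, 2:]
print(np.abs(corr).max(axis=1))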
Children learn new words and word forms with ease, often acquiring a new word after very few repetitions. Recent neurophysiological research on word form acquisition in adults indicates that novel words can be acquired within minutes of repetitive exposure, regardless of whether the listener attends to the speech input. Although it is well known that children surpass adults in language acquisition, the developmental aspects of such rapid and automatic neural acquisition mechanisms remain unexplored. To address this open question, we used magnetoencephalography (MEG) to scrutinise brain dynamics elicited by spoken words and word-like sounds in healthy monolingual (Danish) children throughout a 20-min repetitive passive exposure session. We found rapid neural dynamics manifested as an enhancement of early (~100 ms) brain activity over the short exposure session, with distinct spatiotemporal patterns for different novel sounds. For novel Danish word forms, signs of such enhancement were seen in the left temporal regions only, suggesting reliance on pre-existing language circuits for the acquisition of novel word forms with native phonology. In contrast, exposure both to novel word forms with non-native phonology and to novel non-speech sounds led to activity enhancement in both left and right hemispheres, suggesting that more widespread cortical networks contribute to the build-up of memory traces for non-native and non-speech sounds. Similar studies in adults have previously reported slower (~15–25 min, as opposed to 4 min in the present study) or non-existent neural dynamics for non-native sound acquisition, which might indicate a higher degree of plasticity in children's brains. Overall, the results indicate a rapid and highly plastic mechanism for the dynamic build-up of memory traces for novel acoustic information in children's brains that operates automatically and recruits bilateral temporal cortical circuits.
Autism spectrum conditions (ASC) are characterised by deficits in understanding and expressing emotions and are frequently accompanied by alexithymia, a difficulty in understanding and expressing emotion words. Words are differentially represented in the brain according to their semantic category, and these difficulties in ASC predict reduced activation to emotion-related words in limbic structures crucial for affective processing. Semantic theories view 'emotion actions' as critical for learning the semantic relationship between a word and the emotion it describes, such that emotion words typically activate the cortical motor systems involved in expressing emotion actions such as facial expressions. As ASC are also characterised by motor deficits and atypical brain structure and function in these regions, motor structures would likewise be expected to show reduced activation during emotion-semantic processing. Here we used event-related fMRI to compare passive processing of emotion words with that of abstract verbs and animal names in typically developing controls and individuals with ASC. Brain activation in ASC was specifically reduced for emotion words, but not for matched control words, in motor areas and cingulate cortex. The degree of activation evoked by emotion words in the motor system was also associated with the extent of autistic traits, as revealed by the Autism Spectrum Quotient. We suggest that hypoactivation of motor and limbic regions during emotion-word processing may underlie difficulties in processing emotional language in ASC. The role that sensorimotor systems and their connections might play in the affective and social-communication difficulties of ASC is discussed.