Total publications in this section: 44
Article
Haufe S., Nikulin V., Müller K. et al. Neuroimage. 2013. Vol. 64. P. 120-133.

Information flow between brain areas is difficult to estimate from EEG measurements due to the presence of noise as well as due to volume conduction. We here test the ability of popular measures of effective connectivity to detect an underlying neuronal interaction from simulated EEG data, as well as the ability of commonly used inverse source reconstruction techniques to improve the connectivity estimation. We find that volume conduction severely limits the neurophysiological interpretability of sensor-space connectivity analyses. Moreover, it may generally lead to conflicting results depending on the connectivity measure and statistical testing approach used. In particular, we note that the application of Granger-causal (GC) measures combined with standard significance testing leads to the detection of spurious connectivity regardless of whether the analysis is performed on sensor-space data or on sources estimated using three different established inverse methods. This empirical result follows from the definition of GC. The phase-slope index (PSI) does not suffer from this theoretical limitation and therefore performs well on our simulated data. We develop a theoretical framework to characterize artifacts of volume conduction, which may still be present even in reconstructed source time series as zero-lag correlations, and to distinguish them from time-delayed brain interaction. Based on this theory we derive a procedure which suppresses the influence of volume conduction, but preserves effects related to time-lagged brain interaction in connectivity estimates. This is achieved by using time-reversed data as surrogates for statistical testing. We demonstrate that this robustification makes Granger-causal connectivity measures applicable to EEG data, achieving similar results as PSI. Integrating the insights of our study, we provide guidance for measuring brain interaction from EEG data. Software for generating benchmark data is made available.
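
As a rough illustration of the time-reversal idea described above, the sketch below compares a simple bivariate Granger-causal score on original and time-reversed signals; the least-squares AR fit, function names and model order are illustrative assumptions, not the authors' implementation.

import numpy as np

def gc_score(x, y, p=5):
    # Granger-causal score x -> y: log ratio of residual variances of the
    # restricted (past of y only) vs. full (past of y and x) AR model of order p.
    n = len(y)
    target = y[p:]
    past_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    past_xy = np.column_stack([past_y] + [x[p - k:n - k] for k in range(1, p + 1)])
    res_r = target - past_y @ np.linalg.lstsq(past_y, target, rcond=None)[0]
    res_f = target - past_xy @ np.linalg.lstsq(past_xy, target, rcond=None)[0]
    return np.log(res_r.var() / res_f.var())

def net_gc(x, y, p=5):
    return gc_score(x, y, p) - gc_score(y, x, p)

def time_reversed_net_gc(x, y, p=5):
    # Difference between net GC on original and time-reversed data: values near zero
    # suggest the apparent asymmetry reflects mixed noise rather than time-lagged interaction.
    return net_gc(x, y, p) - net_gc(x[::-1], y[::-1], p)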

Added: Oct 23, 2014
Article
Nikulin V., Jönsson E. G., Brismar T. Neuroimage. 2012. Vol. 61. No. 1. P. 162-169.

Although schizophrenia was previously associated with affected spatial neuronal synchronization, surprisingly little is known about the temporal dynamics of neuronal oscillations in this disease. However, given that the coordination of neuronal processes in time represents an essential aspect of practically all cognitive operations, it might be strongly affected in patients with schizophrenia. In the present study we aimed at quantifying long-range temporal correlations (LRTC) in patients (18 with schizophrenia; 3 with schizoaffective disorder) and 28 healthy control subjects matched for age and gender. Ongoing neuronal oscillations were recorded with multi-channel EEG during a rest condition. LRTC in the range of 5–50 s were analyzed with Detrended Fluctuation Analysis. The amplitude of neuronal oscillations in alpha and beta frequency ranges did not differ between patients and control subjects. However, LRTC were strongly attenuated in patients with schizophrenia in both alpha and beta frequency ranges. Moreover, the cross-frequency correlation between LRTC belonging to alpha and beta oscillations was stronger for patients than for healthy controls, indicating that similar neurophysiological processes affect neuronal dynamics in both frequency ranges. We believe that the attenuation of LRTC is most likely due to the increased variability in neuronal activity, which was previously hypothesized to underlie an excessive switching between the neuronal states in patients with schizophrenia. Attenuated LRTC might allow for more random associations between neuronal activations, which in turn might relate to the occurrence of thought disorders in schizophrenia.
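
For orientation, a minimal Detrended Fluctuation Analysis of an amplitude envelope might look like the sketch below (window lengths and linear detrending order are illustrative assumptions); the scaling exponent is the slope of log F(s) against log s, with values above 0.5 indicating LRTC.

import numpy as np

def dfa(envelope, scales):
    # Detrended Fluctuation Analysis of a 1-D amplitude envelope.
    # Returns the fluctuation function F(s) for each window length s (in samples).
    profile = np.cumsum(envelope - np.mean(envelope))
    fluct = []
    for s in scales:
        n_win = len(profile) // s
        windows = profile[:n_win * s].reshape(n_win, s)
        t = np.arange(s)
        # linear detrending within each window, then RMS of the residual
        rms = [np.sqrt(np.mean((w - np.polyval(np.polyfit(t, w, 1), t)) ** 2)) for w in windows]
        fluct.append(np.mean(rms))
    return np.array(fluct)

# scaling exponent: slope of log F(s) vs. log s
# exponent = np.polyfit(np.log(scales), np.log(dfa(envelope, scales)), 1)[0]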

Added: Oct 23, 2014
Article
Herrojo-Ruiz M. D., Brücke C., Nikulin V. et al. Neuroimage. 2014. Vol. 85. No. 2. P. 779-793.

Sequential behavior characterizes both simple everyday tasks, such as getting dressed, and complex skills, such as music performance. The basal ganglia (BG) play an important role in the learning of motor sequences. To study the contribution of the human BG to the initial encoding of sequence boundaries, we recorded local field potentials in the sensorimotor area of the internal globus pallidus (GPi) during the early acquisition of sensorimotor sequences in patients undergoing deep brain stimulation for dystonia. We demonstrated an anticipatory modulation of pallidal beta-band neuronal oscillations that was specific to sequence boundaries, as compared to within-sequence elements, and independent of both the movement parameters and the initiation/termination of ongoing movement. The modulation at sequence boundaries emerged with training, in parallel with skill learning, and correlated with the degree of long-range temporal correlations (LRTC) in the dynamics of ongoing beta-band amplitude oscillations. The implication is that LRTC of beta-band oscillations in the sensorimotor GPi might facilitate the emergence of beta power modulations by the sequence boundaries in parallel with sequence learning. Taken together, the results reveal the oscillatory mechanisms in the human BG that contribute at an initial learning phase to the hierarchical organization of sequential behavior as reflected in the formation of boundary-delimited representations of action sequences.

Added: Oct 30, 2020
Article
Kumral D., Sansal F., Cesnaite E. et al. Neuroimage. 2020. Vol. 207. P. 116373.

Variability of neural activity is regarded as a crucial feature of healthy brain function, and several neuroimaging approaches have been employed to assess it noninvasively. Studies on the variability of both evoked brain response and spontaneous brain signals have shown remarkable changes with aging but it is unclear if the different measures of brain signal variability – identified with either hemodynamic or electrophysiological methods – reflect the same underlying physiology. In this study, we aimed to explore age differences of spontaneous brain signal variability with two different imaging modalities (EEG, fMRI) in healthy younger (25 ± 3 years, N = 135) and older (67 ± 4 years, N = 54) adults. Consistent with the previous studies, we found lower blood oxygenation level dependent (BOLD) variability in the older subjects as well as less signal variability in the amplitude of low-frequency oscillations (1–12 Hz), measured in source space. These age-related reductions were mostly observed in the areas that overlap with the default mode network. Moreover, age-related increases of variability in the amplitude of beta-band frequency EEG oscillations (15–25 Hz) were seen predominantly in temporal brain regions. There were significant sex differences in EEG signal variability in various brain regions while no significant sex differences were observed in BOLD signal variability. Bivariate and multivariate correlation analyses revealed no significant associations between EEG- and fMRI-based variability measures. In summary, we show that both BOLD and EEG signal variability reflect aging-related processes but are likely to be dominated by different physiological origins, which relate differentially to age and sex.
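
One simple way to operationalise electrophysiological signal variability, in the spirit of the amplitude-based measures mentioned above, is the standard deviation of a band-limited amplitude envelope; the filter settings in this sketch are assumptions, not the paper's exact pipeline.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_amplitude_sd(x, fs, band):
    # Standard deviation of the band-limited amplitude envelope of a 1-D signal x.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
    envelope = np.abs(hilbert(filtfilt(b, a, x)))
    return np.std(envelope)

# e.g. variability of low-frequency and beta-band activity (hypothetical channel and sampling rate)
# sd_low = band_amplitude_sd(eeg_channel, fs=250, band=(1, 12))
# sd_beta = band_amplitude_sd(eeg_channel, fs=250, band=(15, 25))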

Added: Sep 16, 2020
Article
Egorova N., Shtyrov Y., Pulvermüller F. Neuroimage. 2015. In press.

Although language is a key tool for communication in social interaction, most studies in the neuroscience of language have focused on language structures such as words and sentences. Here, the neural correlates of speech acts, that is, the actions performed by using language, were investigated with functional magnetic resonance imaging (fMRI). Participants were shown videos, in which the same critical utterances were used in different communicative contexts, to Name objects, or to Request them from communication partners. Understanding of critical utterances as Requests was accompanied by activation in bilateral premotor, left inferior frontal and temporo-parietal cortical areas known to support action-related and social interactive knowledge. Naming, however, activated the left angular gyrus implicated in linking information about word forms and related reference objects mentioned in critical utterances. These findings show that the understanding of utterances as different communicative actions is reflected in distinct brain activation patterns, and thus suggest different neural substrates for different speech act types.

Added: Oct 23, 2015
Article
Vidaurre C., Nolte G., de Vries I. et al. Neuroimage. 2019. Vol. 201. P. 116009.

Synchronization between oscillatory signals is considered to be one of the main mechanisms through which neuronal populations interact with each other. It is conventionally studied with mass-bivariate measures utilizing either sensor-to-sensor or voxel-to-voxel signals. However, none of these approaches aims at maximizing synchronization, especially when two multichannel datasets are present. Examples include cortico-muscular coherence (CMC), cortico-subcortical interactions or hyperscanning (where electroencephalographic EEG/magnetoencephalographic MEG activity is recorded simultaneously from two or more subjects). For all of these cases, a method which could find two spatial projections maximizing the strength of synchronization would be desirable. Here we present such a method for the maximization of coherence between two sets of EEG/MEG/EMG (electromyographic)/LFP (local field potential) recordings. We refer to it as canonical Coherence (caCOH). caCOH maximizes the absolute value of the coherence between the two multivariate spaces in the frequency domain. This allows very fast optimization for many frequency bins. Apart from presenting details of the caCOH algorithm, we test its efficacy with simulations using realistic head modelling and focus on the application of caCOH to the detection of cortico-muscular coherence. For this, we used diverse multichannel EEG and EMG recordings and demonstrate the ability of caCOH to extract complex patterns of CMC distributed across spatial and frequency domains. Finally, we indicate other scenarios where caCOH can be used for the extraction of neuronal interactions.
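
The core computation can be viewed as a frequency-domain analogue of canonical correlation analysis; the sketch below maximizes absolute coherence given cross-spectral matrices at a single frequency and is an assumed simplification, not the published caCOH algorithm itself.

import numpy as np

def canonical_coherence(Cxx, Cyy, Cxy):
    # Cxx (n_x x n_x), Cyy (n_y x n_y): cross-spectral matrices of the two datasets at one frequency;
    # Cxy (n_x x n_y): cross-spectrum between them. Returns the maximal absolute coherence
    # and the two spatial projections achieving it.
    Lx = np.linalg.cholesky(Cxx)                                # whitening factors
    Ly = np.linalg.cholesky(Cyy)
    K = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).conj().T   # whitened cross-spectrum
    U, S, Vh = np.linalg.svd(K)
    a = np.linalg.solve(Lx.conj().T, U[:, 0])                   # projection for dataset 1 (e.g. EEG)
    b = np.linalg.solve(Ly.conj().T, Vh[0].conj())              # projection for dataset 2 (e.g. EMG)
    return S[0], a, b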

Added: Oct 25, 2019
Article
Bury G., García-Huéscar M., Bhattacharya J. et al. Neuroimage. 2019. Vol. 199. P. 704-717.

Behavioral adaptations during performance rely on predicting and evaluating the consequences of our actions through action monitoring. Previous studies revealed that proprioceptive and exteroceptive signals contribute to error-monitoring processes, which are implemented in the posterior medial frontal cortex. Interestingly, errors also trigger changes in autonomic nervous system activity such as pupil dilation or heartbeat deceleration. Yet, the contribution of implicit interoceptive signals of bodily states to error-monitoring during ongoing performance has been overlooked. This study investigated whether cardiovascular interoceptive signals influence the neural correlates of error processing during performance, with an emphasis on the early stages of error processing. We recorded musicians’ electroencephalography and electrocardiogram signals during the performance of highly-trained music pieces. Previous event-related potential (ERP) studies revealed that pitch errors during skilled musical performance are preceded by an error detection signal, the pre-error-negativity (preERN), and followed by a later error positivity (PE). In this study, by combining ERP, source localization and multivariate pattern classification analysis, we found that the error-minus-correct ERP waveform had an enhanced amplitude within 40–100 ms following errors in the systolic period of the cardiac cycle. This component could be decoded from single-trials, was dissociated from the preERN and PE, and stemmed from the inferior parietal cortex, which is a region implicated in cardiac autonomic regulation. In addition, the phase of the cardiac cycle influenced behavioral alterations resulting from errors, with a smaller post-error slowing and less perturbed velocity in keystrokes following pitch errors in the systole relative to the diastole phase of the cardiac cycle. Lastly, changes in the heart rate anticipated the upcoming occurrence of errors. This study provides the first evidence of preconscious visceral information modulating neural and behavioral responses related to early error monitoring during skilled performance.
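
As a minimal sketch of how events can be assigned to cardiac phases, the snippet below labels each keystroke by its delay from the preceding R-peak; the fixed 300-ms systole window and the variable names are simplifying assumptions rather than the study's actual procedure.

import numpy as np

def cardiac_phase_labels(event_times, r_peak_times, systole_window=0.3):
    # Label each event time (in seconds) as 'systole' or 'diastole' based on the time elapsed
    # since the preceding R-peak; assumes every event occurs after the first R-peak.
    r_peaks = np.asarray(r_peak_times)
    events = np.asarray(event_times)
    idx = np.searchsorted(r_peaks, events) - 1
    delay = events - r_peaks[idx]
    return np.where(delay < systole_window, 'systole', 'diastole')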

Added: Oct 27, 2020
Article
Mahjoory K., Nikulin V., Botrel L. et al. Neuroimage. 2017. Vol. 152. P. 590-601.

As the EEG inverse problem does not have a unique solution, the sources reconstructed from EEG and their connectivity properties depend on forward and inverse modeling parameters such as the choice of an anatomical template and electrical model, prior assumptions on the sources, and further implementational details. In order to use source connectivity analysis as a reliable research tool, there is a need for stability across a wider range of standard estimation routines. Using resting state EEG recordings of N=65 participants acquired within two studies, we present the first comprehensive assessment of the consistency of EEG source localization and functional/effective connectivity metrics across two anatomical templates (ICBM152 and Colin27), three electrical models (BEM, FEM and spherical harmonics expansions), three inverse methods (WMNE, eLORETA and LCMV), and three software implementations (Brainstorm, Fieldtrip and our own toolbox). Source localizations were found to be more stable across reconstruction pipelines than subsequent estimations of functional connectivity, while effective connectivity estimates were the least consistent. All results were relatively unaffected by the choice of the electrical head model, while the choice of the inverse method and source imaging package induced considerable variability. In particular, a relatively strong difference was found between LCMV beamformer solutions on one hand and eLORETA/WMNE distributed inverse solutions on the other hand. We also observed a gradual decrease of consistency when results are compared between studies, within individual participants, and between individual participants. In order to provide reliable findings in the face of the observed variability, additional simulations involving interacting brain sources are required. Meanwhile, we encourage verification of the obtained results using more than one source imaging procedure.
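
A simple way to quantify the kind of cross-pipeline consistency examined here is the mean pairwise rank correlation between estimates mapped to a common parcellation; the sketch below is an illustrative assumption about how such an index could be computed, not the paper's metric.

import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

def cross_pipeline_consistency(estimates):
    # estimates: dict mapping pipeline name -> 1-D vector of source power (or connectivity strength)
    # over a common set of regions. Returns the mean pairwise Spearman correlation.
    rhos = [spearmanr(estimates[a], estimates[b])[0] for a, b in combinations(estimates, 2)]
    return float(np.mean(rhos))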

Added: May 11, 2017
Article
Vukovic N., Shtyrov Y. Neuroimage. 2014. Vol. 102. Part 2. P. 695-703.

Understanding neurocognitive mechanisms supporting the use of multiple languages is a key question in language science. Recent neuroimaging studies in monolinguals indicated that core language areas in human neocortex together with sensorimotor structures form a highly interactive system underpinning native language comprehension. While the experience of a native speaker promotes the establishment of strong action-perception links in the comprehension network, this may not necessarily be the case for L2 where, as it has been argued, the most a typical L2 speaker may get is a link between an L2 wordform and its L1 translation equivalent. Therefore, we investigated whether the motor cortex of bilingual subjects shows differential involvement in processing action semantics of native and non-native words. We used high-density EEG to dynamically measure changes in the cortical motor system's activity, indexed by event-related desynchronisation (ERD) of the mu-rhythm, in response to passively reading L1 (German) and L2 (English) action words. Analysis of motor-related EEG oscillations at the sensor level revealed an early (starting ~150 ms) and left-lateralised coupling between action and semantics during both L1 and L2 processing. Crucially, source-level activation in the motor areas showed that mu-rhythm ERD, while present for both languages, is significantly stronger for L1 words. This is the first neurophysiological evidence of rapid motor-cortex involvement during L2 action-semantic processing. Our results both strengthen embodied cognition evidence obtained previously in monolinguals and, at the same time, reveal important quantitative differences between L1 and L2 sensorimotor brain activity in language comprehension.
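
For readers unfamiliar with ERD, it is typically the percentage change of band power in a post-stimulus window relative to a pre-stimulus baseline; the sketch below uses assumed window boundaries and filter settings, not the study's exact parameters.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def erd_percent(epochs, fs, band=(8, 13), baseline=(0.0, 0.5), test=(0.65, 1.0)):
    # Event-related desynchronisation of band power in percent, for single-channel
    # epochs of shape (n_trials, n_samples); windows are given in seconds from epoch onset.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
    power = np.abs(hilbert(filtfilt(b, a, epochs, axis=1), axis=1)) ** 2
    mean_power = lambda w: power[:, int(w[0] * fs):int(w[1] * fs)].mean()
    ref = mean_power(baseline)
    return 100.0 * (mean_power(test) - ref) / ref   # negative values indicate desynchronisation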

Added: Oct 23, 2014
Article
Shtyrov Y., Vukovic N. Neuroimage. 2017. Vol. 161. P. 120-133.

To help us live in the three-dimensional world, our brain integrates incoming spatial information into reference frames, which are based either on our own body (egocentric) or independent from it (allocentric). Such frames, however, may be crucial not only when interacting with the visual world, but also in language comprehension, since even the simplest utterance can be understood from different perspectives. While significant progress has been made in elucidating how linguistic factors, such as pronouns, influence reference frame adoption, the neural underpinnings of this ability are largely unknown. Building on the neural reuse framework, this study tested the hypothesis that reference frame processing in language comprehension involves mechanisms used in navigation and spatial cognition. We recorded EEG activity in 28 healthy volunteers to identify spatiotemporal dynamics in (1) spatial navigation, and (2) a language comprehension task (sentence-picture matching). By decomposing the EEG signal into a set of maximally independent activity patterns, we localised and identified a subset of components which best characterised perspective-taking in both domains. Remarkably, we find individual co-variability across these tasks: people's strategies in spatial navigation are also reflected in their construction of sentential perspective. Furthermore, a distributed network of cortical generators of such strategy-dependent activity responded not only in navigation, but in sentence comprehension. Thus we report, for the first time, evidence for shared brain mechanisms across these two domains - advancing our understanding of language's interaction with other cognitive systems, and the individual differences shaping comprehension.

Added: Oct 23, 2017
Article
Haufe S., Dähne S., Nikulin V. Neuroimage. 2014. Vol. 101. P. 583-597.

Neuronal oscillations have been shown to be associated with perceptual, motor and cognitive brain operations. While complex spatio-temporal dynamics are a hallmark of neuronal oscillations, they also represent a formidable challenge for the proper extraction and quantification of oscillatory activity with non-invasive recording techniques such as EEG and MEG. In order to facilitate the study of neuronal oscillations we present a general-purpose pre-processing approach, which can be applied for a wide range of analyses including but not restricted to inverse modeling and multivariate single-trial classification. The idea is to use dimensionality reduction with spatio-spectral decomposition (SSD) instead of the commonly and almost exclusively used principal component analysis (PCA). The key advantage of SSD lies in selecting components explaining oscillation-related variance instead of just any variance as in the case of PCA. For the validation of SSD pre-processing we performed extensive simulations with different inverse modeling algorithms and signal-to-noise ratios. In all these simulations SSD invariably outperformed PCA often by a large margin. Moreover, using a database of multichannel EEG recordings from 80 subjects we show that pre-processing with SSD significantly increases the performance of single-trial classification of imagined movements, compared to the classification with PCA pre-processing or without any dimensionality reduction. Our simulations and analysis of real EEG experiments show that, while not being supervised, the SSD algorithm is capable of extracting components primarily relating to the signal of interest often using as little as 20% of the data variance, instead of > 90% variance as in the case of PCA. Given its ease of use, absence of supervision, and capability to efficiently reduce the dimensionality of multivariate EEG/MEG data, we advocate the application of SSD pre-processing for the analysis of spontaneous and induced neuronal oscillations in normal subjects and patients.
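
At its core, SSD can be written as a generalized eigenvalue problem contrasting the covariance of signal-band-filtered data with that of flanking-band-filtered data; the sketch below is a simplified rendition with assumed filter orders and band edges, not the authors' reference implementation.

import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh

def ssd(X, fs, signal_band=(9, 13), flank_band=(7, 15), n_comp=4):
    # Spatio-spectral decomposition sketch for X of shape (n_channels, n_samples):
    # spatial filters maximizing power in the signal band relative to the flanking frequencies.
    def bp(data, lo, hi, btype='band'):
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype=btype)
        return filtfilt(b, a, data, axis=1)
    Xs = bp(X, *signal_band)                                      # signal band
    Xn = bp(bp(X, *flank_band), *signal_band, btype='bandstop')   # flanking bands only
    evals, W = eigh(np.cov(Xs), np.cov(Xn))                       # generalized eigenvalue problem
    W = W[:, np.argsort(evals)[::-1]][:, :n_comp]                 # filters with the highest band SNR
    return W.T @ X, W                                             # component time courses and filters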

Added: Oct 23, 2014
Article
Herrojo-Ruiz M. D., Strübing F., Jabusch H. et al. Neuroimage. 2011. Vol. 55. No. 4. P. 1791-1803.

Skilled performance requires the ability to monitor ongoing behavior, detect errors in advance and modify the performance accordingly. The acquisition of fast predictive mechanisms might be possible due to the extensive training characterizing expertise performance. Recent EEG studies on piano performance reported a negative event-related potential (ERP) triggered in the ACC 70 ms before performance errors (pitch errors due to incorrect keypress). This ERP component, termed pre-error related negativity (pre-ERN), was assumed to reflect processes of error detection in advance. However, some questions remained to be addressed: (i) Does the electrophysiological marker prior to errors reflect an error signal itself or is it related instead to the implementation of control mechanisms? (ii) Does the posterior frontomedial cortex (pFMC, including ACC) interact with other brain regions to implement control adjustments following motor prediction of an upcoming error? (iii) Can we gain insight into the electrophysiological correlates of error prediction and control by assessing the local neuronal synchronization and phase interaction among neuronal populations? (iv) Finally, are error detection and control mechanisms defective in pianists with musician's dystonia (MD), a focal task-specific dystonia resulting from dysfunction of the basal ganglia–thalamic–frontal circuits? Consequently, we investigated the EEG oscillatory and phase synchronization correlates of error detection and control during piano performances in healthy pianists and in a group of pianists with MD. In healthy pianists, the main outcomes were increased pre-error theta and beta band oscillations over the pFMC and 13–15 Hz phase synchronization, between the pFMC and the right lateral prefrontal cortex, which predicted corrective mechanisms. In MD patients, the pattern of phase synchronization appeared in a different frequency band (6–8 Hz) and correlated with the severity of the disorder. The present findings shed new light on the neural mechanisms, which might implement motor prediction by means of forward control processes, as they function in healthy pianists and in their altered form in patients with MD.
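
As a reference point for the phase-synchronization analyses mentioned above, the phase-locking value between two band-filtered signals can be computed as in the sketch below; the filter settings are illustrative assumptions, and the study's synchronization index may differ.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs, band=(13, 15)):
    # Phase-locking value between two 1-D signals in a given band:
    # 0 = no consistent phase relation, 1 = perfect phase locking.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))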

Added: Nov 2, 2020
Article
Novitski N., Alho K., Korzyukov O. et al. Neuroimage. 2001. Vol. 14. P. 244-251.

The processing of sound changes and involuntary attention to them has been widely studied with event-related brain potentials (ERPs). Recently, functional magnetic resonance imaging (fMRI) has been applied to determine the neural mechanisms of involuntary attention and the sources of the corresponding ERP components. The gradient-coil switching noise from the MRI scanner, however, is a challenge to any experimental design using auditory stimuli. In the present study, the effects of MRI noise on ERPs associated with preattentive processing of sound changes and involuntary switching of attention to them were investigated. Auditory stimuli consisted of frequently presented “standard” sounds, infrequent, slightly higher “deviant” sounds, and infrequent natural “novel” sounds. The standard and deviant sounds were either sinusoidal tones or musical chords, in separate stimulus sequences. The mismatch negativity (MMN) ERP associated with preattentive sound change detection was elicited by the deviant and novel sounds and was not affected by the prerecorded background MRI noise (in comparison with the condition with no background noise). The succeeding positive P3a ERP responses associated with involuntary attention switching elicited by novel sounds were also not affected by the MRI noise. However, in ERPs to standard tones and chords, the P1, N1, and P2 peak latencies were significantly prolonged by the MRI noise. Moreover, the amplitude of the subsequent “exogenous” N2 to the standard sounds was significantly attenuated by the presence of MRI noise. In conclusion, the present results suggest that in fMRI the background noise does not interfere with the imaging of auditory processing related to involuntary attention.

Added: Jul 10, 2015
Article
Novitski N., Anourova I., Martinkauppi S. et al. Neuroimage. 2003. Vol. 20. No. 2. P. 1320-1328.

The effects of functional magnetic resonance imaging (fMRI) acoustic noise on the parameters of event-related potentials (ERPs) elicited during auditory matching-to-sample location and pitch working memory tasks were investigated. Stimuli were tones with varying location (left or right) and frequency (high or low). Subjects were instructed to memorize and compare either the locations or frequencies of the stimuli with each other. Tape-recorded fMRI acoustic noise was presented in half of the experimental blocks. The fMRI noise considerably enhanced the P1 component, reduced the amplitude and increased the latency of the N1, shortened the latency of the N2, and enhanced the amplitude of the P3 in both tasks. The N1 amplitude was higher in the location than pitch task in both noise and no-noise blocks, whereas the task-related N1 latency difference was present in the no-noise blocks only. Although the task-related differences between spatial and nonspatial auditory responses were partially preserved in noise, the finding that the acoustic gradient noise accompanying functional MR imaging modulated the auditory ERPs implies that the noise may confound the results of auditory fMRI experiments, especially when studying higher cognitive processing.

Added: Jul 10, 2015
Article
Vidaurre C., Ramos-Murguialday A., Haufe S. et al. Neuroimage. 2019. Vol. 199. P. 375-386.

An important goal in Brain-Computer Interfacing (BCI) is to find and enhance procedural strategies for users for whom BCI control is not sufficiently accurate. To address this challenge, we conducted offline analyses and online experiments to test whether the classification of different types of motor imagery could be improved when the training of the classifier was performed on the data obtained with assistive muscular stimulation below the motor threshold. Ten healthy participants underwent three different experimental conditions: a) motor imagery (MI) of hands and feet; b) sensory threshold neuromuscular electrical stimulation (STM) of hands and feet while resting; and c) sensory threshold neuromuscular electrical stimulation during performance of motor imagery (BOTH). Also, another group of 10 participants underwent conditions a) and c). Then, online experiments with 15 users were performed. These subjects received neurofeedback during MI using classifiers calibrated either on MI or BOTH data recorded in the same experiment. Offline analyses showed that decoding MI alone using a classifier based on BOTH resulted in a better BCI accuracy compared to using a classifier based on MI alone. Online experiments confirmed accuracy improvement of MI alone being decoded with the classifier trained on BOTH data. In addition, we observed that the performance in the MI condition could be predicted on the basis of a more pronounced connectivity within sensorimotor areas in the frequency bands providing the best performance in BOTH. These findings might offer a new avenue for training SMR-based BCI systems, particularly for users who have difficulty achieving efficient BCI control. It might also be an alternative strategy for users who cannot perform real movements but still have remaining afferent pathways (e.g., ALS and stroke patients).
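
The cross-condition training scheme described above can be illustrated with a toy sketch: fit a classifier on features from BOTH-condition trials and evaluate it on MI-only trials. The log-variance features, the LDA classifier and the argument names are hypothetical stand-ins, not the study's pipeline.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def logvar_features(epochs):
    # Log-variance per channel for band-pass-filtered epochs of shape (n_trials, n_channels, n_samples).
    return np.log(np.var(epochs, axis=2))

def train_on_both_test_on_mi(X_both, y_both, X_mi, y_mi):
    # X_*: epochs as above, y_*: class labels (e.g. hands vs. feet).
    clf = LinearDiscriminantAnalysis()
    clf.fit(logvar_features(X_both), y_both)       # calibrate on MI + sensory-threshold stimulation
    return clf.score(logvar_features(X_mi), y_mi)  # accuracy when decoding MI alone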

Added: Oct 28, 2019
Article
Innocenti I., Giovannelli F., Cincotta M. et al. Neuroimage. 2010. Vol. 53. No. 1. P. 325-330.

The "level of processing" effect is a classical finding of the experimental psychology of memory. Actually, the depth of information processing at encoding predicts the accuracy of the subsequent episodic memory performance. When the incoming stimuli are analyzed in terms of their meaning (semantic, or deep, encoding), the memory performance is superior with respect to the case in which the same stimuli are analyzed in terms of their perceptual features (shallow encoding). As suggested by previous neuroimaging studies and by some preliminary findings with transcranial magnetic stimulation (TMS), the left prefrontal cortex may play a role in semantic processing requiring the allocation of working memory resources. However, it still remains unclear whether deep and shallow encoding share or not the same cortical networks, as well as how these networks contribute to the "level of processing" effect. To investigate the brain areas casually involved in this phenomenon, we applied event-related repetitive TMS (rTMS) during deep (semantic) and shallow (perceptual) encoding of words. Retrieval was subsequently tested without rTMS interference. RTMS applied to the left dorsolateral prefrontal cortex (DLPFC) abolished the beneficial effect of deep encoding on memory performance, both in terms of accuracy (decrease) and reaction times (increase). Neither accuracy nor reaction times were instead affected by rTMS to the right DLPFC or to an additional control site excluded by the memory process (vertex). The fact that online measures of semantic processing at encoding were unaffected suggests that the detrimental effect on memory performance for semantically encoded items took place in the subsequent consolidation phase. These results highlight the specific causal role of the left DLPFC among the wide left-lateralized cortical network engaged by long-term memory, suggesting that it probably represents a crucial node responsible for the improved memory performance induced by semantic processing.

Added: Sep 13, 2015
Article
Dähne S., Nikulin V., Ramírez D. et al. Neuroimage. 2014. Vol. 96. P. 334-348.

Phase synchronization among neuronal oscillations within the same frequency band has been hypothesized to be a major mechanism for communication between different brain areas. On the other hand, cross-frequency communications are more flexible allowing interactions between oscillations with different frequencies. Among such cross-frequency interactions, amplitude-to-amplitude interactions are of special interest, as they show how the strength of spatial synchronization in different neuronal populations relates to each other during a given task. While, previously, amplitude-to-amplitude correlations were studied primarily on the sensor level, we present a source separation approach using spatial filters which maximize the correlation between the envelopes of brain oscillations recorded with electro-/magnetoencephalography (EEG/MEG) or intracranial multichannel recordings. Our approach, which is called canonical source power correlation analysis (cSPoC), is thereby capable of extracting genuine brain oscillations solely based on their assumed coupling behavior even when the signal-to-noise ratio of the signals is low. In addition to using cSPoC for the analysis of cross-frequency interactions in the same subject, we show that it can also be utilized for studying amplitude dynamics of neuronal oscillations across subjects. We assess the performance of cSPoC in simulations as well as in three distinctively different analysis scenarios of real EEG data, each involving several subjects. In the simulations, cSPoC outperforms unsupervised state-of-the-art approaches. In the analysis of real EEG recordings, we demonstrate excellent unsupervised discovery of meaningful power-to-power couplings, within as well as across subjects and frequency bands.
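
Conceptually, the method searches for a pair of spatial filters whose projected amplitude envelopes correlate maximally; the brute-force sketch below conveys that objective with a generic gradient-free optimizer and should not be mistaken for the published cSPoC algorithm, which uses analytic gradients and further refinements.

import numpy as np
from scipy.signal import hilbert
from scipy.optimize import minimize

def envelope_correlation_filters(Xa, Xb):
    # Xa (n_a, T) and Xb (n_b, T): two band-pass-filtered multichannel recordings.
    # Returns spatial filters wa, wb approximately maximizing the correlation of the
    # amplitude envelopes of the projected signals, plus the achieved correlation.
    n_a = Xa.shape[0]
    envelope = lambda w, X: np.abs(hilbert(w @ X))
    def neg_abs_corr(w):
        ea, eb = envelope(w[:n_a], Xa), envelope(w[n_a:], Xb)
        return -np.abs(np.corrcoef(ea, eb)[0, 1])
    w0 = np.random.randn(n_a + Xb.shape[0])
    result = minimize(neg_abs_corr, w0, method='Nelder-Mead')
    return result.x[:n_a], result.x[n_a:], -result.fun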

Added: Oct 23, 2014
Article
Partanen E., Leminen A., de Paoli S. et al. Neuroimage. 2017.

Children learn new words and word forms with ease, often acquiring a new word after very few repetitions. Recent neurophysiological research on word form acquisition in adults indicates that novel words can be acquired within minutes of repetitive exposure to them, regardless of the individual's focused attention on the speech input. Although it is well-known that children surpass adults in language acquisition, the developmental aspects of such rapid and automatic neural acquisition mechanisms remain unexplored. To address this open question, we used magnetoencephalography (MEG) to scrutinise brain dynamics elicited by spoken words and word-like sounds in healthy monolingual (Danish) children throughout a 20-min repetitive passive exposure session. We found rapid neural dynamics manifested as an enhancement of early (~100 ms) brain activity over the short exposure session, with distinct spatiotemporal patterns for different novel sounds. For novel Danish word forms, signs of such enhancement were seen in the left temporal regions only, suggesting reliance on pre-existing language circuits for acquisition of novel word forms with native phonology. In contrast, exposure both to novel word forms with non-native phonology and to novel non-speech sounds led to activity enhancement in both left and right hemispheres, suggesting that more wide-spread cortical networks contribute to the build-up of memory traces for non-native and non-speech sounds. Similar studies in adults have previously reported more sluggish (~15–25 min, as opposed to 4 min in the present study) or non-existent neural dynamics for non-native sound acquisition, which might be indicative of a higher degree of plasticity in the children's brain. Overall, the results indicate a rapid and highly plastic mechanism for a dynamic build-up of memory traces for novel acoustic information in the children's brain that operates automatically and recruits bilateral temporal cortical circuits.

Added: Jun 6, 2017
Article
Moseley R., Shtyrov Y., Mohr B. et al. Neuroimage. 2015. Vol. 104. P. 413-422.

Autism spectrum conditions (ASC) are characterised by deficits in understanding and expressing emotions and are frequently accompanied by alexithymia, a difficulty in understanding and expressing emotion words. Words are differentially represented in the brain according to their semantic category and these difficulties in ASC predict reduced activation to emotion-related words in limbic structures crucial for affective processing. Semantic theories view 'emotion actions' as critical for learning the semantic relationship between a word and the emotion it describes, such that emotion words typically activate the cortical motor systems involved in expressing emotion actions such as facial expressions. As ASC are also characterised by motor deficits and atypical brain structure and function in these regions, motor structures would also be expected to show reduced activation during emotion-semantic processing. Here we used event-related fMRI to compare passive processing of emotion words in comparison to abstract verbs and animal names in typically-developing controls and individuals with ASC. Relatively reduced brain activation in ASC for emotion words, but not matched control words, was found in motor areas and cingulate cortex specifically. The degree of activation evoked by emotion words in the motor system was also associated with the extent of autistic traits as revealed by the Autism Spectrum Quotient. We suggest that hypoactivation of motor and limbic regions for emotion-word processing may underlie difficulties in processing emotional language in ASC. The role that sensorimotor systems and their connections might play in the affective and social-communication difficulties in ASC is discussed.

Added: Oct 23, 2014
Article
Yaple Z., Arsalidou M. Neuroimage. 2019. Vol. 196. P. 16-31.

Working memory, a fundamental cognitive function that is highly dependent on the integrity of the prefrontal cortex, is known to show age-related declines across the typical healthy adult lifespan. Moreover, we know from work in neurophysiology that the prefrontal cortex is disproportionately susceptible to the pathological effects of aging. The n-back task is arguably the most ubiquitous cognitive task for investigating working memory performance. Many functional magnetic resonance imaging (fMRI) studies examine brain regions engaged during performance of the n-back task in adults. The current meta-analyses are the first to examine concordance and age-related changes across the healthy adult lifespan in brain areas engaged when performing the n-back task. We compile data from eligible fMRI articles that report stereotaxic coordinates of brain activity from healthy adults in three age-groups: young (23.57 ± 5.63 years), middle-aged (38.13 ± 5.63 years) and older (66.86 ± 5.70 years) adults. Findings show that the three groups share concordance in the engagement of parietal and cingulate cortices, which have been consistently identified as core areas involved in working memory; as well as the insula, claustrum, and cerebellum, which have not been highlighted as areas involved in working memory. Critically, prefrontal cortex engagement is concordant for young adults, present to a lesser degree for middle-aged adults, and absent in older adults, suggesting a gradual linear decline in prefrontal cortex engagement. Our results provide important new knowledge for improving methodology and theories of cognition across the lifespan.

Added: Apr 9, 2019