The spontaneous oscillatory activity of the human brain shows long-range temporal correlations (LRTC) that extend over time scales of seconds to minutes. Previous research has demonstrated aberrant LRTC in depressed patients; however, it is unknown whether these neuronal dynamics normalize after psychological treatment. In this study, we recorded EEG during eyes-closed rest in depressed patients (N = 71) and healthy controls (N = 25), and investigated the temporal dynamics of depressed patients at baseline and after they attended either a brief mindfulness training or a stress reduction training. Compared to the healthy controls, depressed patients showed stronger LRTC in theta oscillations (4–7 Hz) at baseline. Following the psychological interventions, both groups of patients demonstrated reduced LRTC in the theta band. The reduction of theta LRTC differed marginally between the groups, and exploratory analyses of the separate groups revealed noteworthy topographic differences. A positive relationship between changes in LRTC and changes in depressive symptoms was observed in the mindfulness group. In summary, our data show that the aberrant temporal dynamics of ongoing oscillations in depressed patients are attenuated after treatment; these findings may help uncover the mechanisms by which psychotherapeutic interventions affect the brain.
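LRTC in oscillation amplitude envelopes are commonly quantified with detrended fluctuation analysis (DFA), whose scaling exponent separates uncorrelated signals (alpha ≈ 0.5) from long-range-correlated ones (alpha > 0.5). The abstract does not specify the implementation, so the following Python sketch is a generic, illustrative DFA; the function name and the choice of window scales are our own, not the authors':

```python
import numpy as np

def dfa_exponent(signal, scales):
    """Detrended fluctuation analysis (DFA).

    Returns the scaling exponent alpha: ~0.5 for an uncorrelated
    signal, >0.5 when long-range temporal correlations are present.
    """
    x = np.cumsum(signal - np.mean(signal))    # integrated "profile"
    fluct = []
    for s in scales:
        n_win = len(x) // s
        rms = []
        for i in range(n_win):
            seg = x[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)       # linear detrend per window
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        fluct.append(np.mean(rms))
    # alpha is the slope of log(fluctuation) vs. log(scale)
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return alpha
```

As a sanity check, white noise yields alpha near 0.5 and a random walk (integrated noise) yields alpha near 1.5; in EEG applications the input would be the band-limited amplitude envelope rather than the raw signal.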
Previous electrophysiological studies of automatic language processing revealed early (100-200 ms) reflections of access to lexical characteristics of the speech signal using the so-called mismatch negativity (MMN), a negative ERP deflection elicited by infrequent irregularities in unattended repetitive auditory stimulation. In those studies, lexical processing of spoken stimuli became manifest as an enhanced ERP in response to unattended real words, as opposed to phonologically matched but meaningless pseudoword stimuli. This lexical ERP enhancement was explained by automatic activation of word memory traces realized as distributed, strongly intra-connected neuronal circuits, whose robustness guarantees memory trace activation even in the absence of attention to the spoken input. Such an account would predict the automatic activation of these memory traces upon any presentation of linguistic information, irrespective of the presentation modality. As previous lexical MMN studies exclusively used auditory stimulation, we here adapted the lexical MMN paradigm to investigate early automatic lexical effects in the visual modality. In a visual oddball sequence, matched short word and pseudoword stimuli were presented tachistoscopically in the perifoveal area outside the visual focus of attention, as the subjects' attention was concentrated on a concurrent non-linguistic visual dual task in the center of the screen. Using EEG, we found a visual analogue of the lexical ERP enhancement effect, with unattended written words producing larger brain response amplitudes than matched pseudowords, starting at ~100 ms. Furthermore, we also found a significant visual MMN, reported here for the first time for unattended perifoveal lexical stimuli. The data suggest early automatic lexical processing of visually presented language, which commences rapidly and can take place outside the focus of attention.
Identifying facial expressions is crucial for social interactions. Functional neuroimaging studies show that a set of brain areas, such as the fusiform gyrus and amygdala, become active when viewing emotional facial expressions. The majority of functional magnetic resonance imaging (fMRI) studies investigating face perception employ static images of faces. However, studies that use dynamic facial expressions (e.g., videos) are accumulating and suggest that dynamic presentation may be more sensitive and ecologically valid for investigating face perception. Using quantitative fMRI meta-analysis, the present study examined the concordance of brain regions associated with viewing dynamic facial expressions. We analyzed data from 216 participants who took part in 14 studies, which reported coordinates for 28 experiments. Our analysis revealed concordant activation in the bilateral fusiform and middle temporal gyri, the left amygdala, the left declive of the cerebellum, and the right inferior frontal gyrus. These regions are discussed in terms of their relation to models of face processing.
The contribution of two different training contexts to online, gradual lexical acquisition was investigated using event-related potentials (ERPs) elicited by new, word-like stimuli. Pseudowords were repeatedly preceded by a picture representing a well-known object (semantic-associative training context) or by a hash mark (non-associative training context). The two training contexts revealed differential effects of repetition in both behavioral and ERP data. Repetition of pseudowords not associated with any stimulus gradually enhanced the late positive component (LPC) and speeded lexical categorization of these stimuli, suggesting the formation of episodic memory traces. However, repetition in the semantic-associative context produced a greater reduction in the N400 component and in categorization latencies. This result suggests facilitation of the lexico-semantic processing of pseudowords as a consequence of their progressive association with picture-concepts, going beyond the visual memory trace that is generated in the non-associative context.
As adults, we solve problems by applying our executive know-how and directing our mental attention to relevant information. When we are not problem solving, our mind is free to wander to things like lunchtime; this is often referred to as the default mode. It is established that for adults the relation between executive and default-mode brain areas is negative (Fox et al., 2005; Arsalidou et al., 2013). Parts of the prefrontal cortex are involved in both the executive and default-mode networks.
Concurrent EEG and fMRI acquisitions in the resting state have shown correlations between EEG power in various frequency bands and spontaneous BOLD fluctuations. However, there is a lack of data on how changes in the complexity of brain dynamics derived from EEG reflect variations in the BOLD signal. The purpose of our study was to correlate both spectral patterns, as linear features of EEG rhythms, and nonlinear EEG dynamic complexity with neuronal activity obtained by fMRI. We examined the relationships between EEG patterns and brain activation obtained by simultaneous EEG-fMRI during the resting state condition in 25 healthy right-handed adult volunteers. Using EEG-derived regressors, we demonstrated a substantial correlation of BOLD signal changes with linear and nonlinear features of the EEG. We found the most significant positive correlation of the fMRI signal with delta spectral power. Beta and alpha spectral features had no reliable effect on BOLD fluctuations. However, dynamic changes in alpha peak frequency exhibited a significant association with BOLD signal increases in right-hemisphere areas. Additionally, EEG dynamic complexity, as measured by the Higuchi fractal dimension (HFD) of the 2–20 Hz EEG frequency range, significantly correlated with the activation of cortical and subcortical limbic system areas. Our results indicate that both spectral features of EEG frequency bands and nonlinear dynamic properties of spontaneous EEG are strongly associated with fluctuations of the BOLD signal during the resting state condition.
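The Higuchi fractal dimension used above as a nonlinear complexity measure can be computed directly from a single time series. The abstract gives no implementation details, so this Python sketch follows the standard Higuchi (1988) algorithm; the `kmax` parameter value is an illustrative choice, not taken from the study:

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Higuchi fractal dimension of a 1-D time series.

    Values near 1 indicate a smooth (e.g., linear) signal; values
    near 2 indicate a maximally irregular, noise-like signal.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    curve_len = []
    for k in range(1, kmax + 1):
        lm = []
        for m in range(k):
            seg = x[np.arange(m, n, k)]        # subsampled series x[m], x[m+k], ...
            # normalized curve length of the subsampled series
            length = np.abs(np.diff(seg)).sum() * (n - 1) / ((len(seg) - 1) * k * k)
            lm.append(length)
        curve_len.append(np.mean(lm))
    k_vals = np.arange(1, kmax + 1)
    # fractal dimension = slope of log(curve length) vs. log(1/k)
    fd, _ = np.polyfit(np.log(1.0 / k_vals), np.log(curve_len), 1)
    return fd
```

In practice the EEG would first be band-pass filtered to the range of interest (here 2–20 Hz) before the HFD is computed, typically within short sliding windows to obtain a regressor for the fMRI analysis.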
Although cerebral palsy (CP) is among the most common causes of physical disability in early childhood, we know little about the functional and structural changes of this disorder in the developing brain. Here, we investigated with three different neuroimaging modalities [magnetoencephalography (MEG), diffusion tensor imaging (DTI), and resting-state fMRI] whether spastic CP is associated with functional and anatomical abnormalities in the sensorimotor network. Ten children participated in the study: four with diplegic CP (DCP), three with hemiplegic CP (HCP), and three typically developing (TD) children. Somatosensory (SS)-evoked fields (SEFs) were recorded in response to pneumatic stimuli applied to digits D1, D3, and D5 of both hands. Several parameters of water diffusion were calculated from DTI between the thalamus and the pre-central and post-central gyri in both hemispheres. The sensorimotor resting-state networks (RSNs) were examined by using an independent component analysis method. Tactile stimulation of the fingers elicited the first prominent cortical response at ~50 ms, in all except one child, localized over the primary SS cortex (S1). In five CP children, abnormal somatotopic organization was observed in the affected (or more affected) hemisphere. Euclidean distances were markedly different between the two hemispheres in the HCP children, and between DCP and TD children for both hemispheres. DTI analysis revealed decreased fractional anisotropy and increased apparent diffusion coefficient for the thalamocortical pathways in the more affected compared to less affected hemisphere in CP children. Resting-state functional MRI results indicated absent and/or abnormal sensorimotor RSNs for children with HCP and DCP, consistent with the severity and location of their lesions. Our findings suggest an abnormal SS processing mechanism in the sensorimotor network of children with CP, possibly as a result of diminished thalamocortical projections.
Neural processing by mental athletes (MAs) has received attention from the neuroscience community, with several publications examining superior memorizers (Maguire et al., 2003; Bor et al., 2008), lightning calculators (Pesenti et al., 2001; Fehr et al., 2010), and savants (Treffert, 2009). In this opinion article, we contend that the presumption of extraordinary abilities in MAs is fundamentally flawed because their demonstrations involve tricks that regular individuals can learn. Since these tricks easily escape the scrutiny of investigators, a high standard of rigor should be applied to research on MAs.
This paper presents the case of a 17-year-old right-handed Belgian boy with developmental foreign accent syndrome (FAS) and comorbid developmental apraxia of speech (DAS). Extensive neuropsychological and neurolinguistic investigations demonstrated a normal IQ but impaired planning (visuo-constructional dyspraxia). A Tc-99m-ECD SPECT revealed a significant hypoperfusion in the prefrontal and medial frontal regions, as well as in the lateral temporal regions. Hypoperfusion in the right cerebellum almost reached significance. It is hypothesized that these clinical findings support the view that FAS and DAS are related phenomena following impairment of the cerebro-cerebellar network.
Music performance relies on the ability to learn and execute actions and their associated sounds. The process of learning these auditory-motor contingencies depends on the proper encoding of the serial order of the actions and sounds. Among the different serial positions of a behavioral sequence, the first and last (boundary) elements are particularly relevant. Animal and patient studies have demonstrated a specific neural representation for boundary elements in prefrontal cortical regions and in the basal ganglia, highlighting the relevance of their proper encoding. The neural mechanisms underlying the encoding of sequence boundaries in the general human population remain, however, largely unknown. In this study, we examined how alterations of auditory feedback, introduced at different ordinal positions (boundary or within-sequence element), affect the neural and behavioral responses during sensorimotor sequence learning. Analysing the neuromagnetic signals from 20 participants while they performed short piano sequences under the occasional effect of altered feedback (AF), we found that at around 150-200 ms post-keystroke, the neural activities in the dorsolateral prefrontal cortex (DLPFC) and supplementary motor area (SMA) were dissociated for boundary and within-sequence elements. Furthermore, the behavioral data demonstrated that feedback alterations on boundaries led to greater performance costs, such as more errors in the subsequent keystrokes. These findings jointly support the idea that the proper encoding of boundaries is critical in acquiring sensorimotor sequences. They also provide evidence for the involvement of a distinct neural circuitry in humans including prefrontal and higher-order motor areas during the encoding of the different classes of serial order.
Although language is a tool for communication, most research in the neuroscience of language has focused on studying words and sentences, while little is known about the brain mechanisms of speech acts, or communicative functions, for which words and sentences are used as tools. Here the neural processing of two types of speech acts, Naming and Requesting, was addressed using the time-resolved event-related potential (ERP) technique. The brain responses for Naming and Request diverged as early as ~120 ms after the onset of the critical words, at the same time as, or even before, the earliest brain manifestations of semantic word properties could be detected. Request-evoked potentials were generally larger in amplitude than those for Naming. The use of identical words in closely matched settings for both speech acts rules out explanation of the difference in terms of phonological, lexical, semantic properties, or word expectancy. The cortical sources underlying the ERP enhancement for Requests were found in the fronto-central cortex, consistent with the activation of action knowledge, as well as in the right temporo-parietal junction (TPJ), possibly reflecting additional implications of speech acts for social interaction and theory of mind. These results provide the first evidence for surprisingly early access to pragmatic and social interactive knowledge, which possibly occurs in parallel with other types of linguistic processing, and thus supports the near-simultaneous access to different subtypes of psycholinguistic information.
Sensitivity to regularities plays a crucial role in the acquisition of various linguistic features from spoken language input. Artificial grammar learning paradigms explore pattern recognition abilities in a set of structured sequences (i.e., of syllables or letters). In the present study, we investigated the functional underpinnings of learning phonological regularities in auditorily presented syllable sequences. While previous neuroimaging studies either focused on functional differences between the processing of correct vs. incorrect sequences or between different levels of sequence complexity, here the focus is on the neural foundation of the actual learning success. During functional magnetic resonance imaging (fMRI), participants were exposed to a set of syllable sequences with an underlying phonological rule system, known to ensure performance differences between participants. We expected that successful learning and rule application would require phonological segmentation and phoneme comparison. As an outcome of four alternating learning and test fMRI sessions, participants split into successful learners and non-learners. Relative to non-learners, successful learners showed increased task-related activity in a fronto-parietal network of brain areas encompassing the left lateral premotor cortex as well as bilateral superior and inferior parietal cortices during both learning and rule application. These areas were previously associated with phonological segmentation, phoneme comparison, and verbal working memory. Based on these activity patterns and the phonological strategies for rule acquisition and application, we argue that successful learning and processing of complex phonological rules in our paradigm is mediated via a fronto-parietal network for phonological processes.
Structural changes in the brain take place throughout one’s life. Changes related to cognitive decline may delay the stages of the speech production process in the aging brain. For example, semantic memory decline and poor inhibition may delay the retrieval of a concept from the mental lexicon. Electroencephalography (EEG) is a valuable method for identifying the timing of speech production stages. So far, studies using EEG have mainly focused on a particular speech production stage in a particular group of subjects. Differences between subject groups and between methodologies have made it difficult to identify the time windows of the speech production stages. For the current study, the speech production stages of lemma retrieval, lexeme retrieval, phonological encoding, and phonetic encoding were tracked using 64-channel EEG in 20 younger adults and 20 older adults. Picture-naming tasks were used to identify lemma retrieval (using semantic interference through previously named pictures from the same semantic category) and lexeme retrieval (using words with varying age of acquisition). Non-word reading was used to target phonological encoding (using non-words with a variable number of phonemes) and phonetic encoding (using non-words that differed in spoken syllable frequency). Stimulus-locked and response-locked cluster-based permutation analyses were used to identify the timing of these stages across the full time course of speech production, from stimulus presentation until 100 ms before response onset, in both subject groups. It was found that the timing of each speech production stage could be identified. Even though older adults showed longer response times for every task, only the timing of the lexeme retrieval stage was later for the older adults compared to the younger adults; no such delay was found for the timing of the other stages.
The results of a second cluster-based permutation analysis indicated that clusters that were observed in the timing of the stages for one group were absent in the other subject group, which was mainly the case in stimulus-locked time windows. A z-score mapping analysis was used to compare the scalp distributions related to the stages between the older and younger adults. No differences between both groups were observed with respect to scalp distributions, suggesting that the same groups of neurons are involved in the four stages, regardless of the adults’ age, even though the timing of the individual stages is different in both groups.
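Cluster-based permutation analysis, as used in both comparisons above, controls for multiple comparisons over time by grouping adjacent suprathreshold samples into clusters and testing each cluster's summed statistic against a permutation null distribution (cf. Maris & Oostenveld, 2007). The following Python sketch is a minimal single-channel illustration of the principle, not the authors' pipeline; the cluster-forming threshold, permutation count, and function names are arbitrary choices of ours:

```python
import numpy as np

def cluster_perm_test(a, b, n_perm=500, t_thresh=2.0, seed=None):
    """Single-channel cluster-based permutation test.

    a, b: (n_subjects, n_times) arrays for two groups. Returns a list
    of (start, end, p) tuples, one per suprathreshold cluster, where
    p is the Monte-Carlo probability of an equally massive cluster
    arising under random group assignment.
    """
    rng = np.random.default_rng(seed)

    def t_series(x, y):
        # Welch t-value at every time point
        se = np.sqrt(x.var(0, ddof=1) / len(x) + y.var(0, ddof=1) / len(y))
        return (x.mean(0) - y.mean(0)) / se

    def clusters(t):
        # contiguous runs of |t| above threshold, with summed |t| as "mass"
        mask = np.abs(t) > t_thresh
        out, start = [], None
        for i, hit in enumerate(mask):
            if hit and start is None:
                start = i
            elif not hit and start is not None:
                out.append((start, i, np.abs(t[start:i]).sum()))
                start = None
        if start is not None:
            out.append((start, len(mask), np.abs(t[start:]).sum()))
        return out

    observed = clusters(t_series(a, b))
    pooled, na = np.vstack([a, b]), len(a)
    null_max = np.empty(n_perm)
    for j in range(n_perm):
        order = rng.permutation(len(pooled))   # shuffle group labels
        cs = clusters(t_series(pooled[order[:na]], pooled[order[na:]]))
        null_max[j] = max((mass for *_, mass in cs), default=0.0)
    return [(s, e, (np.sum(null_max >= mass) + 1) / (n_perm + 1))
            for s, e, mass in observed]
```

Because inference is performed on whole clusters rather than individual samples, the procedure makes no claim about exact onset latencies, which is why the group comparison above is framed in terms of clusters being present or absent within time windows.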
Background: Demographic and clinical predictors of aphasia recovery have been identified in the literature. However, little attention has been devoted to identifying and distinguishing predictors of improvement for different outcomes, e.g., production of treated vs. untreated materials. These outcomes may rely on different mechanisms, and therefore be predicted by different variables. Furthermore, treatment features are not typically accounted for when studying predictors of aphasia recovery. This is partly due to the small numbers of cases reported in studies, but also to limitations of data analysis techniques usually employed. Method: We reviewed the literature on predictors of aphasia recovery, and conducted a meta-analysis of single-case studies designed to assess the efficacy of treatments for verb production. The contribution of demographic, clinical, and treatment-related variables was assessed by means of Random Forests (a machine-learning technique used in classification and regression). Two outcomes were investigated: production of treated (for 142 patients) and untreated verbs (for 166 patients). Results: Improved production of treated verbs was predicted by a three-way interaction of pre-treatment scores on tests for verb comprehension and word repetition, and the frequency of treatment sessions. Improvement in production of untreated verbs was predicted by an interaction including the use of morphological cues, presence of grammatical impairment, pre-treatment scores on a test for noun comprehension, and frequency of treatment sessions. Conclusion: Improvement in the production of treated verbs occurs frequently. It may depend on restoring access to and/or knowledge of lexeme representations, and requires relative sparing of semantic knowledge (as measured by verb comprehension) and phonological output abilities (including working memory, as measured by word repetition). 
Improvement in the production of untreated verbs has not been reported very often. It may depend on the nature of impaired language representations, and the type of knowledge engaged by treatment: it is more likely to occur where abstract features (semantic and/or grammatical) are damaged and treated.
Rapid and automatic processing of grammatical complexity is argued to take place during speech comprehension, engaging a left-lateralized fronto-temporal language network. Here we address how neural activity in these regions is modulated by the grammatical properties of spoken words. We used combined magneto- and electroencephalography to delineate the spatiotemporal patterns of activity that support the recognition of morphologically complex words in English with inflectional (-s) and derivational (-er) affixes (e.g., bakes, baker). The mismatch negativity, an index of linguistic memory traces elicited in a passive listening paradigm, was used to examine the neural dynamics elicited by morphologically complex words. Results revealed an initial peak 130-180 ms after the deviation point with a major source in left superior temporal cortex. The localization of this early activation showed a sensitivity to two grammatical properties of the stimuli: (1) the presence of morphological complexity, with affixed words showing increased left-laterality compared to non-affixed words; and (2) the grammatical category, with affixed verbs showing greater left-lateralization in inferior frontal gyrus compared to affixed nouns (bakes vs. beaks). This automatic brain response was additionally sensitive to semantic coherence (the meaning of the stem vs. the meaning of the whole form) in left middle temporal cortex. These results demonstrate that the spatiotemporal pattern of neural activity in spoken word processing is modulated by the presence of morphological structure, predominantly engaging the left-hemisphere's fronto-temporal language network, and does not require focused attention on the linguistic input.
Previous research has pointed out that combined orthographic and semantic-associative training is a more advantageous strategy for the lexicalization of novel written word-forms than orthographic training alone. However, the paradigms used previously involve explicit stimulus categorization (lexical decision), which likely influences word learning. In the present study, we used a more automatic task (silent reading) to determine the advantage of the associative training, by comparing the brain electrical signals elicited in combined (orthographic and semantic) and single (only orthographic) training conditions. In addition, the learning effect (in terms of similar neurophysiological activity between novel and known words) was also tested under a categorization paradigm, enabling determination of the possible influence of the training task on the lexicalization process. Results indicated that novel words repeatedly associated with meaningful cues showed a greater attenuation of N400 responses than those trained in the single orthographic condition, confirming the greater facilitation of the lexico-semantic processing of these stimuli as a consequence of semantic associations. Moreover, only when the combined training was carried out in the reading task did novel words show N400 responses similar to those elicited by known words, suggesting the achievement of lexical processing similar to that of known words. Crucially, when the training was carried out under a demanding task context (lexical decision), known words exhibited a positive enhancement within the N400 time window, which maintained N400 differences with novel trained words and confounded the outcome of the learning. Such a deflection, compatible with modulation of the categorization-related P300 component, suggests that novel word learning could be influenced by the activation of categorization-related processes.
Thus, the use of low-demand tasks emerges as a more appropriate approach to studying novel word learning, enabling the build-up of mental representations, which probably depends on purely lexical and semantic factors rather than being guided by categorization demands.
A 40-year-old, non-aphasic, right-handed, and polyglot (L1: French, L2: Dutch, and L3: English) woman with a 12-year history of addiction to opiates and psychoactive substances, and clear psychiatric problems, presented with a foreign accent of sudden onset in L1. Speech evolved toward a mostly fluent output, despite a stutter-like behavior and a marked grammatical output disorder. The psychogenic etiology of the accent foreignness was construed based on the patient’s complex medical history and psychodiagnostic, neuropsychological, and neurolinguistic assessments. The presence of a foreign accent was affirmed by a perceptual accent rating and attribution experiment. It is argued that this patient provides additional evidence demonstrating the outdatedness of Whitaker’s (1982) definition of foreign accent syndrome, as only one of the four operational criteria was unequivocally applicable to our patient: her accent foreignness was not only recognized by her relatives and the medical staff but also by a group of native French-speaking laymen. However, our patient defied the three remaining criteria, as central nervous system damage could not conclusively be demonstrated, psychodiagnostic assessment raised the hypothesis of a conversion disorder, and the patient was a polyglot whose newly gained accent was associated with a range of foreign languages, which exceeded the ones she spoke.
One of the most important issues in the study of cognition is to understand which factors determine the internal representation of the external world. Previous literature has begun to highlight the impact of low-level sensory features (indexed by saliency maps) in driving attentional selection, thereby increasing the probability that objects presented in complex, natural scenes are successfully encoded into working memory (WM) and then correctly remembered. Here we asked whether the probability of retrieving high-saliency objects modulates the overall contents of WM by decreasing the probability of retrieving other, lower-saliency objects. We presented pictures of natural scenes for 4 s. After a retention period of 8 s, we asked participants to verbally report as many objects/details of the previous scenes as possible. We then computed how often the objects located at the peaks of maximal or minimal saliency in the scene (as indexed by a saliency map; Itti et al., 1998) were recollected by participants. Results showed that maximal-saliency objects were recollected more often, and earlier in the stream of successfully reported items, than minimal-saliency objects. This indicates that bottom-up sensory salience increases recollection probability and facilitates access to memory representations at retrieval, respectively. Moreover, recollection of the maximal- (but not the minimal-) saliency objects predicted the overall number of successfully recollected objects: the higher the probability of having successfully reported the most salient object in the scene, the lower the number of recollected objects. These findings highlight that bottom-up sensory saliency modulates the current contents of WM during recollection of objects from natural scenes, most likely by reducing the resources available to encode and then retrieve other (lower-saliency) objects.
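Saliency maps in the tradition of Itti et al. (1998) combine center-surround contrasts across several feature channels (intensity, color, orientation) and spatial scales. As a minimal illustration of the center-surround principle only, not the full Itti-Koch model used in the study, the following Python sketch computes a single-channel intensity saliency map via a difference of Gaussians; the two blur scales are arbitrary illustrative values:

```python
import numpy as np

def _blur(img, sigma):
    # separable Gaussian blur implemented with 1-D convolutions
    r = int(3 * sigma)
    t = np.arange(-r, r + 1)
    k = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    rows = np.apply_along_axis(np.convolve, 1, img, k, mode='same')
    return np.apply_along_axis(np.convolve, 0, rows, k, mode='same')

def saliency_map(img):
    """Toy center-surround intensity saliency (difference of Gaussians),
    normalized to [0, 1]. Regions that differ strongly from their local
    surround receive high values."""
    center = _blur(img, 2.0)      # fine scale
    surround = _blur(img, 8.0)    # coarse scale
    s = np.abs(center - surround)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)
```

Given such a map for each scene, the analysis above reduces to locating the global maximum and minimum of the map and checking whether the objects at those locations appear in the participants' verbal reports.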