Total publications in this section: 4
Article
Chernyshev B. V. Brain Research. 1998. Vol. 793. P. 79-94.
Added: Jan 18, 2010
Article
Galli G., Feurra M., Viggiano M. P. Brain Research. 2006. Vol. 1119. No. 1. P. 190-202.

Face recognition emerges from an interaction between bottom-up and top-down processing. Specifically, it relies on complex associations between the visual representation of a given face and previously stored knowledge about that face (e.g., biographical details). In the present experiment, the time course of the interaction between bottom-up and top-down processing was investigated using event-related potentials (ERPs) while manipulating realistic, ecological contextual information. In the study phase, half of the faces (context faces) were framed within a newspaper page whose headline described an action committed by the person depicted; these actions could have a positive or a negative value, allowing emotional valence to be manipulated. The other half were presented on a neutral background (no-context faces). In the test phase, previously presented faces and new ones were shown on neutral backgrounds and an old/new discrimination was requested. The N170 component was modulated by both context (presence/absence at encoding) and valence (positive/negative): a reduction in amplitude was found for context faces as opposed to no-context faces, and the same pattern was observed for negative faces compared to positive ones. Moreover, later activations associated with context and valence were differentially distributed over the scalp: context effects were prominent in left frontal areas, traditionally linked to person-specific information retrieval, whereas valence effects were broadly distributed over the scalp. In relation to recent neuroimaging findings on the neural basis of top-down modulations, the present findings indicate that the information flow from higher-order areas might have modulated the N170 component and mediated the retrieval of semantic information pertaining to the study episode.

Added: Sep 13, 2015
Article
Tonevitsky A., Kononenko O., Mityakina I. et al. Brain Research. 2018. Vol. 1695. P. 78-83.

The endogenous opioid system (EOS) controls the processing of nociceptive stimuli and is a pharmacological target for opioids. Alterations in the expression of EOS genes under neuropathic pain conditions may account for the low efficacy of opioid drugs. We examined whether EOS expression patterns are altered in the lumbar spinal cord of rats with spinal nerve ligation (SNL) as a neuropathic pain model. The effects of left- and right-side SNL on the expression of EOS genes in the ipsi- and contralateral spinal domains were analysed. The SNL-induced changes were complex and differed between the genes, between the dorsal and ventral spinal domains, and between the left and right sides of the spinal cord. Prodynorphin (Pdyn) expression was upregulated in the ipsilateral dorsal domain by both left- and right-side SNL, whereas changes in the expression of the μ-opioid receptor (Oprm1) and proenkephalin (Penk) genes depended on the side of the SNL. Changes in the expression of the Pdyn and κ-opioid receptor (Oprk1) genes were coordinated between the ipsi- and contralateral sides. Withdrawal response thresholds, indicators of mechanical allodynia, correlated negatively with Pdyn expression in the right ventral domain after right-side SNL. These findings suggest multiple roles of the EOS gene products in spinal sensitization and changes in motor reflexes, which may differ between the left and right sides.

Added: Oct 30, 2018
Article
Klucharev V., Möttönen R., Sams M. Cognitive Brain Research. 2003. Vol. 18. No. 1. P. 65-75.

We studied the interactions in the neural processing of auditory and visual speech by recording event-related brain potentials (ERPs). Unisensory (auditory, A, and visual, V) and audiovisual (AV) vowels were presented to 11 subjects. AV vowels were phonetically either congruent (e.g., acoustic /a/ and visual /a/) or incongruent (e.g., acoustic /a/ and visual /y/). ERPs to AV stimuli and the sum of the ERPs to A and V stimuli (A+V) were compared. Similar ERPs to AV and A+V were hypothesized to indicate independent processing of the A and V stimuli; differences, on the other hand, would suggest AV interactions. Three deflections, the first peaking at about 85 ms after A stimulus onset, were significantly larger in the ERPs to A+V than in the ERPs to both congruent and incongruent AV stimuli. We suggest that these differences reflect AV interactions in the processing of general, non-phonetic features shared by the acoustic and visual stimuli (spatial location, coincidence in time). The first difference between the ERPs to incongruent and congruent AV vowels peaked at 155 ms after A stimulus onset. This and two later differences are suggested to reflect interactions at the phonetic level. The early general AV interactions probably reflect modified activity in the sensory-specific cortices, whereas the later phonetic AV interactions are likely generated in the heteromodal cortices. Thus, our results suggest that sensory-specific and heteromodal brain regions participate in AV speech integration at separate latencies and are sensitive to different features of A and V speech stimuli.

Added: Mar 13, 2015