Article
Volkova K., Lebedev M., Kaplan A. et al. Frontiers in Neuroinformatics. 2019. Vol. 13. P. 1-20.

Electrocorticography (ECoG) holds promise to provide efficient neuroprosthetic solutions for people suffering from neurological disabilities. This recording technique combines adequate temporal and spatial resolution with a lower risk of medical complications than other invasive methods. ECoG is routinely used in clinical practice for preoperative cortical mapping in epileptic patients. Over the last two decades, research utilizing ECoG has grown considerably, including paradigms where behaviorally relevant information is extracted from ECoG activity with decoding algorithms of varying complexity. Several research groups have advanced toward the development of assistive devices driven by brain-computer interfaces (BCIs) that decode motor commands from multichannel ECoG recordings. Here we review the evolution of this field and its recent trends, and discuss potential areas for future development.
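To make the decoding pipelines discussed in this review more concrete, here is a minimal sketch of a linear (ridge-regression) decoder that maps multichannel ECoG band-power features to a continuous motor variable. It runs on synthetic data; the sampling rate, channel count, high-gamma band, and frame rate are illustrative assumptions, not the setup of any particular study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.linear_model import Ridge

fs = 1000                               # assumed ECoG sampling rate, Hz
ecog = np.random.randn(60 * fs, 64)     # 60 s of synthetic data, 64 channels
kinematics = np.random.randn(60 * fs)   # synthetic continuous motor variable

# High-gamma band power is a commonly used ECoG feature for motor decoding.
b, a = butter(4, [70 / (fs / 2), 150 / (fs / 2)], btype="band")
power = np.abs(hilbert(filtfilt(b, a, ecog, axis=0), axis=0)) ** 2

# Block-average features and target down to 10 Hz frames.
frame = fs // 10
n = power.shape[0] // frame
X = power[: n * frame].reshape(n, frame, -1).mean(axis=1)
y = kinematics[: n * frame].reshape(n, frame).mean(axis=1)

# Train on the first half of the frames, evaluate on the second half.
decoder = Ridge(alpha=1.0).fit(X[: n // 2], y[: n // 2])
print("held-out R^2:", decoder.score(X[n // 2 :], y[n // 2 :]))
```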

Added: Apr 4, 2020
Article
Volk D., Dubinin I. V., Myasnikova A. et al. Frontiers in Neuroinformatics. 2018. Vol. 12. No. 72.

Perceptual, motor and cognitive processes are based on rich interactions between remote regions in the human brain. Such interactions can be carried out through phase synchronization of oscillatory signals. Neuronal synchronization has been primarily studied within the same frequency range, e.g., within the alpha or beta frequency band. Yet, recent research shows that neuronal populations can also demonstrate phase synchronization between different frequency ranges. Extracting such cross-frequency interactions from EEG/MEG recordings, however, remains methodologically challenging. Here we present a new method for the robust extraction of cross-frequency phase-to-phase synchronized components. Generalized Cross-Frequency Decomposition (GCFD) reconstructs the time courses of synchronized neuronal components, their spatial filters and patterns. Our method extends the previous state of the art, Cross-Frequency Decomposition (CFD), to the whole range of frequencies: it works for any f1 and f2 whenever f1:f2 is a rational number. GCFD gives a compact description of non-linearly interacting neuronal sources on the basis of their cross-frequency phase coupling. We successfully validated the new method in simulations and tested it with real EEG recordings, including resting-state data and steady-state visually evoked potentials (SSVEPs).
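To make the notion of cross-frequency phase-to-phase coupling concrete, here is a minimal sketch that estimates n:m phase locking between two band-passed signals using the Hilbert transform. It illustrates the kind of coupling GCFD targets, not the GCFD decomposition itself; the frequencies, the 1:2 ratio, and the filter settings are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_phase(x, fs, lo, hi):
    """Instantaneous phase of x band-passed to [lo, hi] Hz."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, x)))

fs = 250
t = np.arange(0, 20, 1 / fs)
x = np.cos(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)        # 10 Hz component
y = np.cos(2 * np.pi * 20 * t + 0.3) + 0.1 * np.random.randn(t.size)  # 20 Hz, locked 1:2

n, m = 1, 2                           # f1:f2 = 10:20 Hz, a rational ratio as GCFD requires
phi1 = band_phase(x, fs, 8, 12)
phi2 = band_phase(y, fs, 18, 22)
plv = np.abs(np.mean(np.exp(1j * (m * phi1 - n * phi2))))
print("n:m phase-locking value:", plv)   # close to 1 for phase-coupled signals
```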

Added: Oct 27, 2018
Article
Ossadtchi A., Pronko P. K., Baillet S. et al. Frontiers in Neuroinformatics. 2014. Vol. 7. No. 53. P. 1-11.

Spatial component analysis is often used to explore multidimensional time series data whose sources cannot be measured directly. Several methods may be used to decompose the data into a set of spatial components with temporal loadings. Component selection is of crucial importance and should be supported by objective criteria. In some applications, the use of a well-defined component selection criterion may allow the analysis to be automated. In this paper we describe a novel approach for ranking spatial components calculated from EEG or MEG data recorded within an evoked response paradigm. Our method, called the Mutual Information Spectrum, is based on gauging the amount of mutual information between spatial component temporal loadings and a synthetically created reference signal. We also describe an appropriate randomization-based statistical assessment scheme that can be used to select components with a statistically significant amount of mutual information. Using simulated data with realistic trial-to-trial variations and SNR corresponding to real recordings, we demonstrate the superior performance of the described mutual-information-based measure compared to a more conventionally used power-driven gauge. We also demonstrate the application of the Mutual Information Spectrum to the selection of task-related independent components from real MEG data. We show that the Mutual Information Spectrum identifies task-related components reliably and consistently, yielding stable results even from a small number of trials. We conclude that the proposed method fits naturally with the information-driven nature of ICA and can be used for routine and automatic ranking of independent components calculated from functional neuroimaging data collected within event-related paradigms.
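A minimal sketch of the core idea follows: gauge the mutual information between each component's temporal loadings and a synthetic reference signal, and assess significance with a randomization test. The plug-in histogram MI estimator, the Hanning-window reference, and the sample-permutation null used here are simple stand-ins for illustration, not the authors' exact implementation.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in histogram estimate of mutual information (in nats) between two 1-D signals."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

rng = np.random.default_rng(0)
n_trials, trial_len = 40, 250
reference = np.tile(np.hanning(trial_len), n_trials)     # synthetic evoked-response template
components = {                                           # temporal loadings of two toy components
    "task-related": reference + 0.5 * rng.standard_normal(reference.size),
    "noise only": rng.standard_normal(reference.size),
}

for name, loading in components.items():
    mi = mutual_information(loading, reference)
    # Randomization test: permuting the loading in time destroys any real dependence.
    null = [mutual_information(rng.permutation(loading), reference) for _ in range(200)]
    p = (np.sum(np.array(null) >= mi) + 1) / (len(null) + 1)
    print(f"{name}: MI = {mi:.3f} nats, p = {p:.3f}")
```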

Added: Jan 19, 2014
Article
Smetanin N., Volkova K., Zabodaev S. et al. Frontiers in Neuroinformatics. 2018. Vol. 12. No. 100. P. 1-18.

Neurofeedback (NFB) is a real-time paradigm in which subjects learn to volitionally modulate their own brain activity, recorded with electroencephalographic (EEG), magnetoencephalographic (MEG) or other functional brain imaging techniques and presented to them via one of several sensory modalities: visual, auditory or tactile. NFB has been proposed as an approach to treat neurological conditions and augment brain functions. Although the early NFB studies date back nearly six decades, there is still much debate regarding the efficiency of this approach and the ways it should be implemented. Part of the existing controversy is due to the suboptimal conditions under which NFB training is undertaken. Therefore, new experimental tools attempting to provide optimal or close-to-optimal training conditions are needed to further the exploration of NFB paradigms and the comparison of their effects across subjects and training days. To this end, we have developed NFBLab, a versatile, open-source, Python-based software package for conducting NFB experiments with completely reproducible paradigms and low-latency feedback presentation. Complex experimental protocols can be configured using the GUI and saved in NFBLab's internal XML-based language, which describes signal processing tracts, experimental blocks and block sequences, including randomization of experimental blocks. NFBLab implements interactive modules that enable the specification of individualized EEG/MEG signal processing tracts using spatial and temporal filters for feature selection and artifact removal. NFBLab supports direct interfacing to MNE-Python software to facilitate source-space NFB based on individual head models and properly tailored individual inverse solvers. In addition to the standard algorithms for extracting brain rhythm dynamics from EEG and MEG data, NFBLab implements several novel in-house signal processing algorithms that afford a significant reduction in feedback presentation latency and may potentially improve training effects. The software also supports several standard BCI paradigms. To interface with external data acquisition devices, NFBLab employs the Lab Streaming Layer protocol supported by the majority of EEG vendors. MEG devices are interfaced through the FieldTrip buffer.
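For orientation, here is a minimal sketch of the kind of real-time loop such software automates: pull EEG samples over Lab Streaming Layer, extract an alpha-band signal with a stateful temporal filter, and smooth its magnitude into a feedback value. This is a generic pylsl/SciPy sketch, not NFBLab's actual API; the sampling rate, channel index, band, and smoothing constant are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt, sosfilt_zi
from pylsl import StreamInlet, resolve_byprop   # pip install pylsl

fs = 250                       # assumed sampling rate of the EEG stream
channel = 0                    # assumed occipital channel for alpha training
sos = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band", output="sos")
zi = sosfilt_zi(sos) * 0.0     # filter state carried across chunks

streams = resolve_byprop("type", "EEG", timeout=10)   # find an EEG stream on the network
inlet = StreamInlet(streams[0])

envelope = 0.0
while True:
    chunk, _ = inlet.pull_chunk(timeout=1.0, max_samples=fs // 10)
    if not chunk:
        continue
    x = np.asarray(chunk)[:, channel]
    alpha, zi = sosfilt(sos, x, zi=zi)                # stateful 8-12 Hz band-pass
    for sample in np.abs(alpha):                      # leaky integrator as a crude power envelope
        envelope = 0.95 * envelope + 0.05 * sample
    print(f"feedback value: {envelope:.3f}")          # replace with a visual/auditory display
```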

Added: Dec 13, 2018
Article
Musegaas M., Dietzenbacher B., Borm P. Frontiers in Neuroinformatics. 2016. Vol. 10. No. 51. P. 1-5.
Added: Oct 12, 2018
Article
Combrisson E., Vallat R., O'Reilly C. et al. Frontiers in Neuroinformatics. 2019. Vol. 13. P. 1-14.

We present Visbrain, a Python open-source package that offers a comprehensive visualization suite for neuroimaging and electrophysiological brain data. Visbrain consists of two levels of abstraction: (1) objects, which represent highly configurable neuro-oriented visual primitives (3D brain, sources, connectivity, etc.), and (2) graphical user interfaces for higher-level interactions. The object level offers flexible and modular tools to produce and automate the production of figures using an approach similar to that of Matplotlib with subplots. The second level visually connects these objects by controlling properties and interactions through graphical interfaces. The current release of Visbrain (version 0.4.2) contains 14 different objects and three responsive graphical user interfaces built with PyQt: Signal, for the inspection of time series and spectral properties; Brain, for any type of visualization involving a 3D brain; and Sleep, for polysomnographic data visualization and sleep analysis. Each module has been developed in tight collaboration with end-users, i.e., primarily neuroscientists and domain experts, who bring their experience to make Visbrain as transparent as possible to the recording modalities (e.g., intracranial EEG, scalp EEG, MEG, anatomical and functional MRI). Visbrain is developed on top of VisPy, a Python package providing high-performance 2D and 3D visualization by leveraging the computational power of the graphics card. Visbrain is available on GitHub and comes with documentation, examples, and datasets (http://visbrain.org).
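As a small illustration of the object-level API described above, the sketch below composes a template brain and a set of sources into a scene with a Matplotlib-like subplot layout. It is based on the object API as documented for the 0.4 release (BrainObj, SourceObj, SceneObj); the template name, coordinates, and layout are illustrative assumptions, so consult the current documentation before relying on exact signatures.

```python
import numpy as np
from visbrain.objects import BrainObj, SourceObj, SceneObj   # pip install visbrain

# A translucent template brain plus a handful of random "sources" (illustrative coordinates only).
b_obj = BrainObj("B1", translucent=True)
xyz = np.random.uniform(-50, 50, (20, 3))
s_obj = SourceObj("sources", xyz, color="red")

# Objects are composed into a scene using a subplot-style layout, then previewed interactively.
sc = SceneObj(bgcolor="white", size=(800, 600))
sc.add_to_subplot(b_obj, row=0, col=0, title="3D brain with sources")
sc.add_to_subplot(s_obj, row=0, col=0)
sc.preview()
```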

Added: Nov 1, 2019