In recent years, several assistive devices have been proposed to reconstruct arm and hand movements from electromyographic (EMG) activity. Although simple to implement and potentially useful for augmenting many functions, such myoelectric devices still need improvement before they become practical. Here we considered the problem of reconstructing handwriting from multichannel EMG activity. Previously, linear regression methods (e.g., the Wiener filter) have been utilized for this purpose with some success. To improve reconstruction accuracy, we implemented the Kalman filter, which fuses two sources of information: the physical characteristics of handwriting and the activity of the leading hand muscles, recorded via EMG. Applying the Kalman filter, we were able to convert eight channels of EMG activity recorded from the forearm and hand muscles into smooth reconstructions of handwritten traces. The filter operates causally, acting as a true predictor that uses only past EMG samples, which makes the approach suitable for real-time operation. Our algorithm is appropriate for clinical neuroprosthetic applications and computer peripherals. Moreover, it is applicable to a broader class of tasks where predictive myoelectric control is needed.
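As an illustration, a causal Kalman filter of the kind described above can be sketched in a few lines. This is a minimal sketch under stated assumptions, not the paper's actual implementation: the 4-dimensional kinematic state, constant-velocity dynamics, and the random linear EMG observation model are all illustrative placeholders for the fitted matrices a real system would use.

```python
import numpy as np

def kalman_filter(emg, A, H, Q, R, x0, P0):
    """Causal Kalman filter: estimate pen kinematics from multichannel EMG.

    emg: (T, n_channels) observations; A: state dynamics (handwriting model);
    H: maps the kinematic state to expected EMG; Q, R: process/measurement noise.
    Only past and current EMG samples are used, so the filter is real-time capable.
    """
    x, P = x0.copy(), P0.copy()
    n = x0.size
    estimates = np.zeros((emg.shape[0], n))
    for t, z in enumerate(emg):
        # Predict: propagate the handwriting dynamics model.
        x = A @ x
        P = A @ P @ A.T + Q
        # Update: correct the prediction with the current EMG sample.
        S = H @ P @ H.T + R               # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(n) - K @ H) @ P
        estimates[t] = x
    return estimates

# Illustrative setup: state (x, y, vx, vy), 8 EMG channels, constant-velocity dynamics.
dt = 0.01
A = np.eye(4); A[0, 2] = A[1, 3] = dt
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 4))           # hypothetical linear EMG model
emg = rng.standard_normal((100, 8))       # stand-in for recorded EMG
traj = kalman_filter(emg, A, H, 1e-4 * np.eye(4), np.eye(8),
                     np.zeros(4), np.eye(4))
```

The predict step encodes the physical characteristics of handwriting (smooth pen trajectories), while the update step injects the muscle information, which is how the two sources are fused.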
Kaufman et al. recently proposed a hypothesis of how cortical neuronal ensembles prepare movements without initiating them prematurely (Kaufman et al., 2014). Although novel and potentially paradigm-shifting, their model appears to contradict some of the previously reported results. Here I discuss several possible reasons for this contradiction.
Kaufman et al. recorded from neuronal populations in dorsal premotor cortex (PMd) and primary motor cortex (M1) in monkeys performing center-out arm reaching movements with straight and curved trajectories. The experimental task incorporated a delay period during which monkeys could see the target but were required to withhold movement until a trigger stimulus (Figure 1A). Kaufman et al. asked how it was possible that M1 and PMd, known to project to the spinal cord and to each other (Dum and Strick, 2002), modulated their activity in a time- and direction-dependent manner during the delay but did not induce EMG responses. While the standard explanation has been that delay-period cortical activity is a subthreshold version of movement activity (Tanji and Evarts, 1976; Weinrich and Wise, 1982; Alexander and Crutcher, 1990; Riehle and Requin, 1995; Prut and Fetz, 1999), Kaufman et al. proposed an alternative explanation. They asserted that delay-period cortical modulations were confined to a null space with respect to the linear transformation that mapped neuronal activity into movements.
This highly cited paper by Ganguly and Carmena (2009) reported a case of neuroplasticity associated with the operation of a brain-machine interface (BMI). Neuroplasticity is of great interest to BMI developers because of its causal role in the embodiment of neural prostheses (Lebedev and Nicolelis, 2006; Dobkin, 2007; Koralek et al., 2012; Shenoy and Carmena, 2014; Kraus et al., 2016; Gulati et al., 2017). Ganguly and Carmena reported that small populations of neurons (from 10 to 15) recorded in monkey primary motor cortex (M1) adapted to operating a BMI based on a fixed linear decoder. The decoder was trained once and left unchanged for several weeks. The population activity patterns underwent plastic modifications and stabilized on an optimal “cortical map” that assured accurate performance of center-out movements with a screen cursor. Moreover, monkeys learned to operate shuffled decoders, where the original neuronal weights were randomly reassigned. Here I comment on three issues arising from this paper: (1) the proper way to assess neuronal tuning under BMI control; (2) the constraints imposed on neuronal tuning properties by a fixed decoder; and (3) the problem of measuring changes in tuning when both neuronal activity and cursor trajectories change.
Although MEG/EEG signals are highly variable between subjects, they allow characterizing systematic changes of cortical activity in both space and time. Traditionally, a two-step procedure is used. The first step is a transition from sensor to source space by means of solving an ill-posed inverse problem for each subject individually. The second is mapping of cortical regions consistently active across subjects. In practice, the first step often yields a set of active cortical regions whose locations and timecourses display a great amount of interindividual variability, hindering the subsequent group analysis. We propose Group Analysis Leads to Accuracy (GALA)—a solution that combines the two steps into one. GALA takes advantage of individual variations in cortical geometry and sensor locations. It exploits the ensuing variability in the electromagnetic forward models as a source of additional information. We assume that, for different subjects, functionally identical cortical regions are located in close proximity, partially overlap, and have correlated timecourses. This relaxed similarity constraint on the inverse solution can be expressed within a probabilistic framework, allowing for an iterative algorithm that solves the inverse problem jointly for all subjects. A systematic simulation study showed that GALA, as compared with the standard min-norm approach, improves the accuracy of true activity recovery, when accuracy is assessed both in terms of spatial proximity of the estimated and true activations and in terms of correct specification of the spatial extent of the activated regions. This improvement, obtained without applying noise normalization to either solution, was preserved across a wide range of between-subject variations in both spatial and temporal features of regional activation. The corresponding activation timecourses exhibited significantly higher similarity across subjects. Similar results were obtained for a real MEG dataset of face-specific evoked responses.
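For reference, the standard minimum-norm baseline against which GALA is compared can be written compactly. The sketch below is a generic Tikhonov-regularized minimum-norm estimate for a single subject, not GALA's joint multi-subject solver; the lead-field matrix, measurements, and regularization value are illustrative assumptions.

```python
import numpy as np

def min_norm_inverse(L, m, lam=1e-2):
    """Tikhonov-regularized minimum-norm inverse: s = L^T (L L^T + lam I)^{-1} m.

    L: (n_sensors, n_sources) forward (lead-field) matrix for one subject.
    m: (n_sensors,) or (n_sensors, n_times) MEG/EEG measurements.
    Returns the source estimate minimizing ||m - L s||^2 + lam * ||s||^2.
    """
    gram = L @ L.T + lam * np.eye(L.shape[0])  # regularized sensor-space Gram matrix
    return L.T @ np.linalg.solve(gram, m)

# Illustrative, ill-posed setting: far more sources than sensors.
rng = np.random.default_rng(1)
L = rng.standard_normal((64, 500))        # hypothetical lead field
s_true = np.zeros(500); s_true[10:20] = 1.0
m = L @ s_true + 0.01 * rng.standard_normal(64)
s_hat = min_norm_inverse(L, m)
```

Because each subject is solved in isolation here, the estimate ignores the cross-subject similarity that GALA's probabilistic framework exploits; GALA effectively replaces the uniform ||s||^2 penalty with a prior coupling homologous regions across subjects.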
The Internet comprises a decentralized global system that serves humanity's collective effort to generate, process, and store data, most of which is handled by the rapidly expanding cloud. A stable, secure, real-time system may allow for interfacing the cloud with the human brain. One promising strategy for enabling such a system, denoted here as a "human brain/cloud interface" ("B/CI"), would be based on technologies referred to here as "neuralnanorobotics." Future neuralnanorobotics technologies are anticipated to facilitate accurate diagnoses and eventual cures for the ∼400 conditions that affect the human brain. Neuralnanorobotics may also enable a B/CI with controlled connectivity between neural activity and external data storage and processing, via the direct monitoring of the brain's ∼86 × 10⁹ neurons and ∼2 × 10¹⁴ synapses. Subsequent to navigating the human vasculature, three species of neuralnanorobots (endoneurobots, gliabots, and synaptobots) could traverse the blood-brain barrier (BBB), enter the brain parenchyma, ingress into individual human brain cells, and autoposition themselves at the axon initial segments of neurons (endoneurobots), within glial cells (gliabots), and in intimate proximity to synapses (synaptobots). They would then wirelessly transmit up to ∼6 × 10¹⁶ bits per second of synaptically processed and encoded human-brain electrical information via auxiliary nanorobotic fiber optics (30 cm³) with the capacity to handle up to 10¹⁸ bits/sec and provide rapid data transfer to a cloud-based supercomputer for real-time brain-state monitoring and data extraction. A neuralnanorobotically enabled human B/CI might serve as a personalized conduit, allowing persons to obtain direct, instantaneous access to virtually any facet of cumulative human knowledge. Other anticipated applications include myriad opportunities to improve education, intelligence, entertainment, traveling, and other interactive experiences.
A specialized application might be the capacity to engage in fully immersive experiential/sensory experiences, including what is referred to here as "transparent shadowing" (TS). Through TS, individuals might experience episodic segments of the lives of other willing participants (locally or remote) to, hopefully, encourage and inspire improved understanding and tolerance among all members of the human family.
Top-down processing is a mechanism by which memory, context, and expectation are used to perceive stimuli. In this study we investigated how emotional content, induced by music mood, influences the perception of happy and sad emoticons. Using single-pulse TMS, we stimulated the right occipital face area (rOFA), primary visual cortex (V1), and the vertex while subjects performed a face-detection task and listened to happy and sad music. At baseline, incongruent audio-visual pairings decreased performance, demonstrating that emotion influences the perception of ambiguous faces. Face-identification performance decreased during rOFA stimulation regardless of emotional content, whereas no effects were found for vertex or V1 stimulation. These results suggest that rOFA is important for processing faces regardless of emotion, and that early visual cortex activity may not integrate emotional auditory information with visual information during top-down emotional modulation of face perception.
Humans often adjust their opinions to the perceived opinions of others. Neural responses to a perceived match or mismatch between individual and group opinions have been investigated previously, but some findings are inconsistent. In this study, we used magnetoencephalographic source imaging to further investigate neural responses to the perceived opinions of others. We found that group opinions mismatching with individual opinions evoked responses in the anterior and posterior medial prefrontal cortices, as well as in the temporoparietal junction and ventromedial prefrontal cortex, in the 220–320 and 380–530 ms time windows. Evoked responses were accompanied by an increase in the power of theta oscillations (4–8 Hz) over a number of frontal cortical sites. Group opinions matching with individual opinions evoked an increase in the amplitude of beta oscillations (13–30 Hz) in the anterior cingulate and ventral medial prefrontal cortices. Based on these results, we argue that distinct valuation and performance-monitoring neural circuits in the medial cortices of the brain may monitor compliance of individual behavior with perceived group norms.
This is an opinion paper without a special abstract.
Human speech requires that new words are routinely memorized, yet the neurocognitive mechanisms of such memory acquisition remain highly debated. A major controversy concerns whether cortical plasticity related to word learning occurs in neocortical speech-related areas immediately after learning, or whether neocortical plasticity emerges only on the following day, after a prolonged period required for consolidation. The functional spatiotemporal pattern of cortical activity related to such learning also remains largely unknown. To address these questions, we examined magnetoencephalographic responses elicited in the cerebral cortex by passive presentations of eight novel pseudowords before and immediately after an operant conditioning task. This associative procedure forced participants to perform an active search for the unique meanings of four pseudowords that referred to movements of the left and right hands and feet. The other four pseudowords did not require any movement and thus were not associated with any meaning. Familiarization with the novel pseudowords led to a bilateral repetition suppression of cortical responses to them; the effect started before or around the uniqueness point and lasted for more than 500 ms. After learning, the response amplitude to pseudowords that acquired meaning was greater than that to pseudowords that were not assigned meaning; the effect was significant within 144–362 ms after the uniqueness point, and it was found only in the left hemisphere. Within this time interval, a learning-related selective response initially emerged in cortical areas surrounding the Sylvian fissure: the anterior superior temporal sulcus, ventral premotor cortex, the anterior part of the intraparietal sulcus, and the insula.
Later within this interval, activation additionally spread to more anterior higher-tier brain regions, reaching the left temporal pole and the triangular part of the left inferior frontal gyrus, extending to its orbital part. Altogether, the current findings provide evidence of rapid plastic changes in cortical representations of meaningful auditory word-forms occurring almost immediately after learning. Additionally, our results suggest that familiarization resulting from stimulus repetition and semantic acquisition resulting from an active learning procedure have separable effects on cortical activity.