A new videotest for measuring emotion recognition ability
Previous work shows that mood congruence and trait congruence effects can be obtained (Chepenik et al., 2007; Rusting, 1998). The present study explores the effect of emotional state and dispositional joy on the effectiveness of emotion recognition from facial expressions. The experimental study was conducted with two groups of subjects; the overall sample consisted of 39 participants. Participants' emotional state was measured with the self-report questionnaire PANAS. Participants' current mood was manipulated with an emotion induction procedure that involved screening a video with "joyful" or "neutral" emotional coloring. To measure the speed of emotional information processing, a computerized task was used in which participants recognized emotions from facial expressions. We tested the hypothesis that there is a congruency effect in the processing of positive information: it was supposed that a positive emotional state and dispositional joy increase the speed of positive information processing and do not influence the processing of stimuli with negative emotional coloring. A manipulation check showed the emotion induction procedure to be partially successful. A congruency effect for dispositional joy was obtained: higher manifestation of this trait was related to higher speed in recognizing joy from facial expressions. The influence of a positive emotional state, by contrast, was manifested in lower speed in recognizing joy. In sum, the results show that the congruency effect is expressed differently for the trait and for the emotional state. Overall, the results of the study provide information on the mechanisms of emotion recognition.
The paper focuses on the way one's own emotional state influences the recognition of other people's emotions. Existing research indicates an effect of congruence between the emotions experienced at the moment and the evaluations of emotional stimuli. Our experimental study tested hypotheses about the influence of emotional states on two aspects of emotion recognition, accuracy and sensitivity: we hypothesized that the observer's emotional state reduces accuracy and increases sensitivity. The study involved 69 participants divided into three groups. The baseline emotional state was assessed using a self-report measure. We used video clips with neutral, positive, and negative emotional content to induce different emotional states in each group. The accuracy and sensitivity of emotion recognition were measured using a test based on video samples of people's behavior in different situations. The results showed that the emotional state in the control group was rather "tense" and differed from neutral. However, our hypotheses were not supported: the groups with different induced emotional states did not exhibit any significant differences in the accuracy of emotion recognition, and the control group demonstrated higher sensitivity. These preliminary results are discussed in the context of methodological issues in emotion recognition research (such as emotion induction, assessment of emotions, and the differentiation of emotional states and traits).
In this paper we consider the problem of automatic emotion recognition, specifically from digital audio signals. We consider and verify an approach in which the classification of a sound fragment is reduced to a problem of image recognition. The waveform and the spectrogram are used as visual representations of the audio. The computational experiment was conducted on the open RAVDESS dataset, which includes 8 different emotions: "neutral", "calm", "happy", "sad", "angry", "scared", "disgust", and "surprised". The best accuracy result was 64%, produced by the combination "spectrogram + convolutional neural network VGG-11".
Studies of emotion recognition ability do not give an exact answer to the question of whether it is a single general ability. We attempt to fill this gap using three types of stimuli: human behavior, music, and non-musical sound stimuli. We propose distinguishing two aspects of emotion recognition ability. One aspect is accuracy of emotion modality recognition, i.e. the ability to correctly recognize the modality of an emotional state. The other is emotion sensitivity, i.e. a bias in emotion perception such that the intensity of emotions is "overperceived" or "underperceived". The hypothesis that sensitivity is a general component and accuracy is a specific component of emotion recognition ability was partly confirmed.
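The two aspects distinguished above can be operationalized straightforwardly. The sketch below is illustrative only: the rating data are invented, and the scoring rules (proportion correct for accuracy, mean signed intensity error for sensitivity) are one plausible reading of the abstract, not the authors' actual scoring procedure.

```python
import numpy as np

# Hypothetical trial data: for each stimulus, the target modality and
# intensity (1-5 scale, invented) versus the participant's judgment.
true_modality   = ["joy", "sadness", "anger", "joy", "fear"]
judged_modality = ["joy", "sadness", "joy",   "joy", "fear"]
true_intensity   = np.array([3, 4, 5, 2, 4])
judged_intensity = np.array([4, 5, 4, 3, 5])

# Accuracy: proportion of correctly recognized modalities.
accuracy = np.mean([t == j for t, j in zip(true_modality, judged_modality)])

# Sensitivity as described in the abstract: a signed bias in perceived
# intensity; positive = "overperceiving", negative = "underperceiving".
sensitivity = np.mean(judged_intensity - true_intensity)

print(accuracy)     # 0.8
print(sensitivity)  # 0.6
```

Separating the two scores like this is what allows one to be a general component (correlated across stimulus types) while the other remains stimulus-specific.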
In this paper we consider the problem of automatic emotion recognition, specifically from digital audio signals. We consider and verify a straightforward approach in which the classification of a sound fragment is reduced to a problem of image recognition. The waveform and the spectrogram are used as visual representations of the audio. The computational experiment was conducted on the open RAVDESS dataset, which includes 8 different emotions: "neutral", "calm", "happy", "sad", "angry", "scared", "disgust", and "surprised". Our best accuracy result, 71%, was produced by the combination "mel-spectrogram + convolutional neural network VGG-16".
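The core step of this approach is turning a 1-D audio signal into a 2-D time-frequency "image" that a standard image classifier such as VGG can consume. A minimal NumPy sketch of that step is given below; the frame length, hop size, and synthetic test tone are illustrative choices, not the parameters used in the experiments, and the CNN itself is not included.

```python
import numpy as np

def spectrogram(signal, frame_len=512, hop=256):
    """Magnitude spectrogram via a short-time FFT.

    Each column is the spectrum of one Hann-windowed frame,
    so the result can be treated as a grayscale image."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # rfft keeps only non-negative frequencies: frame_len // 2 + 1 bins
    return np.abs(np.fft.rfft(frames, axis=1)).T  # (freq_bins, n_frames)

# Synthetic one-second "utterance": a 440 Hz tone at a 16 kHz sample rate
sr = 16000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t)

img = spectrogram(audio)
print(img.shape)  # (257, 61): a 2-D array ready for an image CNN
```

In practice the magnitudes would typically be converted to a log/decibel scale (and, for the mel-spectrogram variant, projected onto a mel filterbank) before being resized to the CNN's input resolution.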