Emotion Identification From TQWT-Based EEG Rhythms

Aditya Nalwaya, Kritiprasanna Das, Ram Bilas Pachori
Copyright: © 2022 | Pages: 22
DOI: 10.4018/978-1-6684-3947-0.ch011

Abstract

Electroencephalogram (EEG) signals are recordings of the brain's electrical activity and are commonly used for emotion recognition. Different EEG rhythms carry different neural dynamics. In this chapter, the EEG rhythms are separated using the tunable Q-factor wavelet transform (TQWT). Several features, such as the mean, standard deviation, and information potential, are extracted from the TQWT-based EEG rhythms. Machine learning classifiers are then used to differentiate the various emotional states automatically. The authors have validated the proposed model on a publicly available database. The obtained classification accuracy of 92.9% demonstrates the suitability of the proposed method for emotion identification.
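To make the pipeline concrete, here is a minimal sketch of the rhythm-wise feature extraction and classification stages. Since no TQWT routine ships with SciPy, zero-phase Butterworth band-pass filters stand in for the TQWT rhythm separation purely to keep the sketch runnable; in the chapter's actual method, TQWT sub-bands would be regrouped into the EEG rhythms instead. The band edges, sampling rate, kernel width, and classifier choice are illustrative assumptions, not the chapter's settings.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 128.0  # sampling rate in Hz (assumed)
# Classical EEG rhythm bands in Hz (approximate, assumed)
BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0), "alpha": (8.0, 13.0),
         "beta": (13.0, 30.0), "gamma": (30.0, 45.0)}

def extract_rhythm(x, low, high, fs=FS):
    """Stand-in for TQWT-based rhythm separation: zero-phase band-pass filter."""
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def information_potential(x, sigma=1.0, max_samples=1000):
    """Mean Gaussian kernel over all pairwise sample differences; Renyi's
    quadratic entropy is the negative logarithm of this quantity."""
    x = x[:: max(1, len(x) // max_samples)]  # subsample: the sum is O(N^2)
    d = x[:, None] - x[None, :]
    return float(np.mean(np.exp(-d**2 / (2.0 * sigma**2))))

def feature_vector(x):
    """Mean, standard deviation, and information potential of each rhythm."""
    feats = []
    for low, high in BANDS.values():
        r = extract_rhythm(np.asarray(x, dtype=float), low, high)
        feats += [np.mean(r), np.std(r), information_potential(r)]
    return np.array(feats)

# Usage, given an epoch matrix `epochs` (trials x samples) and emotion
# labels `y` (both hypothetical):
#   X = np.vstack([feature_vector(e) for e in epochs])
#   print(cross_val_score(SVC(), X, y, cv=10).mean())
```

The 10-fold cross-validated SVM is used here only as a generic choice; any of the machine learning classifiers considered in the chapter could be slotted into the last step.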

Introduction

Emotion plays a vital role in human life, as it influences human behavior, mental state, decision making, etc. (Šimić et al., 2021). In human beings, overall intelligence is generally measured by logical and emotional intelligence (Picard et al., 2001; Salovey & Mayer, 1990). In recent years, artificial intelligence (AI) and machine learning (ML) have helped computers achieve higher intelligence, particularly in numerical computing and logical reasoning. However, computers are still limited in their ability to understand, comprehend, and respond to the emotional state of the persons interacting with them. To address these shortcomings, research in the domain of affective computing is ongoing. Affective computing is a field that aims to design machines that can recognize, interpret, process, and simulate the human experience of feeling or emotion. Recognizing a person's emotional state can help a computer interact with humans in a better way.

An emotion recognition system could play an important role in delivering more customized and user-centric information and communications technology solutions. In the absence of emotional intelligence, computers lag in understanding situations and making decisions accordingly. Thus, instead of deciding purely logically, computers can be made aware of the human emotional state and then make decisions. Emotion recognition may also be helpful in upcoming entertainment systems, such as virtual reality systems, for enhancing user experience (Gupta et al., 2020). Emotion recognition systems can also be used to understand the health condition of patients with mental disabilities or of infant patients (Hassouneh et al., 2020). Emotion detection can be used to monitor students' learning and to create personalized educational content (Kołakowska et al., 2014). A software developer can also examine user experience by using an emotion recognition system. Emotion recognition systems have a vast range of applications, including health care, brain-computer interfaces (BCI), education, smart entertainment systems, smart rooms, intelligent cars, psychological studies, etc. (Kołakowska et al., 2014).

Humans reveal emotions through facial expressions, verbal expressions, or several physiological signals, such as variability in heart rate, skin conductance, and brain electrical activity. These signals are generated by the human body in response to the evoked emotion (Egger et al., 2019).

In an emotion recognition system, emotions can be evoked or elicited either passively or actively. In passive emotion elicitation, the subject's emotions are evoked by exposure to targeted elicitation material. Publicly available elicitation materials include the international affective picture system (IAPS) (Lang et al., 1997), a library of photographs used extensively for emotion elicitation, and the Nencki affective picture system (NAPS), another database of visual stimuli. The Montreal affective voices and the international affective digitized sounds (IADS) are acoustic stimulus databases used for passive emotion elicitation (Yang et al., 2018). In active emotion elicitation, subjects are asked to participate actively in a task that leads to emotion elicitation; for example, participants may be asked to play video games (Martínez-Tejada et al., 2021) or to engage in conversation with another participant (Boateng et al., 2020). The elicited emotion can be labelled either through explicit assessment, where the subjects report their own feelings, or through implicit assessment, where the subject's emotional state is evaluated externally by another person. Standard psychological questionnaires used for explicit emotion evaluation, in which subjects answer according to their feelings, include the self-assessment manikin (SAM) (Bradley et al., 1994), the positive and negative affect schedule (PANAS) (Watson et al., 1988), and the differential emotion scale (DES) (Gross & Levenson, 1995). Both implicit and explicit assessment methods are approximate evaluations of the elicitation; therefore, in Correa et al. (2018), both techniques are used in combination to ensure the correctness of the labels. Thus, in order to obtain physiological signals for a targeted emotion, the elicitation stimulus for that emotion must be chosen carefully.
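As a hypothetical illustration of the explicit assessment step, the sketch below thresholds 9-point SAM valence and arousal self-ratings at the scale midpoint to obtain the four valence-arousal quadrant labels commonly used in such studies; the midpoint threshold and label names are assumptions, not taken from the chapter.

```python
def sam_quadrant(valence, arousal, midpoint=5.0):
    """Map 9-point SAM self-ratings to a valence-arousal quadrant label."""
    v = "HV" if valence > midpoint else "LV"  # high/low valence
    a = "HA" if arousal > midpoint else "LA"  # high/low arousal
    return v + a

print(sam_quadrant(7.2, 3.1))  # -> HVLA (pleasant but calm)
```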
