Emotion Detection Using Deep Learning Algorithm

Shital Sanjay Yadav, Anup S. Vibhute
Copyright: © 2021 | Pages: 9
DOI: 10.4018/IJCVIP.2021100103

Abstract

Automatic emotion detection is a prime task in computerized human behaviour analysis. The proposed system performs automatic emotion detection using a convolutional neural network (CNN). The proposed end-to-end CNN is named ENet. To keep the network computationally efficient, the trained weight parameters of MobileNet are used to initialize the weight parameters of ENet. On top of the last convolution layer of ENet, the authors place a global average pooling layer to make the network independent of the input image size. ENet is validated for emotion detection on two benchmark datasets: Cohn-Kanade+ (CK+) and the Japanese Female Facial Expression (JAFFE) dataset. The experimental results show that the proposed ENet outperforms existing methods for emotion detection.
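The preview does not give ENet's exact layer configuration, so the following is only a minimal sketch of the idea described in the abstract, assuming MobileNetV2 (from torchvision) as the pretrained backbone and seven emotion classes: the convolutional layers are initialized from trained MobileNet weights, and a global average pooling layer on top of the last convolution layer makes the classifier independent of the input image size.

import torch
import torch.nn as nn
from torchvision import models

class ENetSketch(nn.Module):
    # Illustrative ENet-style classifier: a MobileNet-initialized convolutional
    # backbone, global average pooling, and a small classification head.
    # Layer sizes and the choice of MobileNetV2 are assumptions, not the
    # authors' exact configuration.
    def __init__(self, num_emotions: int = 7):
        super().__init__()
        # Initialize the convolutional weights from a pretrained MobileNetV2,
        # mirroring the paper's use of trained MobileNet parameters.
        backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
        self.features = backbone.features            # convolutional layers only
        # Global average pooling reduces each feature map to one value per
        # channel, so the classification head works for any input resolution.
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(backbone.last_channel, num_emotions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)          # (N, C, H, W) feature maps
        x = self.gap(x).flatten(1)    # (N, C) regardless of H and W
        return self.classifier(x)     # emotion logits

model = ENetSketch(num_emotions=7).eval()
# Two different input sizes work because of global average pooling.
for size in (96, 224):
    print(size, model(torch.randn(1, 3, size, size)).shape)  # torch.Size([1, 7])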

2. Literature Survey

A general facial expression recognition (FER) system follows five stages: image capture, pre-processing, feature extraction, recognition, and post-processing. The effectiveness of such a pipeline depends largely on how the features are extracted and classified. Even with the best classification model, poor feature extraction degrades performance. Developing a suitable feature descriptor is therefore vital for a reliable FER system.

Feature extraction techniques can be broadly grouped into two types: handcrafted features and learned features (Corneanu et al., 2016). Handcrafted features are designed in advance to capture specific facial expressions, whereas learned features are encoded by convolutional neural networks (CNNs). CNN-based methods (Burkert et al., 2015; Mollahosseini et al., 2016; Barsoum et al., 2016) jointly learn the appropriate attributes and weights to classify facial expressions. Handcrafted features proposed in existing methods broadly fall into appearance-based features and geometric features. Geometric features (Pantic & Patras, 2006; Sebe et al., 2007) encode the face image with the help of geometric properties such as deformation and contour. Zhang et al. (1998) represented the face image by 34 facial points and utilized them as landmark points; these landmark points are then used to extract geometric features. Valstar et al. (2005) proposed tracking the facial points and detecting the Action Units (AUs) in the face image, so that facial expressions can be recognized from the detected AUs. Geometric features fail to capture minute characteristics such as ridges and skin-texture changes, and they depend on reliable and accurate feature detection and tracking. In addition, pre-processing is required to localize the various facial components before the facial features are extracted.
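Purely to illustrate what a geometric descriptor of this kind looks like, the sketch below builds a feature vector from the pairwise Euclidean distances between detected landmark points (for example, 34 points as in Zhang et al.). The landmark source, the normalization choice, and the feature set are assumptions for illustration, not the cited authors' exact formulations.

import numpy as np

def geometric_features(landmarks: np.ndarray) -> np.ndarray:
    # Toy geometric descriptor: all pairwise Euclidean distances between
    # facial landmark points (shape (num_points, 2)), normalized by the
    # largest distance so the vector is scale-invariant. This illustrates
    # the idea of shape-deformation features, not the exact cited methods.
    n = landmarks.shape[0]
    dists = [np.linalg.norm(landmarks[i] - landmarks[j])
             for i in range(n) for j in range(i + 1, n)]
    feats = np.asarray(dists)
    return feats / (feats.max() + 1e-8)

# Example: 34 landmark points from a detector (random stand-in here).
landmarks = np.random.rand(34, 2) * 100
print(geometric_features(landmarks).shape)   # (561,) = 34 * 33 / 2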
