Combining deep neural network with traditional classifier to recognize facial expressions

Fei, Zixiang and Yang, Erfu and Li, David and Butler, Stephen and Ijomah, Winifred and Zhou, Huiyu; (2019) Combining deep neural network with traditional classifier to recognize facial expressions. In: 2019 25th IEEE International Conference on Automation and Computing. IEEE, GBR. ISBN 9781861376664 (https://doi.org/10.23919/IConAC.2019.8895084)

Accepted Author Manuscript


Abstract

Facial expressions are important in people's daily communication. Recognising facial expressions also has many important applications in areas such as healthcare and e-learning. Existing facial expression recognition systems suffer from problems such as background interference. Furthermore, systems using traditional approaches such as the Support Vector Machine (SVM) are weak at dealing with unseen images, while systems based on deep neural networks require a GPU, long training times and large amounts of memory. To overcome the shortcomings of both pure deep neural networks and traditional facial recognition approaches, this paper presents a new facial expression recognition approach that applies image pre-processing techniques to remove unnecessary background information and combines the deep neural network ResNet50 with a traditional classifier, the multiclass Support Vector Machine, to recognise facial expressions. The proposed approach achieves better recognition accuracy than traditional approaches such as the Support Vector Machine and does not need a GPU. We have compared three proposed frameworks with a traditional SVM approach on the Karolinska Directed Emotional Faces (KDEF) database, the Japanese Female Facial Expression (JAFFE) database and the extended Cohn-Kanade dataset (CK+), respectively. The experimental results show that the features extracted from the layer 49Relu give the best performance on all three datasets.
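As a rough illustration of the hybrid pipeline the abstract describes (deep-network features fed into a multiclass SVM), the sketch below stubs out the ResNet50 feature-extraction stage with placeholder vectors so it runs without a deep-learning framework. The 2048-dimensional feature size, the synthetic labels and the linear kernel are assumptions for illustration, not the authors' exact configuration.

```python
import numpy as np
from sklearn.svm import SVC

def extract_resnet50_features(images, dim=2048, seed=0):
    """Stand-in for the ResNet50 feature-extraction stage.

    In the paper's pipeline, a pretrained ResNet50 maps each pre-processed
    face crop to a fixed-length activation vector taken from a late layer
    (the abstract reports that the layer 49Relu performed best). Here we
    emit placeholder vectors so the sketch runs without a DL framework.
    """
    rng = np.random.default_rng(seed)
    return rng.standard_normal((len(images), dim))

# Toy stand-in for pre-processed, background-removed face crops and their
# expression labels (e.g. 3 of the basic expression classes).
images = [f"face_{i}.png" for i in range(12)]
labels = np.array([0, 1, 2] * 4)

features = extract_resnet50_features(images)
# Nudge each class apart so the toy problem is actually separable.
features[np.arange(len(labels)), labels] += 5.0

# Multiclass SVM trained on the deep features.
svm = SVC(kernel="linear")
svm.fit(features, labels)
acc = svm.score(features, labels)
print(f"training accuracy on toy features: {acc:.2f}")
```

Separating feature extraction from classification like this is what lets the SVM stage train quickly on a CPU, which is the practical advantage the abstract highlights over an end-to-end deep network.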

ORCID iDs

Fei, Zixiang; Yang, Erfu (ORCID: https://orcid.org/0000-0003-1813-5950); Li, David (ORCID: https://orcid.org/0000-0002-6401-4263); Butler, Stephen (ORCID: https://orcid.org/0000-0002-2103-0773); Ijomah, Winifred; Zhou, Huiyu