Comparison of Image Contrast Adjustment Methods for Facial Expression Recognition Using Fine-Tuned AlexNet

 (*) Akhmad Sarif (Universitas Indonesia, Depok, Indonesia)
 Dadang Gunawan (Universitas Indonesia, Depok, Indonesia)

(*) Corresponding Author

Submitted: June 17, 2023; Published: July 23, 2023


Research on facial expression recognition (FER) has become a significant topic in computer vision due to its broad applications. Artificial intelligence techniques, particularly deep learning, are widely applied in FER research. Deep learning models require a training dataset, which plays a crucial role in determining model performance; however, the available FER datasets often require preprocessing before they can be used. In this study, two contrast adjustment preprocessing methods were compared: Histogram Equalization (HE) and Contrast Limited Adaptive Histogram Equalization (CLAHE). The preprocessed images were then classified into facial expression categories using a fine-tuned deep learning model, specifically AlexNet. The objective of this research is to determine which contrast adjustment method for FER dataset images better improves the performance of the deep learning model employed. The CK+ (Extended Cohn-Kanade) and KDEF (Karolinska Directed Emotional Faces) datasets were used. The results indicate that CLAHE outperforms HE on both datasets: on CK+, CLAHE achieved an average accuracy of 93.21%, versus 91.50% for HE; on KDEF, CLAHE averaged 88.35%, versus 84.70% for HE.
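The two contrast adjustment methods compared in the study can be sketched in a few lines of NumPy. This is an illustrative implementation, not the one used in the paper; in particular, the CLAHE variant is simplified — it applies each tile's clipped-histogram mapping directly, omitting the bilinear interpolation between neighbouring tiles that full CLAHE (e.g. OpenCV's `cv2.createCLAHE`) performs.

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization (HE) for an 8-bit grayscale image:
    one transfer function computed from the histogram of the whole image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255).astype(np.uint8)
    return lut[img]

def clahe_simplified(img, clip_limit=40.0, tile=(8, 8)):
    """Simplified CLAHE: each tile gets its own equalization, with the
    histogram clipped at clip_limit (clipped excess redistributed uniformly)
    to limit noise amplification. Full CLAHE additionally interpolates
    between neighbouring tile mappings, which this sketch omits."""
    h, w = img.shape
    th, tw = h // tile[0], w // tile[1]
    out = np.empty_like(img)
    for i in range(tile[0]):
        for j in range(tile[1]):
            block = img[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            hist = np.bincount(block.ravel(), minlength=256).astype(float)
            excess = np.maximum(hist - clip_limit, 0.0).sum()
            hist = np.minimum(hist, clip_limit) + excess / 256.0
            cdf = hist.cumsum()
            lut = np.round(cdf / cdf[-1] * 255).astype(np.uint8)
            out[i * th:(i + 1) * th, j * tw:(j + 1) * tw] = lut[block]
    return out

# Demo on a synthetic gradient image (a real pipeline would load CK+/KDEF faces).
img = (np.arange(224 * 224) % 200).astype(np.uint8).reshape(224, 224)
he = hist_equalize(img)
cl = clahe_simplified(img)
```

The `clip_limit` and `tile` values mirror common defaults; in practice they would be tuned per dataset, since the clip limit controls how strongly local contrast (and noise) is amplified.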


AlexNet; CK+; CLAHE; Computer Vision; Facial Expressions Recognition; Fine-tuning; Histogram Equalization; KDEF






Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

STMIK Budi Darma
Secretariat: Sisingamangaraja No. 338, Tel. 061-7875998
