Comparison of Loss Metrics in the ResNet50 and Xception Deep Learning Models for Object Detection

Authors

  • Herimanto Herimanto, Institut Teknologi Del, Laguboti

DOI:

https://doi.org/10.30865/mib.v7i4.6849

Keywords:

Deep Learning, ResNet50, Xception, Flask, Object Detection

Abstract

The application of deep learning has expanded well beyond education and computer science into domains such as geospatial analysis, remote sensing, and medicine, reshaping how problems in these fields are understood and solved. In this context, deep learning is widely used for object detection and classification. Despite this progress, object detection remains an open problem: variations in lighting conditions, viewing angles, and object appearance make high-accuracy detection difficult, so further research is needed to understand and compare how different deep learning models perform under these conditions. This study compares two deep learning models, ResNet50 and Xception, in terms of their loss metrics when detecting a single object class, a chair. Each model receives input images of chairs and predicts whether a chair is empty or occupied. The results show that the ResNet50 model achieves a lower total loss of 0.19422098, while the Xception model reaches a total loss of 1.1822930; a lower loss value indicates better model performance. Based on this comparison, the author developed a web application simulator with Flask that uses the lower-loss model, ResNet50.
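The paper itself does not include code, but a minimal sketch of the comparison it describes could look like the following, assuming a Keras/TensorFlow setup with ImageNet-pretrained ResNet50 and Xception backbones and a binary "empty vs. occupied" label. The dataset directory, image size, and training schedule are illustrative assumptions, and the paper's actual pipeline (which reports a detection-style "total loss") may differ.

```python
# Illustrative sketch (not the paper's code): compare the validation loss of
# ResNet50 vs. Xception on a binary "chair empty vs. occupied" task.
# DATA_DIR, IMG_SIZE, and the training settings are assumptions; per-backbone
# preprocess_input is omitted for brevity.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50, Xception

IMG_SIZE = (224, 224)      # assumed input resolution
DATA_DIR = "data/chairs"   # hypothetical folder with empty/ and occupied/ subfolders

def build_classifier(backbone_cls):
    # Frozen ImageNet backbone with a small binary classification head.
    backbone = backbone_cls(include_top=False, weights="imagenet",
                            input_shape=IMG_SIZE + (3,), pooling="avg")
    backbone.trainable = False
    model = models.Sequential([
        layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
        backbone,
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=16, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=16, label_mode="binary")

for name, backbone_cls in [("ResNet50", ResNet50), ("Xception", Xception)]:
    model = build_classifier(backbone_cls)
    model.fit(train_ds, validation_data=val_ds, epochs=5, verbose=0)
    print(name, "validation loss:", model.evaluate(val_ds, verbose=0))
```

The Flask "simulator" mentioned in the abstract could, under the same assumptions, be a thin wrapper around the lower-loss model. The model file name, route name, and preprocessing below are assumptions rather than details taken from the paper.

```python
# Illustrative sketch of a Flask endpoint serving the lower-loss (ResNet50) model:
# accepts an uploaded chair image and returns an empty/occupied prediction.
import numpy as np
import tensorflow as tf
from PIL import Image
from flask import Flask, request, jsonify

IMG_SIZE = (224, 224)                                     # must match training
model = tf.keras.models.load_model("resnet50_chair.h5")   # hypothetical saved model

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Read the uploaded file, resize, and add a batch dimension;
    # rescaling is assumed to happen inside the saved model.
    img = Image.open(request.files["image"]).convert("RGB").resize(IMG_SIZE)
    x = np.expand_dims(np.asarray(img, dtype="float32"), axis=0)
    prob = float(model.predict(x, verbose=0)[0][0])
    return jsonify({"occupied": prob >= 0.5, "probability": prob})

if __name__ == "__main__":
    app.run(debug=True)
```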



Published

2023-10-25