Application of YOLO (You Only Look Once) V.4 with Preprocessing Image and Network Experiment

 Yovi Pratama (Dinamika Bangsa University, Jambi, Indonesia)
 (*) Errissya Rasywir (Dinamika Bangsa University, Jambi, Indonesia)
 Akwan Sunoto (Dinamika Bangsa University, Jambi, Indonesia)
 Irawan Irawan (Dinamika Bangsa University, Jambi, Indonesia)

(*) Corresponding Author

Submitted: November 1, 2021; Published: November 30, 2021

Abstract

In computer science, and specifically in image processing, many reliable algorithms have been developed; the YOLO (You Only Look Once) algorithm was previously introduced in version 3. In this study, we apply the YOLO V.4 algorithm experimentally together with image preprocessing techniques. Images from the Microsoft COCO dataset were preprocessed with two methods: dimension reduction using Principal Component Analysis (PCA) and quality improvement using Gaussian smoothing. After the fine-tuning process, the mAP value increased by an average of 8.99%, so that all five models reached an mAP above 80%; the highest mAP was achieved by the model using the scheme after fine-tuning. The experiments show detection results with fairly good accuracy on the dataset: irregular transformations of position, dimension, composition, and direction can still be captured as the same feature. YOLO's ability in feature engineering is thereby confirmed in this research. Although the algorithm does not give perfect results on all data, the results tend to be good. This is related to the convolutional layers in YOLO, which downsample (reduce) the image dimensions, and to the use of anchor boxes, which also improves accuracy.
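As a concrete illustration of the preprocessing step described in the abstract, the sketch below smooths a COCO image with a Gaussian filter and then performs a per-channel PCA projection and reconstruction. The paper does not include source code, so the libraries (OpenCV, scikit-learn), the file path, and the parameter values (kernel size, sigma, number of principal components) are assumptions for illustration only, not the authors' exact pipeline.

    # Hypothetical preprocessing sketch; parameter values are assumptions,
    # not the settings reported in the paper.
    import cv2
    import numpy as np
    from sklearn.decomposition import PCA

    def preprocess_image(path, n_components=50, kernel_size=(5, 5), sigma=1.0):
        """Gaussian smoothing followed by per-channel PCA reduction/reconstruction."""
        img = cv2.imread(path)                        # BGR image from the COCO dataset
        smoothed = cv2.GaussianBlur(img, kernel_size, sigma)

        channels = []
        for c in cv2.split(smoothed):                 # treat each row of a channel as a sample
            pca = PCA(n_components=min(n_components, c.shape[0], c.shape[1]))
            reduced = pca.fit_transform(c.astype(np.float32))   # dimension reduction
            restored = pca.inverse_transform(reduced)           # back-projection for the detector
            channels.append(np.clip(restored, 0, 255).astype(np.uint8))

        return cv2.merge(channels)                    # image fed to the YOLO V.4 detector

    # Example (hypothetical path):
    # preprocessed = preprocess_image("coco/val2017/sample.jpg")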

Keywords

YOLO; Image; CNN; Experiment; Preprocess





Copyright (c) 2021 Yovi Pratama, Errissya Rasywir, Akwan Sunoto, Irawan Irawan

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.


