Attendance Management System with Face Recognition and GPS Features Using YOLO on the Android Platform
DOI: https://doi.org/10.30865/mib.v4i4.2522
Keywords: GPS, YOLO, Application, Accuracy, Android
Abstract
This study presents an attendance system with a Global Positioning System (GPS) feature that automatically verifies the location of the person whose face is recognized. The You Only Look Once (YOLO) algorithm is currently one of the most widely used methods for face recognition, and YOLO toolboxes are now available for a range of programming language platforms. The proposed system also checks the position of the user with GPS technology. The test results yielded an accuracy of 0.93435, with the lowest values in the 93% range and an average accuracy of 93.26%, over the 20 assessment data points collected for the Attendance Management System with Face Recognition and GPS Features using YOLO on the Android Platform. This evaluation of student attendance accuracy is expected to support academic activities on campus. In addition, the product is expected to assist management who need evaluation results as part of an effort to improve business processes within an agency and thereby improve its performance. This research shows that a toolbox library implementing the YOLO algorithm, one of the most popular face recognition methods today, is robust and performs very well for this task.
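As a rough illustration of how accuracy figures like those above could be computed, the sketch below combines a YOLO face-match verdict with a GPS geofence check and averages the result over 20 evaluation records, mirroring the 20 assessment data points mentioned in the abstract. All names, the coordinates, the 100 m geofence radius, and the rule that a check-in counts as correct only when the face matches and the GPS fix lies inside the geofence are assumptions made for illustration, not details taken from the paper; the YOLO inference itself is treated as a given boolean.

```kotlin
import kotlin.math.*

// Hypothetical evaluation record: whether the YOLO model matched the face
// correctly, plus the GPS fix reported by the device at check-in time.
data class EvaluationRecord(
    val faceMatchedCorrectly: Boolean,
    val latitude: Double,
    val longitude: Double
)

// Haversine great-circle distance in metres between two WGS-84 coordinates.
fun distanceMetres(lat1: Double, lon1: Double, lat2: Double, lon2: Double): Double {
    val r = 6_371_000.0 // mean Earth radius in metres
    val dLat = Math.toRadians(lat2 - lat1)
    val dLon = Math.toRadians(lon2 - lon1)
    val a = sin(dLat / 2).pow(2) +
            cos(Math.toRadians(lat1)) * cos(Math.toRadians(lat2)) * sin(dLon / 2).pow(2)
    return 2 * r * asin(sqrt(a))
}

// Accuracy = correctly handled check-ins / total check-ins. A record counts as
// correct only if the face matched AND the GPS fix falls inside the assumed
// campus geofence (the radius is an illustrative value, not from the paper).
fun evaluate(
    records: List<EvaluationRecord>,
    campusLat: Double,
    campusLon: Double,
    radiusMetres: Double = 100.0
): Double {
    val correct = records.count {
        it.faceMatchedCorrectly &&
            distanceMetres(it.latitude, it.longitude, campusLat, campusLon) <= radiusMetres
    }
    return correct.toDouble() / records.size
}

fun main() {
    // 20 synthetic records standing in for the paper's 20 assessment data points.
    val records = List(20) { i ->
        EvaluationRecord(
            faceMatchedCorrectly = i != 3,   // one simulated miss -> 0.95 in this toy set
            latitude = -6.2000 + i * 1e-5,   // placeholder coordinates near the geofence centre
            longitude = 106.8000 + i * 1e-5
        )
    }
    println("Accuracy: ${evaluate(records, -6.2000, 106.8000)}")
}
```

In an Android deployment, the face-match boolean would come from running the YOLO model on-device and the coordinates from the platform's location services; those integration details are outside the scope of this sketch.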
License

This work is licensed under a Creative Commons Attribution 4.0 International License
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under Creative Commons Attribution 4.0 International License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (Refer to The Effect of Open Access).