Implementation of a Reinforcement Learning Algorithm for Energy Management in Smart Grid Systems

Authors

  • Nor Milatul Khusna Universitas Muria Kudus
  • Evanita Evanita Universitas Muria Kudus
  • Aditya Akbar Riadi Universitas Muria Kudus

DOI:

https://doi.org/10.30865/json.v6i4.8690

Keywords:

Smart Grid, Reinforcement Learning, PPO, Load Shifting, Energy Efficiency

Abstract

Energy management in smart grid systems is a key challenge for energy efficiency and sustainability, particularly at the household scale. This study proposes an implementation of the Proximal Policy Optimization (PPO) algorithm, a Deep Reinforcement Learning (DRL) method, to optimize energy-use efficiency through a load-shifting strategy. A simulation environment was built to represent household energy consumption in a real-time scenario, in which a PPO agent is trained to shift electricity loads to periods with lower tariffs or lighter system load. Three reward schemes were tested under two training modes, fast and maximal. The best result was obtained with the third reward scheme in the maximal training mode, yielding an average reward of 41,690.53 and a cost efficiency of up to 95.83% compared with the original consumption data. These findings indicate that PPO is an effective approach to energy management in household-scale smart grids, particularly for supporting adaptive, cost-saving load-shifting strategies.
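The core load-shifting objective can be illustrated with a minimal sketch: move flexible consumption out of a high-tariff window and measure the resulting cost efficiency against the original profile. The tariffs, the 24-hour load profile, and the flexible-load fraction below are hypothetical illustration values, not the paper's data or its reward schemes.

```python
# Illustrative sketch of the load-shifting idea described in the abstract.
# All numbers here (tariffs, profile, flexible fraction) are assumptions
# chosen for demonstration, not values from the study.

PEAK_HOURS = set(range(17, 22))          # assumed evening peak window (17:00-21:59)
PEAK_TARIFF, OFFPEAK_TARIFF = 1.5, 0.5   # assumed prices per kWh

def cost(profile):
    """Total electricity cost of a 24-hour consumption profile (kWh per hour)."""
    return sum(kwh * (PEAK_TARIFF if h in PEAK_HOURS else OFFPEAK_TARIFF)
               for h, kwh in enumerate(profile))

def shift_flexible_load(profile, flexible_fraction=0.8):
    """Move a fraction of peak-hour load evenly into off-peak hours,
    mimicking the kind of schedule an RL agent would learn to prefer."""
    shifted = list(profile)
    moved = 0.0
    for h in PEAK_HOURS:
        moved += shifted[h] * flexible_fraction
        shifted[h] *= 1 - flexible_fraction
    offpeak = [h for h in range(24) if h not in PEAK_HOURS]
    for h in offpeak:
        shifted[h] += moved / len(offpeak)
    return shifted

baseline = [0.5] * 24
for h in PEAK_HOURS:
    baseline[h] = 2.0                    # heavier evening use in the baseline

shifted = shift_flexible_load(baseline)
efficiency = 1 - cost(shifted) / cost(baseline)
print(f"cost efficiency vs. baseline: {efficiency:.1%}")
```

In the paper's setup this scheduling decision is learned by a PPO agent from a reward signal rather than computed by a fixed rule; the sketch only shows why shifting load to cheaper hours reduces cost while total energy use stays the same.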

Author Biography

Nor Milatul Khusna, Universitas Muria Kudus

Informatics Engineering student at Universitas Muria Kudus


Published

2025-06-30

How to Cite

Khusna, N. M., Evanita, E., & Riadi, A. A. (2025). Implementasi Algoritma Reinforcement Learning dalam Pengelolaan Energi untuk Sistem Smart Grid. Jurnal Sistem Komputer Dan Informatika (JSON), 6(4), 239–247. https://doi.org/10.30865/json.v6i4.8690

Section

Articles