Performance of Levenberg-Marquardt Algorithm in Backpropagation Network Based on the Number of Neurons in Hidden Layers and Learning Rate

Hindayati Mustafidah, Suwarsito Suwarsito


One of the supervised learning paradigms in artificial neural networks (ANN) that has developed rapidly is the backpropagation model. Backpropagation is a multilayer perceptron learning algorithm that adjusts the weights connected to the neurons in the hidden layers. The performance of the algorithm is influenced by several network parameters, including the number of neurons in the input layer, the maximum epoch, the learning rate (lr), the hidden-layer configuration, and the resulting error (MSE). Tests conducted in previous studies showed that the Levenberg-Marquardt training algorithm performs better than the other backpropagation training algorithms, producing the smallest average error at a significance level of α = 5% with 10 neurons in the hidden layer. The appropriate number of neurons in the hidden layer varies with the number of neurons in the input layer. In this study, the performance of the Levenberg-Marquardt training algorithm was analyzed with 5 neurons in the input layer, n neurons in the hidden layer (n = 2, 4, 5, 7, 9), and 1 neuron in the output layer. Performance was analyzed based on the errors generated by the network. This study uses a mixed method, namely development research with quantitative and qualitative testing using ANOVA statistical tests. Based on the analysis, the Levenberg-Marquardt training algorithm produces the smallest error, 0.00014 ± 0.00018, with 9 neurons in the hidden layer and lr = 0.5.
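To illustrate the kind of network the abstract describes, the sketch below trains a 5-input, n-hidden, 1-output feedforward network with a basic Levenberg-Marquardt loop. This is a minimal illustration under stated assumptions, not the paper's implementation: the synthetic data, the tanh/linear activations, the numeric Jacobian, and the damping schedule are all choices made here for brevity (the study used MATLAB's trainlm-style training, which computes the Jacobian analytically).

```python
# Minimal Levenberg-Marquardt sketch for a 5-n-1 network (illustrative only;
# data, activations, and damping schedule are assumptions, not the paper's setup).
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 5, 9, 1  # 5 input neurons, 9 hidden, 1 output

def unpack(p):
    """Split the flat parameter vector into weights and biases."""
    i = 0
    W1 = p[i:i + N_HID * N_IN].reshape(N_HID, N_IN); i += N_HID * N_IN
    b1 = p[i:i + N_HID]; i += N_HID
    W2 = p[i:i + N_OUT * N_HID].reshape(N_OUT, N_HID); i += N_OUT * N_HID
    b2 = p[i:i + N_OUT]
    return W1, b1, W2, b2

def forward(p, X):
    W1, b1, W2, b2 = unpack(p)
    h = np.tanh(X @ W1.T + b1)   # hidden layer, tanh activation
    return h @ W2.T + b2         # linear output layer

def residuals(p, X, y):
    return (forward(p, X) - y).ravel()

def num_jacobian(p, X, y, eps=1e-6):
    """Finite-difference Jacobian of the residuals w.r.t. the parameters."""
    r0 = residuals(p, X, y)
    J = np.empty((r0.size, p.size))
    for k in range(p.size):
        q = p.copy(); q[k] += eps
        J[:, k] = (residuals(q, X, y) - r0) / eps
    return J

def train_lm(X, y, epochs=50, mu=1e-2):
    n_par = N_HID * N_IN + N_HID + N_OUT * N_HID + N_OUT
    p = rng.normal(scale=0.5, size=n_par)
    for _ in range(epochs):
        r = residuals(p, X, y)
        J = num_jacobian(p, X, y)
        # LM step: solve (J^T J + mu I) dp = -J^T r
        dp = np.linalg.solve(J.T @ J + mu * np.eye(n_par), -J.T @ r)
        if np.sum(residuals(p + dp, X, y) ** 2) < np.sum(r ** 2):
            p, mu = p + dp, max(mu / 10, 1e-8)  # step accepted: trust more
        else:
            mu *= 10                            # step rejected: damp more
    return p

# Synthetic regression data with 5 input features.
X = rng.uniform(-1, 1, size=(40, N_IN))
y = np.sin(X.sum(axis=1, keepdims=True))

p = train_lm(X, y)
mse = np.mean(residuals(p, X, y) ** 2)
```

The damping factor mu plays the role that a learning rate plays in gradient descent: large mu makes the update behave like a small gradient step, small mu makes it behave like a Gauss-Newton step, which is why LM typically converges in far fewer epochs than plain backpropagation.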

Keywords: hidden layer, backpropagation, MSE, learning rate, Levenberg-Marquardt.




DOI: 10.30595/juita.v8i1.7150



This work is licensed under a Creative Commons Attribution 4.0 International License.

ISSN: 2579-8901