In-Depth Case Study on Artificial Neural Network Weights Optimization Using Meta-Heuristic and Heuristic Algorithmic Approach

Victor Stany Rozario
Partha Sutradhar

Abstract

This paper presents meta-heuristic and heuristic algorithms for deep neural network weight optimization. Artificial Intelligence, and Deep Learning methods in particular, continue to grow in popularity, so faster optimization strategies are needed for training models on future tasks. Neural network optimization with Particle Swarm Optimization (PSO), Backpropagation (BP), Resilient Propagation (Rprop), and the Genetic Algorithm (GA) is applied to the numerical analysis of different datasets, and the methods are compared to determine which find optimal solutions most effectively by reducing training loss. The Genetic Algorithm and the bio-inspired Particle Swarm Optimization are introduced, alongside the application-specific Resilient Propagation and conventional Backpropagation algorithms. The meta-heuristic algorithms GA and PSO are higher-level, problem-independent techniques that can be applied to a diverse range of challenges, whereas heuristic algorithms have highly problem-specific characteristics that vary from task to task. Conventional Backpropagation-based optimization, the Particle Swarm Optimization methodology, and Resilient Propagation are fully presented, and how to apply these procedures to the optimization of deep artificial neural networks is thoroughly described. Numerical simulations over several datasets show that the meta-heuristic algorithms, Particle Swarm Optimization and the Genetic Algorithm, outperform conventional heuristic algorithms such as Backpropagation and Resilient Propagation.
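To illustrate the meta-heuristic approach the abstract describes, the sketch below uses Particle Swarm Optimization to tune the weights of a tiny neural network by minimizing its training loss. This is a minimal illustration only, not the paper's actual experimental setup: the 2-2-1 network, the XOR-style toy dataset, and all PSO hyperparameters (swarm size, inertia, acceleration coefficients) are assumptions chosen for brevity.

```python
import math
import random

random.seed(0)

# Toy XOR-style dataset (assumption; the paper uses several real datasets).
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, x):
    # 2-2-1 network; 9 flattened parameters:
    # w[0:4] input->hidden weights, w[4:6] hidden biases,
    # w[6:8] hidden->output weights, w[8] output bias.
    h = [sigmoid(x[0] * w[2 * j] + x[1] * w[2 * j + 1] + w[4 + j]) for j in range(2)]
    return sigmoid(h[0] * w[6] + h[1] * w[7] + w[8])

def loss(w):
    # Mean squared training loss: the fitness that PSO minimizes.
    return sum((forward(w, x) - y) ** 2 for x, y in DATA) / len(DATA)

def pso(n_particles=30, dim=9, iters=300, inertia=0.7, c1=1.5, c2=1.5):
    # Each particle is one candidate weight vector for the whole network.
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [loss(p) for p in pos]
    g_i = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g_i][:], pbest_f[g_i]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Standard velocity update: inertia + cognitive + social terms.
                vel[i][d] = (inertia * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = loss(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best_w, best_loss = pso()
print(f"best training loss: {best_loss:.4f}")
```

Note the contrast with backpropagation: PSO never computes a gradient, so it treats the loss as a black box and is problem-independent, which is exactly what makes it a meta-heuristic in the paper's terminology.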

Article Details

How to Cite
[1]
V. S. Rozario and P. Sutradhar, “In-Depth Case Study on Artificial Neural Network Weights Optimization Using Meta-Heuristic and Heuristic Algorithmic Approach”, AJSE, vol. 21, no. 2, pp. 98-109, Nov. 2022.
Section
Articles
