Comparison of Swarm-Based Metaheuristic and Gradient Descent-Based Algorithms in Artificial Neural Network Training
Date
2023
Journal Title
Journal ISSN
Volume Title
Publisher
Ediciones Univ Salamanca
Abstract
This paper compares classical gradient descent-based training algorithms with swarm-based metaheuristic algorithms for training feedforward backpropagation artificial neural networks. The batch weight and bias rule, Bayesian regularization, the cyclical weight and bias rule, and the Levenberg-Marquardt algorithm are used as the classical gradient descent-based algorithms. As swarm-based metaheuristics, the hunger games search, gray wolf optimizer, Archimedes optimization, and Aquila optimizer are adopted. The Iris data set is used for training. Mean square error, mean absolute error, and the coefficient of determination are used as statistical measures to assess the effect of the network architecture and the adopted training algorithm. The metaheuristic algorithms are shown to outperform the gradient descent-based algorithms in artificial neural network training. In addition to their lower error rates, the classification accuracies of the metaheuristic algorithms are observed to lie in the range of 94%-97%. The hunger games search algorithm stands out among the metaheuristics, as it maintains good performance in both classification ability and the other statistical measurements.
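As an illustrative aside, the sketch below is not the paper's own MATLAB setup: it trains a comparable small feedforward network on the Iris data set in two ways, with scikit-learn's gradient descent-based MLPClassifier and with a plain particle swarm search over the weight vector as a generic stand-in for the swarm-based metaheuristics (hunger games search, gray wolf, Archimedes, Aquila) compared in the paper, and then reports MSE, MAE, and R^2. The 4-10-3 network size, swarm parameters, and one-hot targets are assumptions made only for this example.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# gradient descent-based training vs. a generic swarm search on Iris.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
onehot = np.eye(3)

# --- gradient descent-based baseline (scikit-learn MLP) ---
mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
pred_grad = onehot[mlp.predict(X_te)]

# --- generic swarm search over the weights of an assumed 4-10-3 network ---
H = 10
n_w = 4 * H + H + H * 3 + 3          # total number of weights and biases

def forward(w, X):
    """Feedforward pass of a 4-H-3 network with a tanh hidden layer."""
    i = 0
    W1 = w[i:i + 4 * H].reshape(4, H); i += 4 * H
    b1 = w[i:i + H]; i += H
    W2 = w[i:i + H * 3].reshape(H, 3); i += H * 3
    b2 = w[i:i + 3]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(w):
    # Training MSE on one-hot targets serves as the objective function.
    return mean_squared_error(onehot[y_tr], forward(w, X_tr))

# Plain particle swarm optimization as an illustrative metaheuristic.
n_particles, iters = 30, 300
pos = rng.normal(0, 0.5, (n_particles, n_w))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

pred_swarm = onehot[forward(gbest, X_te).argmax(axis=1)]

# Report the same statistical measures used in the paper, plus accuracy.
for name, pred in [("gradient", pred_grad), ("swarm", pred_swarm)]:
    print(name,
          "MSE=%.4f" % mean_squared_error(onehot[y_te], pred),
          "MAE=%.4f" % mean_absolute_error(onehot[y_te], pred),
          "R2=%.4f" % r2_score(onehot[y_te], pred),
          "acc=%.2f%%" % (100 * (pred.argmax(axis=1) == y_te).mean()))
```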
Description
ORCID
Izci, Davut/0000-0001-8359-0875
Keywords
Classification, Swarm-Based Metaheuristic Algorithms, Gradient Descent-Based Algorithm, Artificial Neural Networks
Turkish CoHE Thesis Center URL
WoS Q
N/A
Scopus Q
Q4
Source
Volume
12
Issue
1