A novel approach to training feed-forward multi-layer perceptrons with recently proposed secretary bird optimization algorithm


Dilber B., ÖZDEMİR A. F.

Neural Computing and Applications, vol.38, no.5, 2026 (Scopus)

  • Publication Type: Article / Full Article
  • Volume: 38 Issue: 5
  • Publication Date: 2026
  • DOI: 10.1007/s00521-026-11874-x
  • Journal Name: Neural Computing and Applications
  • Journal Indexes: Scopus, Compendex, Index Islamicus, INSPEC, zbMATH
  • Keywords: Artificial neural networks, Metaheuristic algorithm, Multi-layer perceptron, Secretary bird optimization, Training of neural networks
  • Dokuz Eylül University Affiliated: Yes

Abstract

A multilayer perceptron (MLP) is a type of artificial neural network (ANN) that incorporates hidden layers and is extensively utilized in artificial intelligence applications. MLPs are frequently employed for both regression and classification tasks, with backpropagation being the most commonly used training method. However, the classical gradient-based backpropagation approach has several drawbacks, including susceptibility to local minima, sensitivity to initialization, and slow convergence rates. To overcome these challenges, researchers have increasingly explored metaheuristic optimization techniques as alternative training approaches. One such method is the Secretary Bird Optimization Algorithm (SBOA), a population-based metaheuristic inspired by the unique hunting and survival strategies of secretary birds. This study introduces SBOA as a novel approach for training MLPs, focusing on optimizing the weight and bias values, the key factors influencing model performance. The effectiveness of SBOA was evaluated on 13 datasets covering five regression problems, three function approximation tasks, and five classification problems. Comparative analyses against leading optimization techniques, including particle swarm optimization (PSO), artificial bee colony (ABC), grey wolf optimization (GWO), whale optimization algorithm (WOA), moth flame optimization (MFO), salp swarm algorithm (SSA), gorilla troops optimization (GTO), and zebra optimization algorithm (ZOA), indicate that SBOA achieves superior performance. Furthermore, the results demonstrate the effectiveness of the proposed approach in handling diverse datasets for real-world regression and classification challenges.
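The core idea described in the abstract — replacing gradient-based backpropagation with a population-based metaheuristic that searches directly over an MLP's weight and bias vector — can be sketched as follows. This is a minimal illustration, not the paper's method: the optimizer below is a generic best-guided population search standing in for SBOA (whose actual hunting/escape phases are not detailed in the abstract), and the network size, population size, and toy XOR task are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HIDDEN = 2, 4  # illustrative 2-4-1 architecture

def mlp_forward(params, X):
    # Unflatten the candidate vector into the MLP's weights and biases.
    i = 0
    W1 = params[i:i + N_IN * N_HIDDEN].reshape(N_IN, N_HIDDEN); i += N_IN * N_HIDDEN
    b1 = params[i:i + N_HIDDEN]; i += N_HIDDEN
    W2 = params[i:i + N_HIDDEN].reshape(N_HIDDEN, 1); i += N_HIDDEN
    b2 = params[i:i + 1]
    h = np.tanh(X @ W1 + b1)                      # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

def mse(params, X, y):
    # Fitness function: mean squared error of the network the vector encodes.
    return float(np.mean((mlp_forward(params, X).ravel() - y) ** 2))

# Toy XOR classification task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

dim = N_IN * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1   # 17 trainable parameters
pop = rng.uniform(-1.0, 1.0, size=(30, dim))       # candidate weight vectors
fitness = np.array([mse(p, X, y) for p in pop])

for _ in range(300):
    best = pop[fitness.argmin()]
    for k in range(len(pop)):
        # Pull each candidate toward the current best plus Gaussian noise;
        # a stand-in for SBOA's exploration/exploitation phases.
        trial = (pop[k]
                 + rng.uniform(0.0, 1.0) * (best - pop[k])
                 + 0.3 * rng.standard_normal(dim))
        f = mse(trial, X, y)
        if f < fitness[k]:                        # greedy replacement
            pop[k], fitness[k] = trial, f

best = pop[fitness.argmin()]
preds = (mlp_forward(best, X).ravel() > 0.5).astype(int)
```

Because the fitness function treats the whole network as a black box, no gradients are needed, which is what lets a metaheuristic sidestep the local-minima and initialization issues of backpropagation noted above.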