Abstract
Function optimization is a central problem in mathematics and computer science, with extensive real-world applications. Genetic algorithms provide a global, parallel, and efficient search method for function optimization and related search problems. Despite their widespread use across diverse fields, the intrinsically stochastic nature of their search makes them prone to slow convergence and to premature convergence to local optima, especially on complex problems. To address these challenges, this paper introduces an adaptive genetic algorithm that incorporates ideas from particle swarm optimization: the attraction of particles toward the best particle and the classification of the population are built into the genetic algorithm's mutation operator. The proposed approach compares each individual's fitness with that of the elite individuals and selects a mutation strategy according to the ratio of these fitness values. During mutation, an individual is moved by a certain proportion toward a randomly selected elite individual, so that the search explores potentially better solutions in the neighborhood of the current elites. Multiple sets of simulation experiments and comparisons show that the improved genetic algorithm achieves better search capability than the traditional genetic algorithm.
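To make the elite-guided mutation described above concrete, the following Python sketch shows one possible form of the operator. It is an illustration under stated assumptions, not the paper's exact method: the elite fraction, the fitness-ratio threshold, the attraction coefficient, the perturbation scale, and the assumption of real-coded individuals under maximization are all illustrative choices.

```python
import numpy as np

def elite_guided_mutation(population, fitness, elite_frac=0.1,
                          ratio_threshold=0.8, attraction=0.5,
                          mutation_scale=0.1, rng=None):
    """Illustrative sketch of a PSO-inspired, elite-guided mutation.

    Assumptions (not specified in the abstract): maximization,
    real-coded individuals, and the parameter values above.
    """
    rng = np.random.default_rng() if rng is None else rng
    pop = population.copy()
    n, dim = pop.shape

    # Classify the population: the top elite_frac individuals are 'elite'.
    order = np.argsort(fitness)[::-1]                  # best first
    elites = pop[order[:max(1, int(elite_frac * n))]]
    best_elite_fit = fitness[order[0]]

    for i in range(n):
        # Compare the individual's fitness with the best elite fitness.
        ratio = fitness[i] / best_elite_fit if best_elite_fit != 0 else 0.0
        target = elites[rng.integers(len(elites))]     # random elite

        if ratio >= ratio_threshold:
            # Already close to the elites: apply only a small perturbation.
            pop[i] += rng.normal(scale=mutation_scale, size=dim)
        else:
            # Move a proportion of the way toward the chosen elite
            # (PSO-style attraction), plus a small random perturbation.
            pop[i] += attraction * (target - pop[i]) \
                      + rng.normal(scale=mutation_scale, size=dim)
    return pop
```

In this sketch the fitness ratio decides between a purely local perturbation and an elite-attracted move, mirroring the strategy selection described in the abstract; the specific threshold and coefficients would need to be tuned for a given problem.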