Wuhan Univ. J. Nat. Sci., Volume 28, Number 6, December 2023
Page(s): 461-473
DOI: https://doi.org/10.1051/wujns/2023286461
Published online: 15 January 2024
Computer Science
CLC number: TP301.6
Harris Hawks Algorithm Incorporating Tuna Swarm Algorithm and Differential Mutation Strategy
1 College of Optoelectronic Information and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
2 School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
† To whom correspondence should be addressed. E-mail: snowyhm@sina.com
Received: 12 June 2023
Because the basic Harris Hawks optimization (HHO) algorithm has low convergence accuracy and quickly falls into local optima, a Harris Hawks algorithm combining the tuna swarm algorithm and a differential mutation strategy (TDHHO) is proposed. An escape energy factor with nonlinear periodic energy decline balances the abilities of global exploration and local exploitation. The parabolic foraging approach of the tuna swarm algorithm is introduced to enhance the global exploration ability of the algorithm and accelerate convergence. The differential mutation strategy mutates each individual position and calculates its fitness, which is compared with the fitness of the original position; a greedy technique selects the one with the better objective-function fitness. This increases the diversity of the population and improves the chance of the algorithm jumping out of local extrema. The TDHHO algorithm is tested on benchmark functions and compared with other optimization algorithms, and the experimental results show that the convergence speed and optimization accuracy of the improved Harris Hawks algorithm are improved. Finally, the enhanced algorithm is applied to an engineering optimization problem and to wireless sensor network (WSN) coverage optimization, further verifying the feasibility of the TDHHO algorithm in practical applications.
Key words: Harris Hawks optimization / nonlinear periodic energy decline / differential mutation strategy / wireless sensor network (WSN) coverage optimization
Biography: XU Xiaohan, male, Master, research directions: intelligent optimization algorithm, signal processing, fault diagnosis, wavefront detection, etc. E-mail: iridescenthan@gmail.com
Foundation item: Supported by Key Laboratory of Space Active Opto-Electronics Technology of Chinese Academy of Sciences (2021ZDKF4), and Shanghai Science and Technology Innovation Action Plan (21S31904200, 22S31903700)
© Wuhan University 2023
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
0 Introduction
In recent years, with the development of society, people face increasingly complex problems, such as engineering optimization and large-scale computational problems. On such problems, traditional algorithms suffer from low accuracy and long computation times because of the substantial computational volume involved. To solve these complex problems better, experts and scholars have proposed a series of intelligent optimization algorithms and achieved good results.
Swarm intelligence optimization algorithms are metaheuristic algorithms that have been widely used in image processing [1], control optimization [2,3], electric power [4,5], and other fields. Mainstream swarm intelligence optimization algorithms include Particle Swarm Optimization (PSO) [6], Grey Wolf Optimizer (GWO) [7], Whale Optimization Algorithm (WOA) [8], Butterfly Optimization Algorithm (BOA) [9], Bacterial Foraging Optimization (BFA) [10], Harris Hawks Optimization (HHO) [11], Sparrow Search Algorithm (SSA) [12], etc. Among them, the Harris Hawks algorithm is a swarm intelligence optimization algorithm proposed by Heidari et al [11] in 2019, inspired by the cooperative hunting behavior of Harris hawk populations. Compared with other optimization algorithms, it offers high stability and a simple structure. However, the HHO algorithm also has disadvantages, such as quickly falling into local extrema and low convergence accuracy.
Many scholars have proposed improved strategies to overcome the shortcomings of the HHO algorithm. Guo et al [13] introduced Cauchy mutation and adaptive weights into the Harris Hawks algorithm to improve its local exploitation ability; Tang et al [14] introduced strategies such as an elite hierarchy and random walks to strengthen the algorithm's ability to jump out of local optima; Chen et al [15] enhanced the algorithm's global exploration ability by introducing reverse learning, spiral exploration, and other strategies; Liu et al [16] optimized the foraging behavior of individuals by setting a square neighborhood topology with a fixed replacement probability, which enhanced communication between populations and improved the algorithm's optimization ability and robustness; Zhang et al [17] explored the impact of different update methods for the escape energy E on the algorithm; Yin et al [18] used an infinite-collapse chaotic mapping to reduce the influence of randomness during population initialization and introduced strategies such as golden sine and lens imaging learning to improve the algorithm's search space and optimization ability; Chen et al [19] accelerated the algorithm's convergence through strategies such as random unscented σ mutation and pseudo-opposition and pseudo-reflection learning mechanisms.
These improvement strategies have raised the algorithm's overall performance, but the Harris Hawks algorithm still has considerable room for improvement. To further enhance its optimization accuracy and convergence speed, we propose a Harris Hawks algorithm that integrates the tuna swarm algorithm and a differential mutation strategy (TDHHO). First, a nonlinear, periodically decreasing escape energy E solves the problem that only the exploitation phase is executed after the middle of the run, balancing global exploration and local exploitation. Second, the parabolic foraging strategy of the tuna swarm algorithm is introduced to strengthen the global exploration ability of the original algorithm. Finally, the updated positions of individuals are mutated by the differential mutation strategy to increase population diversity and keep the algorithm from falling into local extrema. Tests on 16 benchmark functions and comparisons with other optimization algorithms demonstrate the effectiveness of the TDHHO algorithm. Finally, the algorithm is applied to a tension/compression spring design problem and a wireless sensor network (WSN) coverage optimization problem; the experimental results show that TDHHO has stronger optimization ability than the compared algorithms and practical application value.
1 Harris Hawks Optimization Algorithm
The Harris Hawks algorithm is a meta-heuristic algorithm inspired by the hunting behavior of Harris hawks. It is divided into a global exploration phase, a transition phase, and a local exploitation phase.
1.1 Global Exploration Phase
During the global exploration phase, the Harris hawks search for prey on a large scale within the hunting area, updating their positions through two equal-probability strategies with the following position update formulas:

$$X(t+1) = X_{\rm rand}(t) - r_1 \left| X_{\rm rand}(t) - 2 r_2 X(t) \right|, \quad q \geq 0.5 \tag{1}$$

$$X(t+1) = \left( X_{\rm rabbit}(t) - X_m(t) \right) - r_3 \left( lb + r_4 (ub - lb) \right), \quad q < 0.5 \tag{2}$$

where $X(t)$ denotes the position at the $t$-th iteration, $X(t+1)$ the position at the $(t+1)$-th iteration, $X_{\rm rand}(t)$ the position of an individual randomly selected from the current population, $X_{\rm rabbit}(t)$ the position of the prey, i.e., the current optimal solution, $ub$ the upper bound of the search space, $lb$ the lower bound, $X_m(t)$ the average position of the population, and $r_1$, $r_2$, $r_3$, $r_4$, $q$ random numbers between 0 and 1.
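As an illustration of Eqs. (1) and (2), the exploration update for one hawk can be sketched in Python as follows; the function name `explore` and the variable names are ours, not from the paper:

```python
import numpy as np

def explore(x, x_rand, x_rabbit, x_mean, lb, ub):
    """One exploration-phase update of a single hawk (Eqs. (1)-(2)).

    x        : current position of the hawk, shape (dim,)
    x_rand   : position of a randomly chosen hawk
    x_rabbit : position of the prey (current best solution)
    x_mean   : mean position of the population
    lb, ub   : lower/upper bounds of the search space
    """
    r1, r2, r3, r4, q = np.random.rand(5)
    if q >= 0.5:
        # Perch relative to a randomly selected member of the population.
        return x_rand - r1 * np.abs(x_rand - 2.0 * r2 * x)
    else:
        # Perch based on the prey position and the population mean.
        return (x_rabbit - x_mean) - r3 * (lb + r4 * (ub - lb))
```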
1.2 Transition Phase
In the Harris Hawks algorithm, the escape energy $E$ controls whether the hawks perform the exploration or the exploitation behavior:

$$E = 2 E_0 \left( 1 - \frac{t}{T} \right)$$

where $t$ denotes the current number of iterations, $T$ the maximum number of iterations, $E_0$ a random number between $-1$ and $1$, and $(1 - t/T)$ the linearly decreasing factor. When $|E| \geq 1$, the algorithm performs the exploration phase, searching for prey locations on a large scale. When $|E| < 1$, the algorithm performs the exploitation phase, conducting a local search.
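A minimal sketch of this schedule and the phase switch it drives; the names are illustrative:

```python
import numpy as np

def escape_energy(t, T):
    """Linearly decaying escape energy of the original HHO."""
    E0 = np.random.uniform(-1.0, 1.0)   # initial escape energy in (-1, 1)
    return 2.0 * E0 * (1.0 - t / T)     # decays linearly with iteration t

# Phase selection: exploration when |E| >= 1, exploitation otherwise.
t, T = 100, 500
E = escape_energy(t, T)
phase = "exploration" if abs(E) >= 1.0 else "exploitation"
print(E, phase)
```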
1.3 Local Exploitation Phase
After entering the exploitation phase, the Harris hawks have surrounded the prey and will attack the target. The hawks have a variety of attack strategies: soft encirclement, hard encirclement, swooping soft encirclement, and swooping hard encirclement. The choice of attack strategy is controlled by the escape energy $E$ and a random number $r$ between 0 and 1.
When $|E| \geq 0.5$ and $r \geq 0.5$, the Harris hawks use the soft encirclement strategy, according to the formula:

$$X(t+1) = \Delta X(t) - E \left| J X_{\rm rabbit}(t) - X(t) \right| \tag{6}$$

where $\Delta X(t) = X_{\rm rabbit}(t) - X(t)$ is the distance between the prey and the current individual, and $J = 2(1 - r_5)$ is the random jump strength of the prey, with $r_5$ a random number between 0 and 1.
When $|E| < 0.5$ and $r \geq 0.5$, the Harris hawks adopt the hard encirclement strategy. At this time, the prey's energy is exhausted, and it cannot escape by jumping. The position update formula is as follows:

$$X(t+1) = X_{\rm rabbit}(t) - E \left| \Delta X(t) \right| \tag{7}$$
When $|E| \geq 0.5$ and $r < 0.5$, the Harris hawks adopt the swooping soft encirclement strategy. The encircled prey still has large escape energy $E$ and can jump, so the hawks attack in two stages: if the first attack fails, the second is used, and if the second attack also fails, the original position remains unchanged. The formulas are as follows:

$$Y = X_{\rm rabbit}(t) - E \left| J X_{\rm rabbit}(t) - X(t) \right| \tag{8}$$

$$Z = Y + S \times LF(D) \tag{9}$$

$$X(t+1) = \begin{cases} Y, & F(Y) < F(X(t)) \\ Z, & F(Z) < F(X(t)) \end{cases} \tag{10}$$

where $S$ denotes a $D$-dimensional random vector, $LF(\cdot)$ the Lévy flight function, and $F(\cdot)$ the fitness function.
When $|E| < 0.5$ and $r < 0.5$, the Harris hawks adopt the swooping hard encirclement strategy, in which the encircled prey has insufficient escape energy and poor jumping ability. The hawks again attack in two stages: if the first attack fails, the second is used, and if that also fails, the original position remains unchanged. The parameters in the following formulas are the same as above:

$$Y = X_{\rm rabbit}(t) - E \left| J X_{\rm rabbit}(t) - X_m(t) \right| \tag{11}$$

$$Z = Y + S \times LF(D) \tag{12}$$

$$X(t+1) = \begin{cases} Y, & F(Y) < F(X(t)) \\ Z, & F(Z) < F(X(t)) \end{cases} \tag{13}$$
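The four attack strategies of Eqs. (6)-(13) can be condensed into one dispatch routine. The Lévy step below follows the Mantegna scheme commonly used in HHO implementations; the function names are ours:

```python
import numpy as np
from math import gamma

def levy(dim, beta=1.5):
    """Levy flight step (Mantegna's algorithm), as commonly used in HHO."""
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.randn(dim) * sigma
    v = np.random.randn(dim)
    return 0.01 * u / np.abs(v) ** (1 / beta)

def exploit(x, x_rabbit, x_mean, E, r, f):
    """One exploitation-phase update (Eqs. (6)-(13)); f is the fitness function."""
    dim = x.size
    J = 2.0 * (1.0 - np.random.rand())           # random jump strength of the prey
    dx = x_rabbit - x                            # distance vector to the prey
    if r >= 0.5 and abs(E) >= 0.5:               # soft encirclement, Eq. (6)
        return dx - E * np.abs(J * x_rabbit - x)
    if r >= 0.5 and abs(E) < 0.5:                # hard encirclement, Eq. (7)
        return x_rabbit - E * np.abs(dx)
    if r < 0.5 and abs(E) >= 0.5:                # swooping soft encirclement, Eqs. (8)-(10)
        y = x_rabbit - E * np.abs(J * x_rabbit - x)
    else:                                        # swooping hard encirclement, Eqs. (11)-(13)
        y = x_rabbit - E * np.abs(J * x_rabbit - x_mean)
    z = y + np.random.rand(dim) * levy(dim)      # second attempt: dive with a Levy step
    if f(y) < f(x):
        return y                                 # first attack improved fitness
    if f(z) < f(x):
        return z                                 # second attack improved fitness
    return x                                     # both dives failed: keep the old position
```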
2 Improved Harris Hawks Algorithm
2.1 The Nonlinear Periodic Energy-Decreasing Escape Energy E
The exploration and exploitation phases of the Harris Hawks algorithm are controlled by the escape energy $E$, which decreases linearly in the HHO algorithm, as shown in Fig. 1 (the maximum number of iterations is set to 500).
Fig. 1 Escape energy curve of the HHO algorithm
When the escape energy $|E| \geq 1$, the exploration phase is executed, and the algorithm has strong global search capability. When $|E| < 1$, the exploitation phase is performed, and the algorithm has strong local search capability. In the original HHO algorithm, once the number of iterations exceeds $T/2$, the escape energy $|E|$ is always less than 1, so the algorithm only performs the exploitation phase, easily falls into local optimal solutions, and is prone to premature convergence. To solve the problem that only the exploitation phase is executed in the middle and later stages, an escape energy decay strategy that integrates nonlinear energy decline with periodic energy fluctuations [20] is adopted; the resulting escape energy, given by Eq. (15), is plotted in Fig. 2.
Fig. 2 Escape energy curve of the TDHHO algorithm
In Eq. (15), $t$ is the current number of iterations, $T$ is the maximum number of iterations, and $k$ denotes the nonlinear parameter, which takes the value 4. The improved escape energy still decays toward 0 overall, but the decay is slow in the early and middle stages and rapid in the later stage. Moreover, under the periodic fluctuation strategy, the hunting process alternates between the exploration and exploitation phases over a long span of iterations, so the duration of the two phases is balanced: the hawks have more opportunities to conduct global search (even after the number of iterations exceeds $T/2$) and ample dwell time for each local search. These properties let the algorithm maintain high-precision local optimization from the beginning together with a stronger ability to jump out of local optima.
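The exact form of Eq. (15) is not reproduced here. The sketch below is only an illustrative stand-in consistent with the behavior described above: a nonlinear decay with exponent $k = 4$ modulated by a periodic fluctuation, so that $|E|$ can again exceed 1 after $t > T/2$. The specific formula and all names are our assumptions, not the paper's:

```python
import numpy as np

def escape_energy_tdhho(t, T, k=4):
    """Illustrative nonlinear periodic escape energy (NOT the paper's exact Eq. (15)).

    A nonlinear envelope that decays slowly early and quickly late,
    multiplied by a periodic fluctuation so that exploration (|E| >= 1)
    can reappear even after the midpoint of the run.
    """
    E0 = np.random.uniform(-1.0, 1.0)            # random initial escape energy
    envelope = 2.0 * (1.0 - (t / T) ** k)        # assumed nonlinear decay, k = 4
    fluctuation = np.cos(2.0 * np.pi * t / T)    # assumed periodic component
    return E0 * envelope * np.abs(fluctuation)
```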
2.2 Introduction of a Parabolic Shape-Based Update Mechanism
The tuna swarm optimization (TSO) algorithm [21] is a meta-heuristic algorithm that mimics the foraging strategies of tuna schools. TSO has two foraging methods, spiral foraging and parabolic foraging, both of which provide robust global search capability and give the algorithm strong optimization ability and good convergence. In the parabolic foraging method, tuna explore the surrounding space for food along a parabolic route, with the mathematical expression:

$$X_i(t+1) = \begin{cases} X_{\rm best}(t) + {\rm rand} \cdot (X_{\rm best}(t) - X_i(t)) + TF \cdot p^2 \cdot (X_{\rm best}(t) - X_i(t)), & {\rm rand} < 0.5 \\ TF \cdot p^2 \cdot X_i(t), & {\rm rand} \geq 0.5 \end{cases} \tag{19}$$

$$p = \left( 1 - \frac{t}{T} \right)^{t/T}$$

where $TF$ is a random number taking the value 1 or $-1$, $t$ is the current number of iterations, and $T$ is the maximum number of iterations.
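A sketch of the parabolic foraging update of Eq. (19), following the TSO formulation of Ref. [21]; the function and variable names are ours:

```python
import numpy as np

def parabolic_forage(x, x_best, t, t_max):
    """Parabolic foraging update of tuna swarm optimization (Eq. (19))."""
    TF = np.random.choice([-1.0, 1.0])           # random sign, 1 or -1
    p = (1.0 - t / t_max) ** (t / t_max)         # parabolic shrink factor
    if np.random.rand() < 0.5:
        # Forage toward the best individual along a parabolic route.
        return x_best + np.random.rand() * (x_best - x) + TF * p ** 2 * (x_best - x)
    else:
        # Explore the surrounding space relative to the current position.
        return TF * p ** 2 * x
```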
In the classical Harris Hawks algorithm, the global exploration phase of Eq. (1) updates a hawk's position through communication with a randomly selected individual rather than with the globally optimal individual. Consequently, the new position is overly random and lacks clear guidance, which slows convergence, degrades the algorithm's overall convergence behavior, and limits the accuracy of the search for the optimum. To address this, a parabolic shape-based update mechanism is introduced in the exploration phase of HHO, replacing Eq. (1) with Eq. (20): the global optimal solution guides the search, which strengthens the convergence speed of the algorithm. The position update of Eq. (20) applies the parabolic foraging form of Eq. (19) with the prey position $X_{\rm rabbit}(t)$ as the guiding individual (with the same parameters as above).
To ensure convergence, both the original and the improved algorithm concentrate on local exploitation in the later iterations, which increases the risk of falling into local optima and reduces optimization accuracy. We therefore embed Eq. (19) into the exploitation phase. After a position is updated by Eq. (6) or Eq. (7), its fitness is calculated and compared with the fitness before the update; if it improves, the position updated by Eq. (6) or Eq. (7) is used. Otherwise, the algorithm may be trapped in a local extremum, so Eq. (19) is used to generate a new candidate, whose fitness is compared with that of the original position; if it improves, the new position is used, and otherwise the original position is kept. This strategy increases the chance of the algorithm jumping out of local extrema and strengthens its optimization ability, replacing Eq. (6) with Eq. (24) and Eq. (7) with Eq. (26); the new update rules combine the original encirclement step with the parabolic fallback of Eq. (19) under greedy selection (with the same parameters as above).
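The greedy scheme behind Eqs. (24) and (26) can be sketched as a wrapper that tries the encirclement update first and falls back to the parabolic step only when fitness does not improve; the helper names are ours:

```python
def greedy_besiege(x, hho_update, parabolic_update, f):
    """Greedy scheme of Eqs. (24)/(26): keep whichever candidate improves fitness.

    hho_update, parabolic_update : callables returning a candidate position
    f                            : fitness function (minimization)
    """
    cand = hho_update(x)                 # Eq. (6) or Eq. (7) candidate
    if f(cand) < f(x):
        return cand                      # encirclement step improved: accept it
    cand = parabolic_update(x)           # fall back to the TSO step, Eq. (19)
    if f(cand) < f(x):
        return cand                      # parabolic step improved: accept it
    return x                             # neither improved: keep the old position
```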
2.3 Incorporating the Differential Mutation Strategy
The position update formulas in the Harris Hawks algorithm rely heavily on the global optimal solution to guide individuals, which promotes rapid convergence. On simple problems the algorithm can quickly converge to the optimal value, but on complex problems this guidance makes it easy to fall into local optima, causing premature convergence and reducing optimization accuracy. To address this problem, after an individual's position has been updated by the normal execution of the algorithm, we introduce the differential mutation strategy [22], which achieves mutation by randomly selecting individuals within the population for differencing and scaling. The strategy mutates the positions of individuals in the population to produce new candidates and calculates their fitness; a greedy strategy then compares the new position with the old one and retains the one with better fitness. The formula is as follows:

$$X_{\rm new}(t+1) = X(t+1) + F \cdot \left( X_{r_1}(t) - X_{r_2}(t) \right) \tag{28}$$

where $X(t+1)$ denotes the position of the individual after the normal execution of the HHO update, $F$ the scaling factor, and $X_{r_1}(t)$, $X_{r_2}(t)$ the positions of randomly selected individuals in the population. The differential mutation strategy increases population diversity. In the early iterations, individuals differ greatly from one another, and the large differences help the algorithm jump out of local optima, avoid premature convergence, and improve optimization accuracy and global search ability. In the late iterations, the individuals tend toward consistency and the differences shrink, which preserves the algorithm's convergence performance.
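A sketch of the differential mutation plus greedy selection of Eq. (28); the scaling factor $F = 0.5$ and the rand-difference form are standard differential evolution ingredients [22] and stand in for the paper's exact settings:

```python
import numpy as np

def differential_mutation(pop, fitness, f, F=0.5):
    """Mutate each updated position with a scaled difference of two random
    individuals, then keep the better of old and mutated (greedy selection)."""
    n = pop.shape[0]
    for i in range(n):
        r1, r2 = np.random.choice([j for j in range(n) if j != i], 2, replace=False)
        trial = pop[i] + F * (pop[r1] - pop[r2])   # DE-style difference mutation
        ft = f(trial)
        if ft < fitness[i]:                        # greedy selection on fitness
            pop[i], fitness[i] = trial, ft
    return pop, fitness
```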
2.4 Algorithm Flow of TDHHO
Step 1: Initialize the basic parameters of the population, such as the population size, upper and lower boundaries, and other parameters.
Step 2: Calculate the escape energy E of the TDHHO algorithm according to Eq. (15).
Step 3: If $|E| \geq 1$, update the position according to Eq. (20) or Eq. (2).
Step 4: If $|E| \geq 0.5$ and $r \geq 0.5$, update the individual's position according to Eq. (24). If $|E| \geq 0.5$ and $r < 0.5$, update it according to Eq. (10). If $|E| < 0.5$ and $r \geq 0.5$, update it according to Eq. (26). If $|E| < 0.5$ and $r < 0.5$, update it according to Eq. (13).
Step 5: Update the position parameters of the individual using Eq. (28).
Step 6: Determine whether the maximum number of iterations is reached, and if yes, output the best position and terminate the algorithm; otherwise, return to Step 2.
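Assembling Steps 1-6, a high-level sketch of the TDHHO loop might look as follows. It reuses the illustrative helpers sketched in earlier sections (`escape_energy_tdhho`, `parabolic_forage`, `exploit`, `differential_mutation`), applies simple bound clipping, and omits the greedy Eqs. (24)/(26) fallback for brevity; it is a sketch of the flow, not the authors' implementation:

```python
import numpy as np

def tdhho(f, dim, lb, ub, n=30, T=500):
    """Skeleton of the TDHHO main loop (Steps 1-6); helpers sketched above."""
    pop = lb + np.random.rand(n, dim) * (ub - lb)          # Step 1: initialize hawks
    fit = np.apply_along_axis(f, 1, pop)
    best_i = np.argmin(fit)
    best, best_fit = pop[best_i].copy(), fit[best_i]
    for t in range(T):
        for i in range(n):
            E = escape_energy_tdhho(t, T)                  # Step 2: Eq. (15) (illustrative form)
            if abs(E) >= 1.0:                              # Step 3: exploration, Eq. (20)/(2)
                pop[i] = parabolic_forage(pop[i], best, t, T)
            else:                                          # Step 4: exploitation, Eqs. (24)/(10)/(26)/(13)
                r = np.random.rand()
                pop[i] = exploit(pop[i], best, pop.mean(axis=0), E, r, f)
            pop[i] = np.clip(pop[i], lb, ub)               # keep hawks inside the bounds
        fit = np.apply_along_axis(f, 1, pop)
        pop, fit = differential_mutation(pop, fit, f)      # Step 5: Eq. (28) + greedy selection
        if fit.min() < best_fit:                           # Step 6: track the global best
            best_i = np.argmin(fit)
            best, best_fit = pop[best_i].copy(), fit[best_i]
    return best, best_fit
```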
3 Experimental Results and Analysis
3.1 Experimental Environment and Test Functions
The experimental environment of this paper is: Windows 10 64-bit, 16 GB RAM, Intel Core i7-10750H CPU, and Matlab 2018b as the simulation software.
To verify the effectiveness of the improved Harris Hawks algorithm, 16 benchmark test functions are selected, as shown in Table 1. Among them, F1-F7 are single-peak functions that test the convergence performance and local exploitation ability of the algorithm; F8-F13 are multi-peak functions characterized by many local extrema, which easily trap an algorithm in local optima and therefore test its global exploration ability, its ability to jump out of local optima, and its ability to avoid premature convergence; F14-F16 are fixed-dimension functions that test whether the algorithm can converge to the theoretical optimum on simple problems. See Table 1 for details.
In addition, PSO, GWO, WOA, the equilibrium optimizer (EO), and the original HHO are selected for a comprehensive comparison. The population size of each algorithm is set to 30, and the number of iterations to 500. To reduce the interference caused by algorithmic randomness, each algorithm is run 30 times independently on each test function, and the mean and standard deviation of the 30 results are computed: the mean evaluates the performance of the algorithm, and the standard deviation evaluates its stability.
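This reporting protocol (30 independent runs, mean as performance, standard deviation as stability) can be expressed as a small harness; `tdhho` refers to the sketch above, and the sphere function stands in for an F1-style benchmark:

```python
import numpy as np

def benchmark(algorithm, f, dim, lb, ub, runs=30):
    """Run an optimizer `runs` times and report mean/std of the best fitness."""
    results = [algorithm(f, dim, lb, ub)[1] for _ in range(runs)]
    return np.mean(results), np.std(results)

sphere = lambda x: np.sum(x ** 2)     # F1-style unimodal benchmark function
mean, std = benchmark(tdhho, sphere, dim=30, lb=-100.0, ub=100.0)
print(f"mean = {mean:.3e}, std = {std:.3e}")
```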
Table 1 Test functions
3.2 Experimental Results and Analysis of Low-Dimensional Test Functions
From the results in Table 2, the mean value of the TDHHO algorithm reaches the theoretical optimum on functions F1-F4 with a standard deviation of 0; compared with the other algorithms, TDHHO shows strong convergence performance and converges quickly to the optimal value. On functions F5-F7, although TDHHO does not reach the theoretical optimum, its mean and standard deviation are still substantially ahead of the other algorithms, indicating higher convergence accuracy and greater stability. On the multi-peak functions F8-F11, both TDHHO and HHO reach the theoretical optimum. The multi-peak functions F12 and F13, in contrast, have many widely dispersed extrema and demand strong global exploration ability and resistance to premature convergence; on these, TDHHO achieves the best results, indicating that its global optimization ability and its ability to jump out of local optima are more robust than those of the other algorithms and that premature convergence is avoided. On the fixed-dimension functions F14-F16, only the mean value of TDHHO reaches the theoretical optimum, ahead of the other algorithms, and the standard deviations show that TDHHO is more stable than the other optimization algorithms.
In summary, the TDHHO algorithm converges to the theoretical optimum on more of these problems than the other algorithms.
Table 2 Test results of different algorithms (dim = 30)
3.3 Convergence Curve Analysis
Figure 3 shows the convergence curves of the single-peak functions F5 and F6, the multi-peak functions F12 and F13, and the fixed-dimension functions F15 and F16 (population size 30, maximum iterations 500, dimension 30). The convergence curve of the TDHHO algorithm keeps decreasing over the whole iteration cycle, and comparison with HHO and the other algorithms shows that the convergence accuracy of TDHHO is greatly improved. For the fixed-dimension functions, the convergence curve of TDHHO is steeper than those of the other algorithms in the early iterations, and the algorithm converges to the optimal value quickly, indicating stronger convergence performance and better optimization ability. In summary, the improved algorithm not only speeds up convergence but also substantially improves convergence accuracy.
Fig. 3 Partial function convergence curves
3.4 Experimental Results and Analysis of High-Dimensional Test Functions
The previous experiments show that the TDHHO algorithm performs well on low-dimensional test functions. To verify its applicability to high-dimensional test functions, the WOA, GWO, and HHO algorithms are selected for comparison in high-dimensional function tests. The experimental parameters are set as follows: population size 30 and maximum iterations 500; the 100- and 500-dimensional functions are each run 30 times independently, and the mean and standard deviation are taken to verify the performance and stability of the algorithm. As Table 3 shows, on the high-dimensional test functions F1-F4, the mean values of the TDHHO algorithm converge to the theoretical optimum in both dimensions. On F5 and F6, TDHHO is several orders of magnitude ahead of the other algorithms in each dimension. On F7, TDHHO is ahead of the WOA and GWO algorithms in both metrics but behind the HHO algorithm. On the high-dimensional multi-peak function F8, TDHHO has the same mean as HHO, ahead of the other algorithms, but the standard deviation of HHO is better, indicating that HHO is more stable than TDHHO on F8. On the high-dimensional multi-peak functions F12 and F13, both the mean and standard deviation of TDHHO are several orders of magnitude ahead of the other algorithms, indicating that TDHHO retains excellent global optimization ability in high dimensions and a strong ability to jump out of local optima. In summary, TDHHO outperforms the other algorithms on most of the high-dimensional test functions, showing that the improved strategies work well on high-dimensional optimization problems, remain applicable in high-dimensional situations, and can handle complex high-dimensional tasks in practice.
Table 3 Experimental results of high-dimensional test functions with different algorithms
3.5 Comparison of Different Improved Harris Hawks Algorithms
To further verify the effectiveness of the improvements, we select several improved Harris Hawks algorithms from the literature and compare their results on some of the test functions with those of the TDHHO algorithm. The algorithm in Ref. [23] is denoted IHHO-1, that in Ref. [24] GSHHO, that in Ref. [25] IHHO-2, and that in Ref. [26] EGHHO. The parameters of all algorithms are kept the same (test function dimension 30, population size 30, maximum iterations 500); each algorithm is run 30 times independently, and the mean and standard deviation over the 30 runs are reported.
From the data in Table 4, for the simple single-peak functions F1-F4, most of the improved algorithms reach the theoretical optimum, but for the complex single-peak functions F5 and F6, the convergence accuracy of the TDHHO algorithm is substantially better than that of the other improved algorithms. For the multi-peak functions F12 and F13, the convergence accuracy of TDHHO is several orders of magnitude better, showing that TDHHO has better global search capability and a stronger ability to jump out of local extrema than the other improved algorithms. For the fixed-dimension functions F14-F16, the convergence accuracy of TDHHO is not far ahead of the other algorithms, but its standard deviation is substantially better, indicating that TDHHO is more stable. In summary, the improvement strategy of the TDHHO algorithm retains a certain superiority over those in the other literature.
Table 4 Experimental results of different improved Harris Hawks algorithms on the test functions
4 Application Examples
4.1 Application of Improved Harris Hawks Algorithm to Engineering Problems
To verify the effectiveness of the improved Harris Hawks algorithm in engineering applications, the tension/compression spring design problem, a classical engineering problem, is selected. The improved Harris Hawks algorithm is applied to this problem, and the results are compared with other optimization algorithms to verify the effectiveness of the TDHHO algorithm.
The tension/compression spring design problem is a classical engineering optimization problem. It aims to minimize the weight of a tension/compression spring subject to many constraints, such as minimum deflection, shear stress, surge frequency, and a limit on the outer diameter. The problem has three variables: the wire diameter $d$, the mean coil diameter $D$, and the number of active coils $N$.
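Since the paper does not restate the spring model here, the sketch below follows the widely used formulation of the tension/compression spring problem (minimize $(N+2)Dd^2$ subject to four constraints), with a quadratic penalty so the problem can be handed to TDHHO as a plain objective. Treat the details as the standard textbook form rather than the authors' exact setup:

```python
import numpy as np

def spring_weight(x):
    """Tension/compression spring design: minimize (N + 2) * D * d^2."""
    d, D, N = x                                  # wire diam., coil diam., active coils
    return (N + 2.0) * D * d ** 2

def spring_penalty(x):
    """Standard constraints g1-g4 (deflection, shear stress, surge, diameter)."""
    d, D, N = x
    g = [
        1.0 - D ** 3 * N / (71785.0 * d ** 4),
        (4.0 * D ** 2 - d * D) / (12566.0 * (D ** 3 * d - d ** 4))
        + 1.0 / (5108.0 * d ** 2) - 1.0,
        1.0 - 140.45 * d / (D ** 2 * N),
        (D + d) / 1.5 - 1.0,
    ]
    return sum(max(0.0, gi) ** 2 for gi in g)    # quadratic penalty for violations

# Penalized objective suitable for TDHHO; in the usual statement of the problem
# the bounds are roughly d in [0.05, 2], D in [0.25, 1.3], N in [2, 15].
spring_objective = lambda x: spring_weight(x) + 1e6 * spring_penalty(x)
```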
The TDHHO algorithm, together with GWO, WOA, the Sparrow Search Algorithm (SSA), and HHO, is used to solve the tension/compression spring design problem. The experimental results in Table 5 show that TDHHO optimizes the design better than the other algorithms, demonstrating the feasibility of the TDHHO algorithm in practical application problems.
Table 5 Experimental results of different algorithms for the design of tension/compression springs
4.2 Application of TDHHO Algorithm for Coverage Optimization in Wireless Sensor Networks
To further verify the effectiveness of the TDHHO algorithm in practical engineering applications, it is applied to wireless sensor network coverage optimization. With the continuous development of Internet of Things (IoT) technology, wireless sensor networks play a crucial role in connecting the physical world to the Internet and are now widely used in smart homes, communications, smart agriculture, and other fields. To improve the user experience and reduce cost, the WSN coverage optimization problem has attracted increasing attention, and in recent years many scholars have applied intelligent optimization algorithms to sensor coverage optimization, hoping to cover a larger area with fewer sensor nodes.
The WSN coverage model takes a two-dimensional plane of length $L$ and width $W$ as the target area, in which $N$ wireless sensor nodes are deployed randomly. The set of nodes is $S = \{s_1, s_2, \ldots, s_N\}$, and the sensing radius and communication radius of each node are $r_s$ and $r_c$, respectively. Under the Boolean perception model, a target point is sensed by a node if their distance does not exceed the sensing radius:

$$p(s_i, p_j) = \begin{cases} 1, & d(s_i, p_j) \leq r_s \\ 0, & \text{otherwise} \end{cases}$$

$$p(S, p_j) = 1 - \prod_{i=1}^{N} \left( 1 - p(s_i, p_j) \right)$$

$$R_{\rm cov} = \frac{\sum_j p(S, p_j)}{m \times n} \tag{36}$$

where $d(s_i, p_j)$ denotes the distance between node $s_i$ and the target point $p_j$, $(x_j, y_j)$ denotes the position of the target point, $p(s_i, p_j)$ denotes the probability that node $s_i$ covers the target point, $p(S, p_j)$ denotes the joint perception probability, and $R_{\rm cov}$ denotes the total coverage rate over the $m \times n$ discretized target points.
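Under the Boolean disk perception model given above, Eq. (36) can be evaluated by discretizing the field into grid points and counting those sensed by at least one node; the grid resolution and function names below are our choices:

```python
import numpy as np

def coverage_rate(nodes, L, W, r_s, grid=50):
    """Total coverage rate: fraction of grid points sensed by at least one node.

    nodes : (N, 2) array of sensor positions
    r_s   : sensing radius of every node
    """
    xs = np.linspace(0.0, L, grid)
    ys = np.linspace(0.0, W, grid)
    pts = np.array([(x, y) for x in xs for y in ys])        # discretized target points
    # d[i, j]: distance from target point i to node j.
    d = np.linalg.norm(pts[:, None, :] - nodes[None, :, :], axis=2)
    covered = (d <= r_s).any(axis=1)      # Boolean joint perception: 1 - prod(1 - p)
    return covered.mean()                 # Eq. (36): ratio of covered grid points

# Example: 20 random nodes with sensing radius 2.5 in a 20 x 20 field.
nodes = np.random.rand(20, 2) * 20.0
print(coverage_rate(nodes, L=20.0, W=20.0, r_s=2.5))
```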
Equation (36) is taken as the objective function, and TDHHO, HHO, SSA, and WOA are used to solve the problem. The results in Table 6 show that the TDHHO algorithm achieves the best optimization effect and the highest coverage rate. Figures 4-7 show the optimal coverage obtained by TDHHO, HHO, WOA, and SSA, respectively; compared with the other algorithms, the coverage produced by TDHHO is more uniform and spans a wider area.
Fig. 4 The optimal coverage effect of TDHHO
Fig. 5 The optimal coverage effect of HHO
Fig. 6 The optimal coverage effect of WOA
Fig. 7 The optimal coverage effect of SSA
Table 6 Optimization coverage of different algorithms
5 Conclusion
To address the low convergence accuracy of the Harris Hawks algorithm and its tendency to fall into local optima, we propose a Harris Hawks algorithm that integrates the tuna swarm algorithm and a differential mutation strategy. First, the escape energy E of the original algorithm is improved so that the algorithm still has opportunities to execute the global exploration strategy after the middle of the run, balancing global and local search ability. Second, the parabolic foraging strategy of the tuna swarm algorithm is introduced to improve the convergence performance and optimization accuracy of the algorithm. Finally, the updated positions of individuals are mutated by the differential mutation strategy; the fitness of each mutated position is calculated and compared with that of the position before mutation, and the position with the better fitness is kept, which increases the population's diversity and improves the algorithm's ability to jump out of local optima and avoid premature convergence. Sixteen classical low-dimensional test functions are used to test the improved Harris Hawks algorithm against five algorithms including HHO; the experimental results show that the improved algorithm converges faster, attains higher optimization accuracy, escapes local optima more reliably, and is more stable. Thirteen high-dimensional test functions in 100 and 500 dimensions are also tested, and apart from individual functions, the improved algorithm still outperforms the other algorithms on the rest, indicating that it remains applicable to high-dimensional problems. To further verify the effect of the improvements, several improved Harris Hawks algorithms are selected for comparison, and the results show that the improvement strategy of the TDHHO algorithm has a certain superiority. Future work will continue to improve the algorithm's performance and apply it to more practical optimization problems.
References
- Rodríguez-Esparza E, Zanella-Calzada L A, Oliva D, et al. An efficient Harris hawks-inspired image segmentation method[J]. Expert Systems with Applications, 2020, 155: 113428.
- Jia L, Zhao X Q. An improved particle swarm optimization (PSO) optimized integral separation PID and its application on central position control system[J]. IEEE Sensors Journal, 2019, 19(16): 7064-7071.
- Zeng G H, Fu X W, Liu J, et al. PMSM vector control optimization based on fractional PIλ of rotational speed outer loop of dragonfly algorithm[J]. Wuhan University Journal of Natural Sciences, 2021, 26(5): 429-436.
- Zhang H R, Yang Y, Zhang Y, et al. A combined model based on SSA, neural networks, and LSSVM for short-term electric load and price forecasting[J]. Neural Computing and Applications, 2021, 33(2): 773-788.
- Liao H F, Zeng G H, Huang B, et al. Optimal control virtual inertia of optical storage microgrid based on improved sailfish algorithm[J]. Wuhan University Journal of Natural Sciences, 2022, 27(3): 218-230.
- Kennedy J, Eberhart R. Particle swarm optimization[C]// Proceedings of the 1995 International Conference on Neural Networks. Piscataway: IEEE, 1995: 1942-1948.
- Mirjalili S, Mirjalili S M, Lewis A. Grey wolf optimizer[J]. Advances in Engineering Software, 2014, 69: 46-61.
- Mirjalili S, Lewis A. The whale optimization algorithm[J]. Advances in Engineering Software, 2016, 95: 51-67.
- Arora S, Singh S. Butterfly optimization algorithm: A novel approach for global optimization[J]. Soft Computing, 2019, 23(3): 715-734.
- Das S, Biswas A, Dasgupta S, et al. Bacterial foraging optimization algorithm: Theoretical foundations, analysis, and applications[C]// Foundations of Computational Intelligence Volume 3: Global Optimization. Berlin: Springer-Verlag, 2009: 23-55.
- Heidari A A, Mirjalili S, Faris H, et al. Harris Hawks optimization: Algorithm and applications[J]. Future Generation Computer Systems, 2019, 97: 849-872.
- Xue J K, Shen B. A novel swarm intelligence optimization approach: Sparrow search algorithm[J]. Systems Science & Control Engineering, 2020, 8(1): 22-34.
- Guo Y X, Liu S, Gao W X, et al. Improved Harris Hawks optimization algorithm with multiple strategies[J]. Microelectronics and Computers, 2021, 38(7): 18-24(Ch).
- Tang A D, Han T, Xu D W, et al. Chaotic elite Harris Hawks optimization algorithm[J]. Journal of Computer Applications, 2021, 41(8): 2265-2272(Ch).
- Li C Y, Li J, Chen H L, et al. Enhanced Harris Hawks optimization with multi-strategy for global optimization tasks[J]. Expert Systems with Applications, 2021, 185: 115499.
- Liu X L, Liang T Y. Harris Hawk optimization algorithm based on square neighborhood and random array[J]. Control and Decision, 2022, 37(10): 2467-2476(Ch).
- Zhang Y, Zhou X Z, Shi P C. Modified Harris Hawks optimization algorithm for global optimization problems[J]. Arabian Journal for Science and Engineering, 2020, 45(12): 10949-10974.
- Yin D X, Zhang L N, Zhang D M, et al. Harris Hawks optimization based on chaotic lens imaging learning and its application[J]. Chinese Journal of Sensors and Actuators, 2021, 34(11): 1463-1474(Ch).
- Chen Q, Li K S. Modified HHO algorithm based on random unscented σ mutation and its application[J]. Application Research of Computers, 2022(5): 1-9(Ch).
- Zhao S J, Gao L F, Yu D M, et al. Improved HHO algorithm integrating periodic energy declining and Newton local enhancement[J]. Control and Decision, 2021, 36(3): 629-636(Ch).
- Xie L, Han T, Zhou H, et al. Tuna swarm optimization: A novel swarm-based metaheuristic algorithm for global optimization[J]. Computational Intelligence and Neuroscience, 2021, 2021: 9210050.
- Storn R, Price K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces[J]. Journal of Global Optimization, 1997, 11: 341-359.
- Chen G, Zeng G H, Huang B, et al. HHO algorithm integrating mutually beneficial symbiosis and lens imaging learning[J]. Computer Engineering and Applications, 2022, 58(10): 76-86(Ch).
- Nie C F. Harris Hawk optimization algorithm combining golden sine and random walk[J]. Intelligent Computer and Applications, 2021, 11(7): 113-119+123(Ch).
- Zhang S, Wang J J, Li A L, et al. Harris Hawk optimization algorithm integrating normal clouds and dynamic perturbations[J]. Journal of Chinese Computer Systems, 2022: 1-11(Ch).
- Guo Y X, Liu S, Gao W X, et al. The HHO algorithm for elite reverse learning and golden sine optimization[J]. Computer Engineering and Applications, 2021(1): 8-12(Ch).