Harris Hawks Algorithm Incorporating Tuna Swarm Algorithm and Differential Mutation Strategy

Abstract: Because the basic Harris Hawks Optimization (HHO) algorithm has low convergence accuracy and quickly falls into local optima, a Harris Hawks algorithm combining the tuna swarm algorithm and a differential mutation strategy (TDHHO) is proposed. An escape energy factor with nonlinear periodic energy decline balances the abilities of global exploration and local exploitation. The parabolic foraging approach of the tuna swarm algorithm is introduced to enhance the global exploration ability of the algorithm and accelerate convergence. The differential mutation strategy mutates individual positions and calculates their fitness, which is compared with the fitness of the original position; a greedy technique retains the position with the better objective-function fitness, increasing population diversity and improving the chance of the algorithm escaping local extrema. The TDHHO algorithm is evaluated on benchmark test functions and compared with other optimization algorithms, and the experimental results show that the convergence speed and optimization accuracy of the improved Harris Hawks algorithm are improved. Finally, the enhanced algorithm is applied to an engineering optimization problem and to wireless sensor network (WSN) coverage optimization, further verifying the feasibility of the TDHHO algorithm in practical applications.


Introduction
In recent years, with the development of society, people face more and more complex problems, such as engineering optimization problems and complex computational problems. When traditional algorithms are used to solve them, the substantial computational volume of these problems leads to low computational accuracy and long computation times. To solve such problems better, experts and scholars have proposed a series of intelligent optimization algorithms and achieved good results.
Swarm intelligence optimization algorithms are metaheuristic algorithms that have been widely applied in image processing [1], control optimization [2,3], electric power [4,5], and other fields. The mainstream swarm intelligence optimization algorithms include Particle Swarm Optimization (PSO) [6], the Grey Wolf Optimizer (GWO) [7], the Whale Optimization Algorithm (WOA) [8], the Butterfly Optimization Algorithm (BOA) [9], Bacterial Foraging Optimization (BFA) [10], Harris Hawks Optimization (HHO) [11], and the Sparrow Search Algorithm (SSA) [12]. Among them, the Harris Hawks algorithm is a swarm intelligence optimization algorithm proposed by Heidari et al. [11] in 2019, inspired by the hunting behavior of Harris hawk populations. Compared with other optimization algorithms, it is highly stable and has a simple structure. However, the HHO algorithm also has disadvantages, such as quickly falling into local extrema and low convergence accuracy.
Many scholars have proposed improved strategies to overcome the shortcomings of the HHO algorithm. Guo et al. [13] introduced Cauchy mutation and adaptive weights into the Harris Hawks algorithm to improve its local exploitation ability; Tang et al. [14] introduced strategies such as an elite hierarchy and random wandering to improve the algorithm's ability to jump out of local optima; Chen et al. [15] enhanced the algorithm's global exploration ability by introducing reverse learning, spiral exploration, and other strategies; Liu et al. [16] optimized the foraging behavior of individuals by setting a square-domain topology with a fixed replacement probability, which enhanced communication between populations and improved the algorithm's optimum-seeking ability and robustness; Zhang et al. [17] explored the impact of different update methods for the escape energy E on the algorithm; Yin et al. [18] used infinite-collapse chaotic mapping to reduce the influence of randomness during population initialization, and also introduced strategies such as the golden sine operator and fused lens-imaging learning to improve the algorithm's search space and optimum-seeking ability; Chen et al. [19] enhanced the convergence speed of the algorithm through strategies such as random traceless σ-mutation and pseudo-opposition and pseudo-reflection learning mechanisms.
These improvement strategies have improved the algorithm's overall performance, but the Harris Hawks algorithm still has considerable room for improvement. To further enhance the optimization accuracy and convergence speed of the Harris Hawks algorithm, we propose a Harris Hawks algorithm that integrates the tuna swarm algorithm and a differential mutation strategy (TDHHO). First, a nonlinear, periodically decreasing escape energy E solves the problem of only the exploitation phase being performed after the middle of the run, balancing global exploration and local exploitation. Second, the parabola-shaped foraging strategy of the tuna swarm algorithm is introduced to strengthen the global exploration ability of the original algorithm. Then, the updated positions of individuals are mutated by the differential mutation strategy to increase population diversity and help the algorithm avoid falling into local extrema. Experiments on 16 benchmark test functions and comparisons with other optimization algorithms demonstrate the effectiveness of the TDHHO algorithm. Finally, the algorithm is applied to the tension/compression spring design problem and the wireless sensor network (WSN) coverage optimization problem, and the experimental results show that the TDHHO algorithm has stronger optimum-seeking ability than the other algorithms and has practical application value.

Harris Hawks Optimization Algorithm
The Harris Hawks algorithm is a meta-heuristic algorithm inspired by the hunting behavior of Harris hawks. It is divided into a global exploration phase, a local exploitation phase, and a conversion phase between them.

Global Exploration Phase
During the global exploration phase, the Harris hawks search for prey on a large scale within the hunting area, updating their positions through two equal-probability strategies:

$$X(t+1)=X_{rand}(t)-r_1\left|X_{rand}(t)-2r_2X(t)\right|,\quad q\ge 0.5 \qquad (1)$$

$$X(t+1)=\left(X_{rabbit}(t)-X_m(t)\right)-r_3\left(lb+r_4(ub-lb)\right),\quad q<0.5 \qquad (2)$$

where X(t) denotes the position at the t-th iteration, X(t+1) the position at the (t+1)-th iteration, X_rand(t) the position of an individual randomly selected from the current population, X_rabbit(t) the position of the prey (i.e., the current optimal solution), ub and lb the upper and lower bounds of the search space, X_m(t) the average position of the population, and q, r_1, r_2, r_3, r_4 random numbers between 0 and 1.
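The two exploration moves above can be sketched in NumPy as follows. This is a minimal illustration of Eqs. (1)-(2), not the paper's implementation; the function and parameter names are our own.

```python
import numpy as np

def hho_explore(X, X_rabbit, lb, ub, rng):
    """One exploration-phase update for each hawk (sketch of Eqs. (1)-(2)).

    X: (N, D) population, X_rabbit: (D,) best solution found so far.
    """
    N, D = X.shape
    X_mean = X.mean(axis=0)                      # X_m(t), average population position
    X_new = np.empty_like(X)
    for i in range(N):
        q, r1, r2, r3, r4 = rng.random(5)
        X_rand = X[rng.integers(N)]              # randomly selected individual
        if q >= 0.5:
            # perch based on a randomly chosen population member, Eq. (1)
            X_new[i] = X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
        else:
            # perch relative to the prey and the population mean, Eq. (2)
            X_new[i] = (X_rabbit - X_mean) - r3 * (lb + r4 * (ub - lb))
    return np.clip(X_new, lb, ub)
```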

Conversion Phase
In the Harris Hawks algorithm, the escape energy E controls whether the Harris hawks perform exploration or exploitation behavior:

$$E=E_1E_0,\qquad E_1=2\left(1-\frac{t}{T}\right)$$

where t denotes the current number of iterations, T the maximum number of iterations, E_0 a random number between -1 and 1, and E_1 a linearly decreasing parameter. When |E| ≥ 1, the algorithm performs the exploration phase, searching for prey locations on a large scale. When |E| < 1, the algorithm performs the exploitation phase, conducting a local search.
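The escape energy computation can be sketched directly from the definitions above; the helper name is our own.

```python
import numpy as np

def escape_energy(t, T, rng):
    """Escape energy of basic HHO: E = E1 * E0 with E1 = 2(1 - t/T)."""
    E0 = rng.uniform(-1.0, 1.0)   # initial random energy in [-1, 1)
    E1 = 2.0 * (1.0 - t / T)      # linearly decreasing factor
    return E0 * E1
```

Note that for t ≥ T/2 we have E1 ≤ 1, so |E| < 1 and only exploitation can occur, which is the limitation the improved algorithm addresses later.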

Local Exploitation Phase
After entering the exploitation phase, the Harris hawks have surrounded the prey and will attack the target. They have four attack strategies: soft surround, hard surround, swooping soft surround, and swooping hard surround. The choice of strategy is controlled by the escape energy E and a random number k between 0 and 1.
When |E| ≥ 0.5 and k ≥ 0.5, the Harris hawks use the soft surround strategy:

$$X(t+1)=\Delta X(t)-E\left|JX_{rabbit}(t)-X(t)\right|,\qquad \Delta X(t)=X_{rabbit}(t)-X(t) \qquad (6)$$

where J = 2(1 - r_5) denotes the random jump strength of the prey and r_5 is a random number between 0 and 1. When |E| < 0.5 and k ≥ 0.5, the Harris hawks adopt the hard surround strategy; the prey's energy is exhausted and it cannot jump:

$$X(t+1)=X_{rabbit}(t)-E\left|\Delta X(t)\right| \qquad (7)$$

When |E| ≥ 0.5 and k < 0.5, the Harris hawks adopt the swooping soft surround strategy. The surrounded prey still has large escape energy E and can jump, so the hawks attack in two stages; if the first attack fails, the second is used, and if the second attack also fails, the original position remains unchanged:

$$Y=X_{rabbit}(t)-E\left|JX_{rabbit}(t)-X(t)\right|,\qquad Z=Y+S\times LF(D)$$

$$X(t+1)=\begin{cases}Y, & f(Y)<f(X(t))\\ Z, & f(Z)<f(X(t))\end{cases} \qquad (10)$$

where S denotes a D-dimensional random vector, LF(D) the Levy flight function, and f(x) the fitness function.
When |E| < 0.5 and k < 0.5, the Harris hawks adopt the swooping hard surround strategy, in which the surrounded prey has insufficient escape energy E and poor jumping ability. The hawks again attack in two stages; if the first attack fails, the second is used, and if that also fails, the original position remains unchanged. The parameters are the same as above, except that the dive is computed relative to the average population position:

$$Y=X_{rabbit}(t)-E\left|JX_{rabbit}(t)-X_m(t)\right|,\qquad Z=Y+S\times LF(D)$$

$$X(t+1)=\begin{cases}Y, & f(Y)<f(X(t))\\ Z, & f(Z)<f(X(t))\end{cases}$$
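The four surround modes can be sketched for a single hawk as below. This is an illustrative condensation (function names and the Mantegna-style Levy step are our own choices, not taken from the paper).

```python
import numpy as np
from math import gamma, pi, sin

def levy(D, rng, beta=1.5):
    """Levy flight step LF(D) via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, D)
    v = rng.normal(0.0, 1.0, D)
    return 0.01 * u / np.abs(v) ** (1 / beta)

def hho_exploit(x, x_rabbit, x_mean, E, k, f, rng):
    """One exploitation update for a single hawk x (the four surround modes)."""
    D = x.size
    J = 2.0 * (1.0 - rng.random())          # random jump strength of the prey
    dX = x_rabbit - x
    if k >= 0.5:
        if abs(E) >= 0.5:                   # soft surround
            return dX - E * np.abs(J * x_rabbit - x)
        return x_rabbit - E * np.abs(dX)    # hard surround
    # swooping surrounds: try a dive Y, then a Levy-flight dive Z
    ref = x if abs(E) >= 0.5 else x_mean    # hard swoop dives relative to the mean
    Y = x_rabbit - E * np.abs(J * x_rabbit - ref)
    Z = Y + rng.random(D) * levy(D, rng)
    if f(Y) < f(x):
        return Y
    if f(Z) < f(x):
        return Z
    return x                                # both dives failed: keep position
```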
Improved Harris Hawks Algorithm

The Nonlinear Periodic Energy-Decreasing Escape Energy E
The exploration and development phases of the Harris Hawks algorithm are controlled by the escape energy E, which decreases linearly in the HHO algorithm, as shown in Fig. 1 (the maximum number of iterations of the algorithm is set to 500).
When the escape energy |E| > 1, the exploration phase is executed and the algorithm has a strong global search capability; when |E| < 1, the exploitation phase is performed and the algorithm has a strong local search capability. In the original HHO algorithm, once the number of iterations t ≥ T/2, the escape energy E is always less than 1, so the algorithm only performs the exploitation phase, easily falls into a local optimum, and tends to converge prematurely. To solve the problem that only the exploitation stage is executed in the middle and later iterations, an escape-energy decay strategy integrating nonlinear energy decline with periodic energy fluctuations [20] is proposed, as shown in Fig. 2, where t is the current number of iterations, T is the maximum number of iterations, E_2 denotes the nonlinear decay parameter, and the period parameter k takes the value 4. The improved energy still decays toward 0 overall, but the decay is slow in the early and middle stages and rapid in the later stages. Moreover, under the influence of the periodic fluctuation, the hunting process alternates between the exploration and exploitation phases for a long time, so the duration of the two stages is balanced: the hawks have more opportunities to conduct global search (even after the number of iterations exceeds T/2) and more dwell time for each local search. These mechanisms enable the algorithm to maintain high-precision local optimization ability from the beginning and a more robust ability to jump out of local optima.
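The paper's exact decay law follows Ref. [20] and is not fully recoverable here, so the sketch below is an illustrative stand-in consistent with the description: a nonlinear envelope that decays slowly early and rapidly late, modulated by a cosine fluctuation with period parameter k = 4. Treat the specific expression as our assumption, not the paper's formula.

```python
import numpy as np

def escape_energy_np(t, T, E0, k=4):
    """Assumed nonlinear, periodically fluctuating escape energy (illustrative)."""
    E2 = 2.0 * (1.0 - (t / T) ** 2)        # nonlinear envelope: slow early, fast late
    return E0 * E2 * np.cos(k * np.pi * t / T)   # periodic fluctuation, period k
```

Because the cosine repeatedly drives |E| above and below 1, exploration and exploitation keep alternating even when t > T/2, which is the behavior the strategy is designed to produce.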

Introduction of a Parabolic Shape-Based Update Mechanism
The tuna swarm optimization (TSO) algorithm [21] is a meta-heuristic algorithm that mimics the foraging strategies of tuna schools. TSO has two foraging methods: spiral foraging and parabolic foraging. Both have robust global search capability, giving the algorithm strong optimum-seeking ability and good convergence. In parabolic foraging, the tuna explore the surrounding space along a parabola-shaped route to search for food:

$$X_i(t+1)=\begin{cases}X_{best}(t)+rand\cdot\left(X_{best}(t)-X_i(t)\right)+TF\cdot p^2\cdot\left(X_{best}(t)-X_i(t)\right), & rand<0.5\\ TF\cdot p^2\cdot X_i(t), & rand\ge 0.5\end{cases} \qquad (19)$$

$$p=\left(1-\frac{t}{T}\right)^{t/T}$$

where TF is a random number equal to 1 or -1, t is the current number of iterations, and T is the maximum number of iterations.
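The parabolic move for one individual can be sketched as follows; the function name is our own, and the branch structure follows the standard TSO formulation given above.

```python
import numpy as np

def tso_parabolic(x, x_best, t, T, rng):
    """Parabolic foraging move of TSO for one individual (sketch of Eq. (19))."""
    TF = rng.choice([-1.0, 1.0])           # direction flag: 1 or -1
    p = (1.0 - t / T) ** (t / T)           # parabola shrinks as iterations advance
    if rng.random() < 0.5:
        # move toward the best individual along the parabola
        return x_best + rng.random() * (x_best - x) + TF * p ** 2 * (x_best - x)
    # otherwise search around the current position
    return TF * p ** 2 * x
```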
In the classical Harris Hawks algorithm, the global exploration phase described in Eq. (1) updates a hawk's position through communication with a randomly selected individual from the population. However, this method does not involve communication with the globally optimal individual, so the new position is overly random and lacks clear guidance, ultimately slowing convergence, degrading the algorithm's overall convergence, and lowering search accuracy. To address this, a parabola-based updating mechanism is introduced in the exploration phase of HHO, replacing Eq. (1) with Eq. (20): the global optimal solution guides the search, which strengthens the convergence speed of the algorithm. The position update equation is as follows (with the same parameters as above).
$$X(t+1)=X_{rabbit}(t)+r\cdot\Delta X(t)+TF\cdot p^2\cdot\Delta X(t),\quad q\ge 0.5 \qquad (20)$$

To ensure convergence, both the original algorithm and the improved algorithm concentrate on the local exploitation stage in the later iterations, which increases the risk of falling into local optima and reduces optimization accuracy. For this case we introduce Eq. (19). Suppose the position is updated by Eq. (6) or Eq. (7); its fitness is calculated and compared with the fitness of the position before the update, and if it improves, the position updated by Eq. (6) or Eq. (7) is kept. Otherwise, the algorithm may be trapped in a local extremum, so Eq. (19) is used to update the position instead; its fitness is compared with that of the original position, and the new position is kept only if it is better. This strategy increases the chance of the algorithm jumping out of local extrema and strengthens its optimum-seeking ability, replacing Eq. (6) with Eq. (24) and Eq. (7) with Eq. (26). The new position update equations follow (with the same parameters as above).
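The greedy acceptance logic described above can be sketched as a small helper. The names are our own; `parabolic_move` stands in for the Eq. (19) update.

```python
def greedy_or_fallback(x_old, x_hho, parabolic_move, f):
    """Greedy acceptance used by TDHHO: keep the HHO update (Eq. (6)/(7)) only
    if it improves fitness; otherwise try the parabolic move (Eq. (19)) and
    keep it only if it improves on the original position."""
    if f(x_hho) < f(x_old):
        return x_hho
    x_alt = parabolic_move(x_old)
    return x_alt if f(x_alt) < f(x_old) else x_old
```

Either way, the accepted position is never worse than the original, so the per-individual best fitness is monotonically non-increasing.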

Incorporating Differential Variation Strategies
The position update formulas in the Harris Hawks algorithm rely heavily on the global optimal solution to guide individuals, which is conducive to rapid convergence. On simple problems the algorithm can quickly converge to the optimal value, but on complex problems it easily falls into a local optimum under the guidance of the global best, causing premature convergence and reducing search accuracy. To address this, after an individual's position has been updated by the normal execution of the formulas, we introduce the DE/rand/2 differential mutation strategy [22] (differential mutation achieves variation by randomly selecting individuals within the population, differencing, and scaling), which perturbs individual positions to produce a new position:

$$X_{new}=X_{r1}+F\cdot\left(X_{r2}-X_{r3}\right)+F\cdot\left(X_{r4}-X_{r5}\right)$$

The fitness of the new position is calculated, and a greedy strategy compares the new position with the old one, retaining the one with the better fitness. Here X_old denotes the position of an individual after the normal execution of the HHO update, X_{r1}, …, X_{r5} denote distinct random individuals in the population, and F is the scaling factor. The differential mutation strategy increases population diversity: in the early stage the individuals differ more from each other, and the larger differences help the algorithm jump out of local optima, avoid premature convergence, and improve its optimum-seeking accuracy and global search ability.
In the late iterations, the individuals in the population tend toward consistency and the differences decrease, ensuring the algorithm's convergence performance.
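The DE/rand/2 mutation plus greedy selection can be sketched as below. The scaling factor F and the clipping to bounds are our assumptions; the donor-selection pattern is the standard DE/rand/2 scheme named in the text.

```python
import numpy as np

def de_rand_2(X, i, F, lb, ub, rng):
    """DE/rand/2 mutation for individual i: five distinct random donors."""
    N = X.shape[0]
    r = rng.choice([j for j in range(N) if j != i], size=5, replace=False)
    v = X[r[0]] + F * (X[r[1]] - X[r[2]]) + F * (X[r[3]] - X[r[4]])
    return np.clip(v, lb, ub)               # keep the mutant inside the search space

def greedy_select(x_old, x_new, f):
    """Keep whichever position has the better (lower) fitness."""
    return x_new if f(x_new) < f(x_old) else x_old
```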

Algorithm Flow of TDHHO
Step 1: Initialize the basic parameters of the population, such as the population size, upper and lower boundaries, and other parameters.
Step 2: Calculate the escape energy E of the TDHHO algorithm according to Eq. (15).
Step 5: Update the position parameters of the individual using Eq.(28).
Step 6: Determine whether the maximum number of iterations is reached, and if yes, output the best position and terminate the algorithm; otherwise, return to Step 2.
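The overall flow of Steps 1-6 can be sketched as a single loop. This is a deliberately abbreviated skeleton: only one exploration move (parabolic) and one exploitation move (soft surround) are shown, and the escape-energy law is the assumed stand-in described earlier, so it illustrates the control flow rather than reproducing the paper exactly.

```python
import numpy as np

def tdhho_sketch(f, lb, ub, D, N=30, T=200, seed=0):
    """Abbreviated TDHHO loop: initialise, alternate explore/exploit by |E|,
    then apply DE/rand/2 variation with greedy selection each iteration."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(N, D))            # Step 1: initialise population
    fit = np.array([f(x) for x in X])
    best = X[fit.argmin()].copy()
    best_fit = fit.min()
    for t in range(T):
        E0 = rng.uniform(-1.0, 1.0)                 # Step 2: escape energy (assumed law)
        E = 2.0 * E0 * (1.0 - (t / T) ** 2) * np.cos(4 * np.pi * t / T)
        for i in range(N):
            if abs(E) >= 1.0:                       # Step 3: exploration (parabolic)
                p = (1.0 - t / T) ** (t / T)
                TF = rng.choice([-1.0, 1.0])
                x_new = best + rng.random() * (best - X[i]) + TF * p ** 2 * (best - X[i])
            else:                                   # Step 4: exploitation (soft surround)
                J = 2.0 * (1.0 - rng.random())
                x_new = (best - X[i]) - E * np.abs(J * best - X[i])
            x_new = np.clip(x_new, lb, ub)
            if f(x_new) < fit[i]:                   # greedy acceptance
                X[i], fit[i] = x_new, f(x_new)
            # Step 5: DE/rand/2 variation + greedy selection
            r = rng.choice([j for j in range(N) if j != i], size=5, replace=False)
            v = np.clip(X[r[0]] + 0.5 * (X[r[1]] - X[r[2]])
                        + 0.5 * (X[r[3]] - X[r[4]]), lb, ub)
            if f(v) < fit[i]:
                X[i], fit[i] = v, f(v)
            if fit[i] < best_fit:                   # track the global best
                best, best_fit = X[i].copy(), fit[i]
    return best, best_fit                           # Step 6: output the best position
```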

Experimental Environment and Test Functions
The experimental environment of this paper is: Windows 10 64-bit, 16 GB RAM, Intel Core i7-10750H CPU, with Matlab 2018b as the simulation software.
To verify the effectiveness of the improved Harris Hawks algorithm, 16 benchmark test functions are selected, as shown in Table 1. Among them, functions F1-F7 are single-peak test functions, which test the convergence performance and local exploitation ability of the algorithm. F8-F13 are multi-peak test functions characterized by multiple local extrema that easily trap the algorithm; they test the global exploration ability, the ability to jump out of local optima, and resistance to premature convergence. Functions F14-F16 are fixed-dimensional functions that test whether the algorithm can converge to the theoretical optimum on simple problems. Please refer to Table 1 for details.
In addition, PSO, GWO, WOA, the Equilibrium Optimizer (EO), and the original HHO were selected for a comprehensive comparison. The population size of each algorithm is set to 30 and the number of iterations to 500. To reduce the interference caused by the randomness of the algorithms, each algorithm is run independently 30 times on each test function, and the mean and standard deviation of the 30 results are computed: the mean evaluates the performance of the algorithm, and the standard deviation evaluates its stability.
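The evaluation protocol (30 independent runs, report mean and standard deviation) can be sketched as a small harness; the function name and the seed scheme are our own.

```python
import numpy as np

def benchmark(algorithm, runs=30, seed0=0):
    """Run an optimizer `runs` times with different seeds and report the
    mean and standard deviation of the best fitness, as in the protocol."""
    results = np.array([algorithm(seed=seed0 + r) for r in range(runs)])
    return results.mean(), results.std()
```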

Low-Dimensional Test Functions
From the results in Table 2, the mean of the TDHHO algorithm reaches the theoretical optimum on functions F1-F4 with a standard deviation of 0; compared with the other algorithms, TDHHO has strong convergence performance and can quickly converge to the optimal value. On the remaining single-peak functions, although TDHHO does not reach the theoretical optimum, its standard deviation and mean are still substantially ahead of the other algorithms, indicating higher convergence accuracy and greater stability. On the multi-peak functions F8-F11, both the TDHHO and HHO algorithms reach the theoretical optimum. The multi-peak functions F12 and F13 have multiple, more dispersed extrema and demand strong global exploration ability and resistance to premature convergence; the results indicate that the global optimum-seeking ability of TDHHO and its ability to jump out of local optima are more robust than those of the other algorithms, and premature convergence is avoided. On the fixed-dimensional functions F14-F16, only the mean of TDHHO reaches the theoretical optimum, ahead of the other algorithms, and the standard deviations show that TDHHO is more stable than the other optimization algorithms.
In summary, the TDHHO algorithm converges to the theoretical optimum on simple problems more reliably than the other algorithms.

Convergence Curve Analysis
Figure 3 shows the convergence curves of the single-peak functions F5 and F6, the multi-peak functions F12 and F13, and the fixed-dimensional functions F15 and F16 (population size 30, maximum iterations 500, dimension 30). The convergence curve of the TDHHO algorithm keeps decreasing throughout the whole iteration process.

Experimental Results and Analysis of High-Dimensional Test Functions
From the previous experimental results, the TDHHO algorithm performs well on low-dimensional test functions. To verify its applicability to high-dimensional test functions, the WOA, GWO, and HHO algorithms are selected as comparisons for high-dimensional function tests. The experimental parameters are set as follows: population size 30, maximum iterations 500; the 100- and 500-dimensional functions are each run independently 30 times, and the mean and standard deviation are taken to verify the performance and stability of the algorithm. As seen in Table 3, on the high-dimensional test functions F1-F4, the mean of the TDHHO algorithm converges to the theoretical optimum in both dimensions. For the high-dimensional test functions F5 and F6, TDHHO is several orders of magnitude ahead of the other algorithms in each dimension. For F7, TDHHO is ahead of the WOA and GWO algorithms in both metrics but behind the HHO algorithm. For the high-dimensional multi-peak function F8, TDHHO has the same mean as HHO and is ahead of the other algorithms, but the standard deviation of HHO is better, indicating that HHO is more stable than TDHHO on F8. For the high-dimensional multi-peak functions F12 and F13, both the mean and standard deviation of TDHHO are ahead of the other algorithms by several orders of magnitude, indicating that TDHHO retains excellent global optimum-seeking ability in high dimensions and a strong ability to jump out of local optima. In summary, the TDHHO algorithm outperforms the other algorithms on most of the high-dimensional test functions, indicating that the improved strategy works well on high-dimensional optimization problems, is equally applicable in high-dimensional situations, and can handle complex high-dimensional problems in practice.

Comparison of Different Improved Harris Hawks Algorithms
To further verify the effectiveness of the improvements, several improved Harris Hawks algorithms are selected and compared with the TDHHO algorithm on some of the test functions. The algorithm in Ref. [23] is denoted IHHO-1, that in Ref. [24] GSHHO, that in Ref. [25] IHHO-2, and that in Ref. [26] EGHHO. The parameters of all algorithms are kept the same (test function dimension 30, population size 30, maximum iterations 500); each algorithm is run 30 times independently, and the mean and standard deviation of the 30 results are taken.
From the data in Table 4, for the simple single-peak functions F1-F4, most of the improved algorithms achieve the theoretical optimum, but for the complex single-peak functions F5 and F6, the convergence accuracy of the TDHHO algorithm is substantially better than that of the other improved algorithms. For the multi-peak functions F12 and F13, the convergence accuracy of TDHHO is several orders of magnitude better than that of the other improved algorithms, showing that TDHHO has better global search capability and a stronger ability to jump out of local extrema. For the fixed-dimensional functions F14-F16, the convergence accuracy of TDHHO is not far ahead of the other algorithms, but its standard deviation is substantially better, indicating that TDHHO is more stable. In summary, the improvement strategy of the TDHHO algorithm retains a certain superiority over those in the other literature.

Application of Improved Harris Hawks Algorithm to Engineering Problems
To verify the effectiveness of the improved Harris Hawks algorithm in engineering applications, the design of a tension/compression spring, a classical engineering problem, is selected; the improved algorithm is applied to this problem, and the results are compared with those of other optimization algorithms.
The design problem of tension/compression springs is a classical engineering optimization problem. The aim is to minimize the weight of a tension/compression spring subject to many constraints, such as minimum deflection, shear stress, surge frequency, and a limit on the outer diameter. The problem has three variables: wire diameter x_1, mean coil diameter x_2, and number of active coils x_3. The objective is

$$\min f(x)=(x_3+2)x_2x_1^2$$

The TDHHO algorithm, together with GWO, WOA, the Sparrow Search Algorithm (SSA), and HHO, is used to solve the tension/compression spring design problem. The experimental results are shown in Table 5: the TDHHO algorithm optimizes the spring design better than the other algorithms, which shows that the TDHHO algorithm is feasible for practical application problems.
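The objective and the four standard constraints of this benchmark can be sketched with a quadratic penalty, which is one common way to hand a constrained problem to an unconstrained optimizer; the penalty weight `rho` is our assumption, not the paper's setting.

```python
import numpy as np

def spring_weight(x):
    """Objective: weight of the tension/compression spring, (x3 + 2) x2 x1^2."""
    x1, x2, x3 = x            # wire diameter, mean coil diameter, active coils
    return (x3 + 2.0) * x2 * x1 ** 2

def spring_penalized(x, rho=1e6):
    """Objective plus quadratic penalty on the four standard constraints g_i <= 0."""
    x1, x2, x3 = x
    g = [
        1.0 - x2 ** 3 * x3 / (71785.0 * x1 ** 4),                  # deflection
        (4 * x2 ** 2 - x1 * x2) / (12566.0 * (x2 ** 3 * x1 - x1 ** 4))
            + 1.0 / (5108.0 * x1 ** 2) - 1.0,                      # shear stress
        1.0 - 140.45 * x1 / (x2 ** 2 * x3),                        # surge frequency
        (x1 + x2) / 1.5 - 1.0,                                     # outer diameter
    ]
    return spring_weight(x) + rho * sum(max(0.0, gi) ** 2 for gi in g)
```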

Application of TDHHO Algorithm for Coverage Optimization in Wireless Sensor Networks
To further verify the effectiveness of the TDHHO algorithm in practical engineering applications, it is applied to wireless sensor network coverage optimization. With the continuous development of Internet of Things (IoT) technology, wireless sensor networks play a crucial role in connecting people to the Internet and are now widely used in smart homes, communications, smart agriculture, and other fields. To optimize the user experience and reduce cost, the WSN coverage optimization problem has gained increasing attention, and in recent years many scholars have applied intelligent optimization algorithms to sensor coverage optimization, hoping to cover a larger area with fewer sensor nodes.
The WSN coverage model targets a two-dimensional plane of length M_1 and width M_2, in which several wireless sensor nodes are deployed randomly. The set of nodes is S = {S_1, S_2, S_3, …, S_n}, and the sensing radius and communication radius of each node are R_a and R_b, respectively. The distance between node S_i = (x_i, y_i) and target point T_j = (x_j, y_j) is

$$d(S_i,T_j)=\sqrt{(x_i-x_j)^2+(y_i-y_j)^2}$$

and the probability that node S_i covers T_j under the Boolean sensing model is

$$P(S_i,T_j)=\begin{cases}1, & d(S_i,T_j)\le R_a\\ 0, & \text{otherwise}\end{cases}$$

The joint perception probability of the node set and the total coverage rate are

$$P(S,T_j)=1-\prod_{i=1}^{n}\left(1-P(S_i,T_j)\right),\qquad R_{COV}=\frac{\sum_{j}P(S,T_j)}{M_1\times M_2}$$

where d(S_i, T_j) denotes the distance between the node and the target point, T_j the position of the target point, P(S_i, T_j) the probability that the node covers the target point, P(S, T_j) the joint perception probability, and R_COV the total coverage rate. Equation (36) is taken as the objective function, and TDHHO, HHO, SSA, and WOA are used to solve the problem. The results are shown in Table 6: the TDHHO algorithm achieves the best optimization effect and the highest coverage rate. Figures 4-7 show the optimal coverage of TDHHO, HHO, WOA, and SSA, respectively; compared with the other algorithms, the coverage obtained by TDHHO is more uniform and covers a wider area.
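The coverage-rate objective can be sketched by discretising the field into grid points and applying the Boolean sensing model above; the grid resolution is our assumption.

```python
import numpy as np

def coverage_rate(nodes, Ra, M1, M2, grid=50):
    """Boolean-disc coverage rate over a discretised M1 x M2 field (sketch).

    nodes: (n, 2) sensor coordinates; a grid point T_j is covered if it lies
    within sensing radius Ra of at least one node (joint perception).
    """
    xs = np.linspace(0, M1, grid)
    ys = np.linspace(0, M2, grid)
    gx, gy = np.meshgrid(xs, ys)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)          # target points T_j
    d = np.linalg.norm(pts[:, None, :] - nodes[None, :, :], axis=2)
    covered = (d <= Ra).any(axis=1)                           # P(S, T_j) in {0, 1}
    return covered.mean()                                     # R_COV
```

An optimizer such as TDHHO would then treat the flattened node coordinates as the decision vector and maximize this rate (equivalently, minimize its negative).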

Fig. 1 Escape energy curve of HHO algorithm

Step 3: If |E| ≥ 1, update the position according to Eq. (20) or Eq. (2). Step 4: If |E| ≥ 0.5 and k ≥ 0.5, update the position parameters of the individual according to Eq. (24). If |E| ≥ 0.5 and k < 0.5, the position parameters are updated according to Eq. (10). If |E| < 0.5 and k ≥ 0.5, they are updated according to Eq. (26). If |E| < 0.5 and k < 0.5, they are updated according to the swooping hard-surround equation.

Comparing the TDHHO algorithm with the HHO algorithm and the other algorithms shows that the convergence accuracy of TDHHO is greatly improved. For the fixed-dimensional functions, the convergence curve of the TDHHO algorithm is steeper than those of the other algorithms in the early iterations and converges to the optimal value quickly, which means that TDHHO has stronger convergence performance and better optimum-seeking ability than the other algorithms. In summary, the improved algorithm not only speeds up convergence but also substantially improves convergence accuracy.