Wuhan Univ. J. Nat. Sci.
Volume 28, Number 6, December 2023



Page(s): 461-473
DOI: https://doi.org/10.1051/wujns/2023286461
Published online: 15 January 2024
Computer Science
CLC number: TP301.6
Harris Hawks Algorithm Incorporating Tuna Swarm Algorithm and Differential Variance Strategy
1 College of Optoelectronic Information and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
2 School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
^{†} To whom correspondence should be addressed. Email: snowyhm@sina.com
Received: 12 June 2023
To address the low convergence accuracy of the basic Harris Hawks optimization algorithm and its tendency to fall quickly into local optima, a Harris Hawks algorithm combining the tuna swarm algorithm and a differential mutation strategy (TDHHO) is proposed. An escape energy factor with nonlinear periodic energy decline balances global exploration and local exploitation. The parabolic foraging approach of the tuna swarm algorithm is introduced to enhance the global exploration ability of the algorithm and accelerate convergence. The differential mutation strategy mutates each individual's position and computes its fitness, which is compared with the fitness of the original position; a greedy scheme keeps the position with the better objective fitness, which increases population diversity and improves the chance of the algorithm escaping local extrema. The TDHHO algorithm is evaluated on benchmark test functions and compared with other optimization algorithms; the experimental results show that both the convergence speed and the optimization accuracy of the improved Harris Hawks algorithm are improved. Finally, the enhanced Harris Hawks algorithm is applied to engineering optimization and wireless sensor network (WSN) coverage optimization problems, further verifying the feasibility of the TDHHO algorithm in practical applications.
Key words: Harris Hawks optimization / nonlinear periodic energy decrease / differential mutation strategy / wireless sensor network (WSN) coverage optimization
Biography: XU Xiaohan, male, Master, research directions: intelligent optimization algorithm, signal processing, fault diagnosis, wavefront detection, etc. Email: iridescenthan@gmail.com
Foundation item: Supported by the Key Laboratory of Space Active Opto-Electronics Technology of the Chinese Academy of Sciences (2021ZDKF4), and the Shanghai Science and Technology Innovation Action Plan (21S31904200, 22S31903700)
© Wuhan University 2023
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
0 Introduction
In recent years, with the development of society, people face increasingly complex problems, such as engineering optimization problems and large-scale computational problems. Traditional algorithms suffer from low computational accuracy and long computation time on these problems because of their substantial computational volume. To solve such complex problems better, experts and scholars have proposed a series of intelligent optimization algorithms and achieved good results.
Swarm intelligence optimization algorithms are metaheuristics that have been widely used in image processing^{[1]}, control optimization^{[2,3]}, electric power^{[4,5]}, and other fields. Mainstream swarm intelligence optimization algorithms include Particle Swarm Optimization (PSO)^{[6]}, the Grey Wolf Optimizer (GWO)^{[7]}, the Whale Optimization Algorithm (WOA)^{[8]}, the Butterfly Optimization Algorithm (BOA)^{[9]}, Bacterial Foraging Optimization (BFA)^{[10]}, Harris Hawks Optimization (HHO)^{[11]}, and the Sparrow Search Algorithm (SSA)^{[12]}. Among them, the Harris Hawks algorithm is a swarm intelligence optimization algorithm proposed by Heidari et al^{[11]} in 2019, inspired by the hunting behavior of Harris hawk populations. Compared with other optimization algorithms, it has high stability and a simple structure. However, the HHO algorithm also has disadvantages, such as falling quickly into local extrema and low convergence accuracy.
Many scholars have proposed improved strategies to overcome the shortcomings of the HHO algorithm. Guo et al^{[13]} introduced Cauchy mutation and adaptive weights into the Harris Hawks algorithm to improve its local exploitation ability; Tang et al^{[14]} introduced elite hierarchy and random-walk strategies to improve the algorithm's ability to jump out of local optima; Chen et al^{[15]} enhanced the global exploration ability of the algorithm by introducing reverse learning, spiral exploration, and other strategies; Liu et al^{[16]} optimized the foraging behavior of individuals by setting a square-domain topology with a fixed replacement probability, which enhanced communication between populations and improved the algorithm's optimization-seeking ability and robustness; Zhang et al^{[17]} explored the impact of different update methods of the escape energy E on the algorithm; Yin et al^{[18]} used an infinite-collapse chaotic mapping to reduce the influence of randomness in population initialization, and introduced golden-sine and lens-imaging learning strategies to improve the search space and optimization-seeking ability of the algorithm; Chen et al^{[19]} improved the convergence speed of the algorithm through strategies such as random unscented σ-mutation, pseudo-opposition, and pseudo-reflection learning mechanisms.
These improvement strategies have raised the algorithm's overall performance, but the Harris Hawks algorithm still has considerable room for improvement. To further enhance its optimization accuracy and convergence speed, we propose a Harris Hawks algorithm that integrates the tuna swarm algorithm and a differential mutation strategy (TDHHO). First, a nonlinear, periodically decreasing escape energy E solves the problem that only the exploitation phase is executed after the middle of the run, balancing global exploration and local exploitation; second, the parabolic foraging strategy of the tuna swarm algorithm is introduced to strengthen the global exploration ability of the original algorithm; then, a differential mutation strategy mutates the updated positions of individuals to increase population diversity and keep the algorithm from falling into local extrema. Tests on 16 benchmark functions and comparisons with other optimization algorithms demonstrate the effectiveness of the TDHHO algorithm. Finally, the algorithm is applied to a tension/compression spring design problem and a wireless sensor network (WSN) coverage optimization problem; the experimental results show that TDHHO has stronger optimization ability than the other algorithms and has practical application value.
1 Harris Hawks Optimization Algorithm
The Harris Hawks algorithm is a metaheuristic inspired by the hunting behavior of Harris hawks. It is divided into a global exploration phase, a transition phase, and a local exploitation phase.
1.1 Global Exploration Phase
During the global exploration phase, the Harris hawks search for prey on a large scale within the hunting area, updating their positions through two equal-probability strategies with the following position update formulas:
$X(t+1)={X}_{\mathrm{rand}}(t)-{r}_{1}|{X}_{\mathrm{rand}}(t)-2{r}_{2}X(t)|,\ q\ge 0.5$  (1)
$X(t+1)=({X}_{\mathrm{rabbit}}(t)-{X}_{m}(t))-{r}_{3}(\mathrm{lb}+{r}_{4}(\mathrm{ub}-\mathrm{lb})),\ q<0.5$  (2)
${X}_{m}(t)=\frac{1}{N}\sum_{i=1}^{N}{X}_{i}(t)$  (3)
where $X(t)$ denotes the position at the $t$-th iteration, $X(t+1)$ the position at the $(t+1)$-th iteration, ${X}_{\mathrm{rand}}(t)$ the position of a randomly selected individual in the current population, ${X}_{\mathrm{rabbit}}(t)$ the position of the prey, i.e., the current optimal solution, $\mathrm{ub}$ and $\mathrm{lb}$ the upper and lower bounds of the search space, ${X}_{m}(t)$ the average position of the population, and $q,{r}_{1},{r}_{2},{r}_{3},{r}_{4}$ random numbers between 0 and 1.
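As an illustration, the exploration update of Eqs. (1)-(3) can be sketched in NumPy as follows (a minimal sketch; the function name, array layout, and the final clipping to the search bounds are our own implementation choices, not taken from the paper):

```python
import numpy as np

def hho_explore(X, X_rabbit, lb, ub, rng):
    """One exploration-phase update for every hawk (Eqs. (1)-(3)).

    X        : (N, D) array of current hawk positions
    X_rabbit : (D,) current best (prey) position
    lb, ub   : scalar or (D,) search-space bounds
    """
    N, D = X.shape
    X_m = X.mean(axis=0)                      # Eq. (3): population mean position
    X_new = np.empty_like(X)
    for i in range(N):
        q, r1, r2, r3, r4 = rng.random(5)
        X_rand = X[rng.integers(N)]           # randomly selected individual
        if q >= 0.5:                          # Eq. (1): move relative to a random hawk
            X_new[i] = X_rand - r1 * np.abs(X_rand - 2 * r2 * X[i])
        else:                                 # Eq. (2): move relative to prey and mean
            X_new[i] = (X_rabbit - X_m) - r3 * (lb + r4 * (ub - lb))
    return np.clip(X_new, lb, ub)             # clipping is a common added detail
```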
1.2 Conversion Phase
In the Harris Hawks algorithm, the escape energy $E$ controls whether the hawks perform exploration or exploitation, according to the following equations:
${E}_{1}=1-\frac{t}{T}$  (4)
$E=2\times {E}_{0}\times {E}_{1}$  (5)
where $t$ denotes the current iteration number, $T$ the maximum number of iterations, ${E}_{0}$ a random number between $-1$ and 1, and ${E}_{1}$ a linear decay parameter. When $|E|\ge 1$, the algorithm performs the exploration phase and searches for prey on a large scale. When $|E|<1$, the algorithm performs the exploitation phase and conducts a local search.
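Eqs. (4)-(5) can be sketched directly (a minimal sketch; the function name is our own):

```python
import numpy as np

def escape_energy(t, T, rng):
    """Escape energy of the basic HHO (Eqs. (4)-(5))."""
    E0 = rng.uniform(-1, 1)     # random initial energy in [-1, 1]
    E1 = 1 - t / T              # Eq. (4): linear decay
    return 2 * E0 * E1          # Eq. (5)

# |E| >= 1 -> exploration, |E| < 1 -> exploitation. Once t >= T/2,
# |E| = 2|E0|(1 - t/T) < 1, so only the exploitation phase can occur,
# which is the weakness Section 2.1 addresses.
```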
1.3 Local Exploitation Phase
After entering the exploitation phase, the Harris hawks have surrounded the prey and will attack it. The hawks have four attack strategies: soft encirclement, hard encirclement, swooping soft encirclement, and swooping hard encirclement. The choice of attack strategy is controlled by the escape energy $E$ and a random number $k$ between 0 and 1.
When $|E|\ge 0.5$ and $k\ge 0.5$, the hawks use the soft encirclement strategy, updating positions according to:
$X(t+1)=({X}_{\mathrm{rabbit}}(t)-X(t))-E|J{X}_{\mathrm{rabbit}}(t)-X(t)|$  (6)
When $|E|<0.5$ and $k\ge 0.5$, the hawks adopt the hard encirclement strategy. At this point the prey's energy is exhausted and it cannot jump. The position update formula is as follows:
$X(t+1)={X}_{\mathrm{rabbit}}(t)-E|{X}_{\mathrm{rabbit}}(t)-X(t)|$  (7)
When $|E|\ge 0.5$ and $k<0.5$, the hawks adopt the swooping soft encirclement strategy. The encircled prey still has large escape energy $E$ and can jump, so the hawks attack in two stages: if the first attack fails, the second is used; if the second also fails, the original position remains unchanged. The formulas are as follows:
$Y={X}_{\mathrm{rabbit}}(t)-E|J{X}_{\mathrm{rabbit}}(t)-X(t)|$  (8)
$Z=Y+S\times \mathrm{LF}(D)$  (9)
$X(t+1)=\begin{cases}Y, & f(Y)<f(X(t))\\ Z, & f(Z)<f(X(t))\end{cases}$  (10)
where $S$ denotes a $D$-dimensional random vector, $\mathrm{LF}(D)$ the Levy flight function, $J$ the random jump strength of the prey, and $f(\cdot)$ the fitness function.
When $|E|<0.5$ and $k<0.5$, the hawks adopt the swooping hard encirclement strategy, in which the encircled prey has insufficient escape energy and poor jumping ability. The hawks again attack in two stages: if the first attack fails, the second is used; if it also fails, the original position remains unchanged. The parameters in the following formulas are the same as above.
$Y={X}_{\mathrm{rabbit}}(t)-E|J{X}_{\mathrm{rabbit}}(t)-{X}_{m}(t)|$  (11)
$Z=Y+S\times \mathrm{LF}(D)$  (12)
$X(t+1)=\begin{cases}Y, & f(Y)<f(X(t))\\ Z, & f(Z)<f(X(t))\end{cases}$  (13)
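The four exploitation branches of Eqs. (6)-(13) can be sketched for a single hawk as follows (a minimal sketch; the function name, the Levy sampler passed in as a parameter, and the jump strength drawn as $J=2(1-r)$ are our own implementation choices, the last being common in HHO implementations):

```python
import numpy as np

def hho_exploit(x, X_rabbit, X_m, E, f, S, levy, rng):
    """Exploitation-phase update of one hawk (Eqs. (6)-(13)).

    x: (D,) hawk position; X_rabbit: prey position; X_m: population mean;
    E: escape energy; f: fitness function (minimized); levy: Levy-flight sampler.
    """
    k = rng.random()
    J = 2 * (1 - rng.random())                 # prey jump strength (assumed form)
    if abs(E) >= 0.5 and k >= 0.5:             # soft encirclement, Eq. (6)
        return (X_rabbit - x) - E * np.abs(J * X_rabbit - x)
    if abs(E) < 0.5 and k >= 0.5:              # hard encirclement, Eq. (7)
        return X_rabbit - E * np.abs(X_rabbit - x)
    if abs(E) >= 0.5:                          # swooping soft, Eqs. (8)-(10)
        Y = X_rabbit - E * np.abs(J * X_rabbit - x)
    else:                                      # swooping hard, Eqs. (11)-(13)
        Y = X_rabbit - E * np.abs(J * X_rabbit - X_m)
    if f(Y) < f(x):                            # first attack
        return Y
    Z = Y + S * levy(x.size)                   # Eq. (9)/(12): second attack
    return Z if f(Z) < f(x) else x             # keep old position if both fail
```

In a full implementation `levy` would sample a true Levy-flight step; a Gaussian stand-in is often used for quick experiments.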
2 Improved Harris Hawks Algorithm
2.1 The Nonlinear Periodic Energy-Decreasing Escape Energy E
The exploration and development phases of the Harris Hawks algorithm are controlled by the escape energy $E$, which decreases linearly in the HHO algorithm, as shown in Fig. 1 (the maximum number of iterations of the algorithm is set to 500).
Fig. 1 Escape energy curve of HHO algorithm 
When the escape energy $|E|>1$, the exploration phase is executed and the algorithm has a strong global search capability; when $|E|<1$, the exploitation phase is executed and the algorithm has a strong local search capability. In the original HHO algorithm, once the iteration count $t\ge T/2$, the escape energy satisfies $|E|<1$, so the algorithm always performs the exploitation phase, which makes it easy to fall into a local optimum and converge prematurely. To solve the problem that only the exploitation phase is executed in the middle and later stages, an escape energy decay strategy that integrates nonlinear energy decline and periodic energy fluctuation^{[20]} is proposed, as shown in Fig. 2. The formulas are as follows:
Fig. 2 Escape energy curve of TDHHO algorithm 
${E}_{2}=1-\frac{t}{T}\times \frac{t}{T}$  (14)
$E=2\times {E}_{0}\times {E}_{2}\times \mathrm{cos}\left(2k\mathrm{\pi}\times \frac{t}{T}\right)$  (15)
where $t$ is the current iteration number, $T$ the maximum number of iterations, ${E}_{2}$ the nonlinear decay parameter, and $k$ takes the value 4. As Fig. 2 shows, the improved energy envelope still decays toward 0, but the decay is slow in the early and middle stages and rapid in the later stage. Moreover, under the periodic fluctuation strategy, the hunting process alternates between the exploration and exploitation phases throughout the run, so the duration of the two phases is balanced: the hawks have more opportunities to conduct global search (even when the iteration count exceeds $T/2$) and ample dwell time for each local search. These properties let the algorithm maintain high-precision local optimization ability from the beginning and a more robust ability to jump out of local optima.
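The improved escape energy of Eqs. (14)-(15) can be sketched as follows (a minimal sketch; the function name is our own). Note that the envelope $2{E}_{2}=2(1-(t/T)^2)$ stays above 1 well past $t=T/2$, so exploration remains possible in the later iterations:

```python
import numpy as np

def tdhho_energy(t, T, rng, k=4):
    """Nonlinear, periodically fluctuating escape energy (Eqs. (14)-(15))."""
    E0 = rng.uniform(-1, 1)                              # random initial energy
    E2 = 1 - (t / T) * (t / T)                           # Eq. (14): nonlinear decay
    return 2 * E0 * E2 * np.cos(2 * k * np.pi * t / T)   # Eq. (15), k = 4
```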
2.2 Introduction of a Parabolic ShapeBased Update Mechanism
The tuna swarm optimization (TSO) algorithm^{[21]} is a metaheuristic that mimics the foraging strategies of tuna schools. The TSO algorithm has two foraging methods, spiral foraging and parabolic foraging; both have robust global search capability, giving the algorithm strong optimization-seeking ability and good convergence. In parabolic foraging, tuna explore the surrounding space along a parabolic route to search for food, as described by the following equations:
${X}_{i}^{t+1}={X}_{\mathrm{best}}^{t}+\mathrm{rand}\times \mathrm{\Delta}X+\mathrm{TF}\times {p}^{2}\times \mathrm{\Delta}X,\ \mathrm{rand}<0.5$  (16)
$\mathrm{\Delta}X={X}_{\mathrm{best}}^{t}-{X}_{i}^{t}$  (17)
${X}_{i}^{t+1}=\mathrm{TF}\times {p}^{2}\times \mathrm{\Delta}X,\ \mathrm{rand}\ge 0.5$  (18)
$p={\left(1-\frac{t}{T}\right)}^{\frac{t}{T}}$  (19)
where $\mathrm{TF}$ is a random number taking the value 1 or $-1$, $t$ is the current iteration number, and $T$ is the maximum number of iterations.
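The parabolic foraging update of Eqs. (16)-(19) can be sketched as follows (a minimal sketch; the function name is our own, and the second branch uses $\mathrm{\Delta}X$ exactly as written in Eq. (18) above):

```python
import numpy as np

def tso_parabolic(X_i, X_best, t, T, rng):
    """Parabolic foraging update of the tuna swarm algorithm (Eqs. (16)-(19))."""
    p = (1 - t / T) ** (t / T)          # Eq. (19): parabolic decay factor
    TF = rng.choice([-1, 1])            # random direction flag
    dX = X_best - X_i                   # Eq. (17)
    if rng.random() < 0.5:              # Eq. (16): move toward the best tuna
        return X_best + rng.random() * dX + TF * p**2 * dX
    return TF * p**2 * dX               # Eq. (18)
```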
In the classical Harris Hawks algorithm, the global exploration phase of Eq. (1) updates a hawk's position through communication with a randomly selected individual in the population, not with the globally optimal individual. Consequently, the new position is overly random and lacks clear guidance, which slows convergence, degrades the algorithm's overall convergence, and limits optimization accuracy. To address this, a parabolic-shape-based update mechanism is introduced into the exploration phase of HHO, replacing Eq. (1) with Eq. (20): the global optimal solution guides the search, which strengthens the convergence speed of the algorithm. The position update equations are as follows (with the same parameters as above):
$X(t+1)={X}_{\mathrm{rabbit}}(t)+r\times \mathrm{\Delta}t+\mathrm{TF}\times {p}^{2}\times \mathrm{\Delta}t,\ q\ge 0.5$  (20)
$\mathrm{\Delta}t={X}_{\mathrm{rabbit}}(t)-X(t)$  (21)
To ensure convergence, both the original and the improved algorithm concentrate on the local exploitation phase in the later iterations, which increases the risk of falling into local optima and reduces optimization accuracy. We therefore apply a greedy scheme built on the parabolic factor $p$ of Eq. (19). When a position is updated by Eq. (6) or Eq. (7), its fitness is compared with that of the position before the update; if it is better, the updated position is kept. Otherwise the algorithm may be trapped in a local extremum, so the alternative parabolic update $H$ is tried instead, and its fitness is likewise compared with that of the original position, keeping whichever is better. This strategy increases the chance of jumping out of local extrema and strengthens the algorithm's optimization-seeking ability; it replaces Eq. (6) with Eq. (24) and Eq. (7) with Eq. (26). The new position update equations are as follows (with the same parameters as above):
$K=({X}_{\mathrm{rabbit}}(t)-X(t))-E|J\times {X}_{\mathrm{rabbit}}(t)-X(t)|$  (22)
$H=\mathrm{TF}\times {p}^{2}\times X(t)$  (23)
$X(t+1)=\begin{cases}K, & f(K)<f(X(t))\\ H, & f(H)<f(X(t))\end{cases}$  (24)
$L={X}_{\mathrm{rabbit}}(t)-E|{X}_{\mathrm{rabbit}}(t)-X(t)|$  (25)
$X(t+1)=\begin{cases}L, & f(L)<f(X(t))\\ H, & f(H)<f(X(t))\end{cases}$  (26)
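The greedy besiege of Eqs. (22)-(26) can be sketched as follows (a minimal sketch; the function name and the fallback to the unchanged position when neither candidate improves are our own reading of the piecewise definitions):

```python
import numpy as np

def tdhho_besiege(x, X_rabbit, E, J, p, TF, f, soft):
    """Greedy soft/hard besiege of TDHHO (Eqs. (22)-(26)).

    soft=True uses Eq. (24) (replacing Eq. (6)); soft=False uses Eq. (26)
    (replacing Eq. (7)). f is the fitness function (minimized).
    """
    H = TF * p**2 * x                                        # Eq. (23)
    if soft:
        cand = (X_rabbit - x) - E * np.abs(J * X_rabbit - x)  # Eq. (22): K
    else:
        cand = X_rabbit - E * np.abs(X_rabbit - x)            # Eq. (25): L
    if f(cand) < f(x):                                        # greedy selection
        return cand
    return H if f(H) < f(x) else x
```

By construction the returned position is never worse than the current one, which is what keeps this replacement from harming convergence.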
2.3 Incorporating Differential Variation Strategies
The position update formulas of the Harris Hawks algorithm make heavy use of the global optimal solution to guide individuals, which favors rapid convergence. On simple problems the algorithm can quickly converge to the optimal value, but on complex problems this guidance makes it easy to fall into a local optimum, causing premature convergence and reducing optimization accuracy. To address this, after an individual's position has been updated by the normal formulas, we introduce the $\mathrm{DE}/\mathrm{rand}/2$ differential mutation strategy^{[22]}, which achieves mutation by differencing and scaling randomly selected individuals within the population. The mutated position is evaluated, and a greedy strategy then compares the new position with the old one, retaining the one with better fitness. The formulas are as follows:
${X}_{\mathrm{new}}={X}_{\mathrm{old}}+({X}_{r1}(t)-{X}_{r2}(t)+{X}_{r3}(t)-{X}_{r4}(t))$  (27)
$X(t+1)=\begin{cases}{X}_{\mathrm{new}}, & f({X}_{\mathrm{new}})<f({X}_{\mathrm{old}})\\ {X}_{\mathrm{old}}, & f({X}_{\mathrm{old}})\le f({X}_{\mathrm{new}})\end{cases}$  (28)
where ${X}_{\mathrm{old}}$ denotes the position of an individual after the normal execution of the HHO update, and ${X}_{r1}(t),{X}_{r2}(t),{X}_{r3}(t),{X}_{r4}(t)$ are random individuals in the population. The differential mutation strategy increases population diversity. In the early iterations the individuals differ widely, so the differences are large, which helps the algorithm jump out of local optima, avoids premature convergence, and improves the algorithm's optimization-seeking accuracy and global search ability. In the late iterations the individuals tend to be consistent and the differences shrink, preserving the algorithm's convergence performance.
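The mutation-plus-greedy-selection step of Eqs. (27)-(28) can be sketched as follows (a minimal sketch; the function name is our own, and the four donors are drawn without replacement, an implementation choice the paper does not specify):

```python
import numpy as np

def diff_mutation(X, f, rng):
    """Differential mutation with greedy selection (Eqs. (27)-(28)).

    Each individual is perturbed by the difference of four random
    population members; the better of old/new positions is kept.
    """
    N = X.shape[0]
    X_next = X.copy()
    for i in range(N):
        r1, r2, r3, r4 = rng.choice(N, size=4, replace=False)
        X_new = X[i] + (X[r1] - X[r2] + X[r3] - X[r4])   # Eq. (27)
        if f(X_new) < f(X[i]):                            # Eq. (28): greedy selection
            X_next[i] = X_new
    return X_next
```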
2.4 Algorithm Flow of TDHHO
Step 1: Initialize the basic parameters of the population, such as the population size, upper and lower boundaries, and other parameters.
Step 2: Calculate the escape energy E of the TDHHO algorithm according to Eq. (15).
Step 3: If $|E|>1$, update the position according to Eq. (20) or Eq. (2).
Step 4: If $|E|\ge 0.5$ and $k\ge 0.5$, update the individual's position according to Eq. (24). If $|E|\ge 0.5$ and $k<0.5$, update according to Eq. (10). If $|E|<0.5$ and $k\ge 0.5$, update according to Eq. (26). If $|E|<0.5$ and $k<0.5$, update according to Eq. (13).
Step 5: Update the position parameters of the individual using Eq. (28).
Step 6: Determine whether the maximum number of iterations is reached, and if yes, output the best position and terminate the algorithm; otherwise, return to Step 2.
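Steps 1-6 can be sketched end-to-end as follows. This is a simplified sketch, not the authors' implementation: greedy acceptance is applied uniformly (the paper's exploration step is not greedy), the swooping branches use a Gaussian stand-in for the Levy flight, and positions are clipped to the bounds:

```python
import numpy as np

def tdhho(f, lb, ub, dim, N=30, T=500, seed=0):
    """Compact sketch of the TDHHO main loop (Steps 1-6)."""
    rng = np.random.default_rng(seed)
    X = lb + (ub - lb) * rng.random((N, dim))            # Step 1: initialize
    fit = np.array([f(x) for x in X])
    for t in range(T):
        best = X[fit.argmin()].copy()
        E2 = 1 - (t / T) ** 2                            # Eq. (14)
        p = (1 - t / T) ** (t / T)                       # Eq. (19)
        for i in range(N):
            E = 2 * rng.uniform(-1, 1) * E2 * np.cos(8 * np.pi * t / T)  # Step 2
            k, q = rng.random(2)
            J = 2 * (1 - rng.random())
            TF = rng.choice([-1, 1])
            x = X[i]
            if abs(E) > 1:                               # Step 3: exploration
                if q >= 0.5:                             # Eq. (20)
                    y = best + rng.random() * (best - x) + TF * p**2 * (best - x)
                else:                                    # Eq. (2)
                    y = (best - X.mean(0)) - rng.random() * (lb + rng.random() * (ub - lb))
            else:                                        # Step 4: exploitation
                if k >= 0.5 and abs(E) >= 0.5:           # Eq. (22): K
                    y = (best - x) - E * np.abs(J * best - x)
                elif k >= 0.5:                           # Eq. (25): L
                    y = best - E * np.abs(best - x)
                elif abs(E) >= 0.5:                      # Eq. (8): swooping soft
                    y = best - E * np.abs(J * best - x)
                else:                                    # Eq. (11): swooping hard
                    y = best - E * np.abs(J * best - X.mean(0))
                y = np.clip(y, lb, ub)
                if f(y) >= fit[i]:
                    # fallback: H of Eq. (23) for besiege; Gaussian dive otherwise
                    y = TF * p**2 * x if k >= 0.5 else y + rng.standard_normal(dim)
            y = np.clip(y, lb, ub)
            if f(y) < fit[i]:                            # greedy acceptance (sketch)
                X[i], fit[i] = y, f(y)
        for i in range(N):                               # Step 5: Eqs. (27)-(28)
            r = rng.choice(N, 4, replace=False)
            y = np.clip(X[i] + X[r[0]] - X[r[1]] + X[r[2]] - X[r[3]], lb, ub)
            if f(y) < fit[i]:
                X[i], fit[i] = y, f(y)
    i_best = fit.argmin()                                # Step 6: output best
    return X[i_best], fit[i_best]
```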
3 Experimental Results and Analysis
3.1 Experimental Environment and Test Functions
The experimental environment of this paper is: Windows 10 64-bit, 16 GB RAM, Intel Core i7-10750H CPU, MATLAB R2018b.
To verify the effectiveness of the improved Harris Hawks algorithm, 16 benchmark test functions are selected, as shown in Table 1. Among them, F1-F7 are single-peak functions, which test the algorithm's convergence performance and local exploitation ability; F8-F13 are multi-peak functions characterized by multiple local extrema, which easily trap an algorithm in a local optimum and therefore test global exploration ability, the ability to jump out of local optima, and resistance to premature convergence; F14-F16 are fixed-dimensional functions that test whether the algorithm can converge to the theoretical optimum on simple problems. See Table 1 for details.
In addition, PSO, GWO, WOA, the equilibrium optimizer (EO), and the original HHO were selected for a comprehensive comparison. The population size of each algorithm is set to 30 and the number of iterations to 500. To reduce interference from the randomness of the algorithms, each algorithm is run 30 times independently on each test function, and the mean and standard deviation of the 30 results are computed: the mean evaluates the performance of the algorithm, and the standard deviation evaluates its stability.
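The independent-run protocol above can be sketched as follows (a minimal sketch; the function name and the seeding scheme are our own):

```python
import numpy as np

def evaluate(algorithm, f, runs=30, seed0=0):
    """Run the optimizer `runs` times independently and report
    mean (performance) and standard deviation (stability)."""
    results = np.array([algorithm(f, seed=seed0 + r) for r in range(runs)])
    return results.mean(), results.std()
```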
Table 1  Test functions
3.2 Experimental Results and Analysis of Low-Dimensional Test Functions
From the results in Table 2, the mean value of the TDHHO algorithm reaches the theoretical optimum on functions F1-F4 with a standard deviation of 0; compared with the other algorithms, TDHHO has strong convergence performance and converges quickly to the optimal value. On the remaining single-peak functions, although TDHHO does not reach the theoretical optimum, its mean and standard deviation are still substantially ahead of the other algorithms, indicating higher convergence accuracy and greater stability. On the multi-peak functions F8-F11, both TDHHO and HHO reach the theoretical optimum. In contrast, the multi-peak functions F12 and F13 have multiple, widely dispersed extrema and demand strong global exploration ability and resistance to premature convergence; the results indicate that TDHHO's global optimization-seeking ability and ability to jump out of local optima are more robust than those of the other algorithms, and premature convergence is avoided. On the fixed-dimensional functions F14-F16, only the mean value of TDHHO reaches the theoretical optimum, ahead of the other algorithms, and the standard deviations show that TDHHO is more stable than the other optimization algorithms.
In summary, the TDHHO algorithm converges to the theoretical optimum more reliably than the other algorithms.
Table 2  Test results of different algorithms (dim = 30)
3.3 Convergence Curve Analysis
Figure 3 shows the convergence curves of the single-peak functions F5 and F6, the multi-peak functions F12 and F13, and the fixed-dimensional functions F15 and F16 (population size 30, maximum iterations 500, dimension 30). The convergence curve of the TDHHO algorithm keeps decreasing over the whole iteration cycle, and comparison with the HHO algorithm and the other algorithms shows that TDHHO's convergence accuracy is greatly improved. For the fixed-dimensional functions, TDHHO's convergence curve is steeper than the others in the early iterations and quickly converges to the optimal value, indicating that TDHHO has stronger convergence performance and better optimization-seeking ability than the other algorithms. In summary, the improved algorithm not only speeds up convergence but also substantially improves convergence accuracy.
Fig.3 Partial function convergence curve 
3.4 Experimental Results and Analysis of HighDimensional Test Functions
The previous experiments show that the TDHHO algorithm performs well on low-dimensional test functions. To verify its applicability to high-dimensional test functions, the WOA, GWO, and HHO algorithms are selected as comparisons. The experimental parameters are set as follows: population size 30, maximum iterations 500; the 100- and 500-dimensional functions are each run 30 times independently, and the mean and standard deviation are taken to verify the performance and stability of the algorithms. As Table 3 shows, on the high-dimensional test functions F1-F4, the mean values of TDHHO converge to the theoretical optimum in both dimensions. On F5 and F6, TDHHO is several orders of magnitude ahead of the other algorithms in each dimension. On F7, TDHHO is ahead of WOA and GWO in both metrics but behind HHO. On the high-dimensional multi-peak function F8, TDHHO has the same mean as HHO and is ahead of the other algorithms, but HHO's standard deviation is better, indicating that HHO is more stable than TDHHO on F8. On the high-dimensional multi-peak functions F12 and F13, both the mean and standard deviation of TDHHO are significantly ahead of the other algorithms by several orders of magnitude, indicating that TDHHO retains excellent global optimization-seeking ability in high dimensions and a strong ability to jump out of local optima.
In summary, the TDHHO algorithm outperforms the other algorithms on most of the high-dimensional test functions, indicating that the improved strategies work well on high-dimensional optimization problems, are equally applicable in high-dimensional situations, and enable the algorithm to handle complex high-dimensional problems in practice.
Table 3  Experimental results of different algorithms on high-dimensional test functions
3.5 Comparison of Different Improved Harris Hawks Algorithms
To further verify the effectiveness of the improved algorithm, several improved Harris Hawks algorithms are selected, and their results on some of the test functions are compared with those of the TDHHO algorithm. The algorithm of Ref. [23] is denoted IHHO1, that of Ref. [24] GSHHO, that of Ref. [25] IHHO2, and that of Ref. [26] EGHHO. The parameters of all algorithms are kept the same (dimension 30, population size 30, maximum iterations 500); each algorithm is run 30 times independently, and the mean and standard deviation of the 30 results are taken.
From the data in Table 4, for the simple single-peak functions F1-F4, most of the improved algorithms reach the theoretical optimum, but for the complex single-peak functions F5 and F6, the convergence accuracy of TDHHO is substantially better than that of the other improved algorithms. For the multi-peak functions F12 and F13, TDHHO's convergence accuracy is several orders of magnitude better, showing better global search capability and ability to jump out of local extrema. For the fixed-dimensional functions F14-F16, TDHHO's convergence accuracy is not far ahead of the other algorithms, but its standard deviation is substantially ahead, indicating greater stability. In summary, the improvement strategy of the TDHHO algorithm retains a certain superiority over the algorithms in the other literature.
Table 4  Test-function results of different improved Harris Hawks algorithms
4 Application Examples
4.1 Application of Improved Harris Hawks Algorithm to Engineering Problems
To verify the effectiveness of the improved Harris Hawks algorithm in engineering applications, the design problem of an extension/compression spring in a classical engineering problem is selected, the improved Harris Hawks algorithm is applied to this problem, and the results are compared with other optimization algorithms to verify the effectiveness of the TDHHO algorithm.
The design of tension/compression springs is a classical engineering optimization problem. It aims to minimize the weight of a tension/compression spring subject to many constraints, such as minimum deflection, shear stress, surge frequency, and outer diameter limits. The problem has three variables: wire diameter ${x}_{1}$, mean coil diameter ${x}_{2}$, and number of active coils ${x}_{3}$.
$\min f(x)=(x_3+2)x_2 x_1^2$(29)
$g_1(x)=1-\frac{x_2^3 x_3}{71\,785 x_1^4}\le 0$(30)
$g_2(x)=\frac{4x_2^2-x_1 x_2}{12\,566(x_2 x_1^3-x_1^4)}+\frac{1}{5\,108 x_1^2}-1\le 0$(31)
$g_3(x)=1-\frac{140.45 x_1}{x_2^2 x_3}\le 0$(32)
$g_4(x)=\frac{x_1+x_2}{1.5}-1\le 0$(33)
$0.05\le x_1\le 2,\quad 0.25\le x_2\le 1.3,\quad 2\le x_3\le 15$
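As an illustration of how the constrained problem above can be set up numerically, the sketch below converts the constraints (30)-(33) into a static penalty added to the objective (29) and minimizes it with SciPy's differential evolution as a stand-in optimizer (TDHHO itself is not shown here); the penalty weight `1e5` is an assumption, not a value from the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

def spring_weight(x):
    # Objective (29): minimize the spring weight
    x1, x2, x3 = x
    return (x3 + 2) * x2 * x1**2

def constraints(x):
    # Constraints (30)-(33); each entry must be <= 0 when feasible
    x1, x2, x3 = x
    g1 = 1 - x2**3 * x3 / (71785 * x1**4)
    g2 = ((4 * x2**2 - x1 * x2) / (12566 * (x2 * x1**3 - x1**4))
          + 1 / (5108 * x1**2) - 1)
    g3 = 1 - 140.45 * x1 / (x2**2 * x3)
    g4 = (x1 + x2) / 1.5 - 1
    return np.array([g1, g2, g3, g4])

def penalized(x):
    # Static penalty: large quadratic cost for every violated constraint
    g = constraints(x)
    return spring_weight(x) + 1e5 * np.sum(np.maximum(g, 0.0) ** 2)

# Variable bounds for x1, x2, x3 as given above
bounds = [(0.05, 2.0), (0.25, 1.3), (2.0, 15.0)]
result = differential_evolution(penalized, bounds, seed=1,
                                tol=1e-10, maxiter=2000)
```

With this setup the optimizer settles on a feasible design whose weight is close to the best values reported in the literature for this benchmark (around 0.0127).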
The TDHHO algorithm, together with GWO, WOA, the Sparrow Search Algorithm (SSA), and HHO, is used to solve the tension/compression spring design problem. The experimental results are shown in Table 5: the TDHHO algorithm obtains a better design than the other algorithms, which shows that the TDHHO algorithm is feasible for practical application problems.
Table 5 Experimental results of different algorithms for the design of tension/compression springs
4.2 Application of TDHHO Algorithm for Coverage Optimization in Wireless Sensor Networks
To further verify the effectiveness of the TDHHO algorithm for practical engineering applications, the TDHHO algorithm is chosen to optimize the wireless sensor network coverage. With the continuous development of the Internet of Things (IoT) technology, wireless sensor networks play a crucial role in connecting people to the Internet. Nowadays, they have been widely used in smart homes, communications, smart agriculture, and other fields. To optimize the user experience and reduce the cost, the wireless sensor network coverage optimization problem has gained more and more attention. Many scholars have applied intelligent optimization algorithms to sensor coverage optimization in recent years, hoping to use fewer sensor nodes to cover a larger area.
The WSN coverage model takes a two-dimensional plane of length $M_1$ and width $M_2$ as the target region, in which a number of wireless sensor nodes are deployed randomly. The set of these nodes is $S=\{S_1, S_2, S_3, \dots, S_n\}$, and the sensing radius and communication radius of the nodes are $R_a$ and $R_b$, respectively.
$d(S_i,T_j)=\sqrt{(x_i-x_j)^2+(y_i-y_j)^2}$(34)
$P(S_i,T_j)=\begin{cases}1, & \text{if } d(S_i,T_j)\le R_a\\ 0, & \text{else}\end{cases}$(35)
$P(S,T_j)=1-\prod_{i=1}^{N}[1-P(S_i,T_j)]$(36)
$R_{\mathrm{COV}}=\frac{\sum_{j=1}^{M_1\times M_2}P(S,T_j)}{M_1\times M_2}$(37)
where $d(S_i,T_j)$ denotes the distance between node $S_i$ and target point $T_j$, $T_j$ denotes the position of the target point, $P(S_i,T_j)$ denotes the probability that node $S_i$ covers the target point, $P(S,T_j)$ denotes the joint perception probability of the node set, and $R_{\mathrm{COV}}$ denotes the total coverage rate.
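The coverage model of Eqs. (34)-(37) is straightforward to evaluate on a grid of target points. The sketch below assumes one target point per unit cell, placed at the cell centre (a common discretization, not spelled out in the paper); under the binary sensing model of Eq. (35), the joint probability of Eq. (36) collapses to a boolean OR over nodes.

```python
import numpy as np

def coverage_rate(sensors, Ra, M1, M2):
    """Coverage rate R_COV of Eq. (37) for a binary sensing model.

    sensors: (N, 2) array of node positions (x, y)
    Ra: sensing radius; M1, M2: region length and width (unit cells)
    """
    xs = np.arange(M1) + 0.5            # cell-centre x coordinates
    ys = np.arange(M2) + 0.5            # cell-centre y coordinates
    gx, gy = np.meshgrid(xs, ys)
    targets = np.stack([gx.ravel(), gy.ravel()], axis=1)  # points T_j
    # Distance of every target point to every node, Eq. (34)
    d = np.linalg.norm(targets[:, None, :] - sensors[None, :, :], axis=2)
    # Eq. (35)/(36): a point is covered if ANY node is within R_a
    covered = (d <= Ra).any(axis=1)
    return covered.mean()               # Eq. (37)
```

For example, a single node at the centre of a 10×10 region with a large radius covers every cell (rate 1.0), while a tiny radius covers none.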
The coverage rate of Eq. (37) is taken as the objective function, and TDHHO, HHO, SSA, and WOA are used to solve the problem. The results are shown in Table 6. It can be seen from the table that the TDHHO algorithm has the best optimization effect and the highest coverage rate. Figures 4-7 show the optimal coverage effect of TDHHO, HHO, WOA, and SSA, respectively. Compared with the other algorithms, the optimal coverage of TDHHO is more uniform and spans a wider area.
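To give a concrete sense of this optimization, the self-contained sketch below maximizes the grid coverage rate over the node coordinates, using SciPy's differential evolution as a stand-in for TDHHO (the paper's algorithm is not publicly packaged); the region size, node count, and sensing radius are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

M1 = M2 = 10   # region size (assumed)
N = 5          # number of sensor nodes (assumed)
Ra = 2.5       # sensing radius (assumed)

# Target points: one per unit cell, at the cell centre
xs = np.arange(M1) + 0.5
gx, gy = np.meshgrid(xs, np.arange(M2) + 0.5)
targets = np.stack([gx.ravel(), gy.ravel()], axis=1)

def neg_coverage(flat):
    # Decision vector is the flattened (x, y) coordinates of all nodes;
    # minimize the negative coverage rate of Eq. (37)
    nodes = flat.reshape(N, 2)
    d = np.linalg.norm(targets[:, None, :] - nodes[None, :, :], axis=2)
    return -(d <= Ra).any(axis=1).mean()

bounds = [(0.0, float(M1))] * (2 * N)   # nodes stay inside the region
res = differential_evolution(neg_coverage, bounds, seed=0, maxiter=200)
coverage = -res.fun                      # best coverage rate found
```

Any metaheuristic with a population of candidate node layouts, including the Harris Hawks variants compared in Table 6, can be plugged into this objective in the same way.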
Fig. 4 The optimal coverage effect of TDHHO
Fig. 5 The optimal coverage effect of HHO
Fig. 6 The optimal coverage effect of WOA
Fig. 7 The optimal coverage effect of SSA
Table 6 Optimization coverage of different algorithms
5 Conclusion
To address the problems that the convergence accuracy of the Harris Hawks algorithm is not high and that it easily falls into local optima, we propose a Harris Hawks algorithm that integrates the tuna swarm algorithm and a differential mutation strategy. Firstly, the escape energy E of the original algorithm is improved so that the algorithm still has the opportunity to execute the global exploration strategy after the middle of the iteration, balancing global and local search ability. Secondly, the parabolic foraging strategy of the tuna swarm algorithm is introduced to improve the convergence performance and optimization accuracy of the algorithm. Finally, the positions after each individual update are mutated by the differential mutation strategy; the fitness of the mutated position is compared with that of the position before mutation, and the position with the better fitness is kept. This increases the diversity of the population and improves the algorithm's ability to jump out of local optima and avoid premature convergence. Sixteen classical low-dimensional test functions are selected to test the improved Harris Hawks algorithm against five algorithms, including HHO. The experimental results show that the improved algorithm converges faster, has higher optimization accuracy, has a better ability to jump out of local optima, and is more stable. In addition, experiments on 13 high-dimensional test functions in 100 and 500 dimensions show that the improved algorithm still outperforms the other algorithms on most functions, indicating that it remains applicable to high-dimensional problems. To further verify the effect of the improved algorithm, several other improved Harris Hawks algorithms are selected for comparison, and the results show that the improvement strategy of the TDHHO algorithm has a certain superiority.
Future work will continue to improve the algorithm's performance and apply it to more practical optimization problems.