Wuhan Univ. J. Nat. Sci.
Volume 28, Number 1, February 2023
Page(s) 20 - 28
DOI https://doi.org/10.1051/wujns/2023281020
Published online 17 March 2023

© Wuhan University 2023

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

0 Introduction

Aerial robots will soon play a significant role in industrial inspection, accident warning, and national defense[1-3]. For such operations, flight modes that depend on human remote control can no longer meet the mission requirements under complex conditions. At present, autonomous navigation ability has become an important indicator of a robot's level of intelligence. Obtaining the aerial robot pose in real time is the core difficulty of autonomous robot navigation. Fully autonomous navigation requires aerial robots to have accurate pose estimation and robust environmental awareness. Because the aerial robot swings during flight, the state estimation results converge poorly during movement, which makes existing pose estimation algorithms unstable.

Compared with aerial robot pose estimators based on a single sensor, multi-sensor fusion pose estimation technologies[4-8] can make full use of different kinds of sensors to obtain more accurate and robust pose estimates. A stereo camera and an inertial measurement unit (IMU) can output the robot position with centimeter-level precision in the local coordinate system[9, 10], but the pose in the local frame drifts as the aerial robot moves. The global navigation satellite system (GNSS) has been widely used in various mobile robot navigation tasks to provide drift-free position information for agents[11,12]. However, GNSS-based navigation cannot be used indoors and is vulnerable to noise and multipath effects, resulting in only meter-level positioning accuracy. To leverage the complementary characteristics of different sensors, a pose estimation algorithm based on vision-IMU-GNSS information fusion can exploit the respective advantages of stereo cameras, accelerometers, gyroscopes, and navigation satellites to obtain accurate and drift-free aerial robot pose estimates. Unfortunately, a pose estimation strategy based on vision-IMU-GNSS sensor fusion faces the following problems. First, the output frequencies of the sensors differ (the camera, IMU, and GNSS receiver run at 30, 200, and 10 Hz, respectively), so fusing these raw measurements is an intractable problem[13,14]. Second, how can the pose estimator quickly return to a normal state when one of the sensors suddenly fails?

To deal with the aforementioned problems, we propose a robust and real-time visual-inertial-GNSS tightly coupled pose estimation (RRVPE) method, a probabilistic factor graph optimization-based pose estimation strategy for aerial robot navigation. The RRVPE system achieves real-time robot state estimation after compute unified device architecture (CUDA) acceleration on an airborne computer. The main novelties of RRVPE are as follows:

1) The RRVPE system leverages the Kanade-Lucas[15] algorithm to track Shi-Tomasi[16] features in each video frame, so corner description and matching between adjacent frames are not required in the tracking procedure. After NVIDIA CUDA acceleration, the robot pose estimator achieves camera-rate (30 Hz) performance on a small embedded platform.

2) The local bundle adjustment (BA) and factor graph solution process are bounded in a circumscribed container (a sliding window), which dramatically limits the number of variables in the nonlinear optimization procedure. Furthermore, the computational complexity of the RRVPE system is reduced by an additional marginalization strategy.

3) By making full use of the GNSS raw measurements, the intrinsic drift of the vision-IMU odometry is eliminated, and the yaw angle residual between the odometry frame and the world frame is updated without any offline calibration. The aerial robot pose estimator can run rapidly in unpredictable environments and achieves local smoothness and global consistency without visual loop closure detection.

1 System Overview

The aerial robot equipped with the RRVPE navigation system is shown in Fig. 1. The system tightly fuses sparse optical flow tracking and inertial measurements with GNSS raw data for precise and drift-free aerial robot pose estimation.

Fig. 1 The aerial robot equipped with the RRVPE navigation system

1. Intel RealSense D435i camera: responsible for building real-world 3D voxel map; 2. Intel RealSense T265 camera: responsible for providing binocular video stream and inertial measurement information; 3. U-Blox ZED-F9P receiver: responsible for receiving GNSS pseudorange, Doppler, ephemeris and time pulse information

The overall structure of the RRVPE system is illustrated in Fig. 2. First, the raw sensor data from the aerial robot are preprocessed, including visual feature extraction and tracking, IMU pre-integration, and GNSS signal filtering. Then, the vision, IMU, and GNSS cost functions are formulated, and the vision-IMU-GNSS information is jointly initialized to obtain all the initial values of the aerial robot pose estimator. Finally, the aerial robot pose solving process is constructed as a state estimation problem, and the corresponding probability factor graph model and marginalization strategy are designed. The aerial robot pose is solved by nonlinear optimization, yielding an accurate, robust, and drift-free robot pose.

Fig. 2 Main parallel threads of the RRVPE system

In the system initialization stage, the camera trajectory is solved by the structure from motion (SfM) algorithm[17-19], and the IMU raw measurements are pre-integrated[20, 21] to initialize the vision-IMU tightly coupled robot odometry. Then, the rough robot position in the world coordinate frame is solved by the single point positioning (SPP) algorithm. Using the visual-IMU odometry as prior information, the transformation matrix between the odometry coordinate frame and the world coordinate frame is solved by nonlinear optimization. Finally, the precise pose of the aerial robot in the global coordinate system is refined by probability factor graph optimization.

After the estimator initialization, constraints from all sensor measurements are tightly coupled to calculate the aerial robot states within a circumscribed container. If the GNSS broadcast is unavailable or cannot be fully initialized, the RRVPE system naturally degrades to visual-IMU odometry. To maintain the real-time performance of the estimation system, an additional marginalization scheme[6] is also applied after each optimization.

We define $(\cdot)^r$ as the robot coordinate system, $(\cdot)^c$ as the camera coordinate system, and $(\cdot)^o$ as the odometry frame, whose Z axis is aligned with the direction of gravity. The world coordinate system $(\cdot)^w$ is a semi-global frame whose X and Y axes point east and north, respectively, and whose Z axis is also gravity aligned. The earth-centered, earth-fixed (ECEF) frame $(\cdot)^e$ and the earth-centered inertial (ECI) frame $(\cdot)^E$ are global coordinate systems fixed with respect to the center of the earth; they differ in that the axes of the ECI frame do not rotate with the earth.

2 Aerial Robot Pose Estimator

2.1 Formulation

In this section, aerial robot pose estimation is formulated as a probabilistic factor graph optimization procedure, in which the sensor measurements constitute the various factors that constrain the aerial robot states. The factors in the probabilistic graph comprise visual factors, inertial factors, and GNSS factors, all of which are formulated in detail in this section.

We adopt a sliding window-based tightly coupled visual-inertial-GNSS pose estimator for robust and real-time aerial robot state estimation. The whole state $\chi$ inside the sliding window can be summarized as:

$$
\begin{cases}
\chi = [x_0, x_1, \ldots, x_n, \lambda_1, \lambda_2, \ldots, \lambda_m, \psi]^T \\
x_k = [o^w_{r_{t_k}}, v^w_{r_{t_k}}, p^w_{r_{t_k}}, b_{\omega_{t_k}}, b_{a_{t_k}}, \delta t, \delta t']^T, \quad k \in [0, n]
\end{cases} \tag{1}
$$

where $x_k$ is the aerial robot state at time $t_k$, when the $k$-th video frame is captured. It contains the orientation $o^w_{r_{t_k}}$, velocity $v^w_{r_{t_k}}$, position $p^w_{r_{t_k}}$, gyroscope bias $b_{\omega_{t_k}}$, and acceleration bias $b_{a_{t_k}}$ of the aerial robot in the odometry frame. $\delta t$ and $\delta t'$ are the clock bias and bias drifting rate of the GNSS receiver, respectively. $n$ is the window size and $m$ is the total number of visual features in the sliding window. $\lambda_l$ is the inverse depth of the $l$-th visual feature, and $\psi$ is the yaw bias between the odometry frame and the world frame.
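For concreteness, the state in Eq. (1) can be organized as a per-frame container plus window-level variables. The Python sketch below only illustrates this bookkeeping; the field names are hypothetical and are not taken from the RRVPE implementation.

```python
# Minimal sketch of the sliding-window state chi of Eq. (1).
# Field names are illustrative assumptions, not the authors' code.
from dataclasses import dataclass
import numpy as np

@dataclass
class FrameState:
    orientation: np.ndarray   # o^w_{r_tk}, unit quaternion
    velocity: np.ndarray      # v^w_{r_tk}, 3-vector
    position: np.ndarray      # p^w_{r_tk}, 3-vector
    gyro_bias: np.ndarray     # b_omega_tk, 3-vector
    accel_bias: np.ndarray    # b_a_tk, 3-vector
    clock_bias: float = 0.0   # GNSS receiver clock bias delta_t
    clock_drift: float = 0.0  # clock bias drifting rate delta_t'

@dataclass
class SlidingWindowState:
    frames: list              # n+1 FrameState objects (window size n)
    inv_depths: np.ndarray    # lambda_1 .. lambda_m, inverse feature depths
    yaw_offset: float = 0.0   # psi, yaw bias between odometry and world frame
```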

2.2 Visual Constraint

The visual factor constraint in the probabilistic graph is constructed from a sequence of sparse corner points. Considering the unstable vibration of the aerial robot, we select Shi-Tomasi[16] sparse feature points for Kanade-Lucas-Tomasi (KLT) optical flow tracking[15], as sketched below.
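As a rough CPU-only illustration of this step (not the paper's CUDA implementation), the sketch below uses OpenCV's Shi-Tomasi detector and pyramidal Lucas-Kanade tracker; the function name and the corner-replenishment handling are assumptions for the example.

```python
# Sketch: Shi-Tomasi corner detection + pyramidal KLT tracking with OpenCV.
import cv2
import numpy as np

def track_features(prev_img, curr_img, prev_pts, max_corners=120, min_dist=30):
    # Replenish corners when too few are being tracked, enforcing a minimum
    # pixel spacing so the feature distribution stays roughly uniform.
    if prev_pts is None or len(prev_pts) < max_corners:
        new_pts = cv2.goodFeaturesToTrack(prev_img, maxCorners=max_corners,
                                          qualityLevel=0.01, minDistance=min_dist)
        prev_pts = new_pts if prev_pts is None else np.vstack([prev_pts, new_pts])

    # Track corners into the current frame with pyramidal Lucas-Kanade flow;
    # no descriptor extraction or matching between frames is required.
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, curr_img,
                                                   prev_pts.astype(np.float32), None)
    good = status.ravel() == 1
    return prev_pts[good], curr_pts[good]
```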

For each input video frame, when the number of tracked feature points falls below 120, new corner points are extracted to maintain a sufficient number of tracked features. Meanwhile, a uniform feature point distribution is enforced by setting a minimum pixel spacing between neighboring corners. It is worth noting that the corner extraction and KLT optical flow tracking procedures achieve camera-rate performance on the NVIDIA Jetson Xavier NX board after being accelerated with CUDA. Assume the homogeneous coordinates of the image feature point $l$ in the world coordinate frame are:

$$\tilde{p}^w_l = \left[\frac{X_l}{Z_l},\ \frac{Y_l}{Z_l},\ 1\right]^T \tag{2}$$

Then the homogeneous coordinates of feature point l in the pixel plane of video frame i can be expressed as:

$$\tilde{P}^{c_i}_l = \left[u^i_l,\ v^i_l,\ 1\right]^T \tag{3}$$

where u and v are coordinate values on the pixel plane. The projection model of the airborne camera can be expressed as:

$$\tilde{P}^{c_i}_l = K T^c_r T^{r_i}_w \tilde{p}^w_l + n_c \tag{4}$$

where $T$ denotes a transformation matrix, $n_c$ is the camera imaging noise, and $K$ is the camera intrinsic parameter matrix:

$$K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \tag{5}$$

where $f$ and $c$ represent the scaling and translation of the camera projection, respectively. The elements of the intrinsic parameter matrix can be obtained by camera calibration, and the reprojection model of feature point $l$ from video frame $i$ to video frame $j$ can be formulated as:

$$\hat{\tilde{P}}^{c_j}_l = K T^c_r T^{r_j}_w \left[ T^w_{r_i} T^r_c K^{-1} \left( Z^{c_i}_l \tilde{P}^{c_i}_l \right) \right] \tag{6}$$

with

$$Z^{c_i}_l = \lambda^{c_i}_l \frac{f_x f_y}{\sqrt{f_x^2 f_y^2 + (u^i_l - c_x)^2 f_y^2 + (v^i_l - c_y)^2 f_x^2}} \tag{7}$$

where $\lambda^{c_i}_l$ represents the inverse depth of feature point $l$ relative to the airborne camera $c_i$.

Then the visual factor constraint can be expressed as the deviation between the observed position $\tilde{P}^{c_j}_l$ of image feature point $l$ in video frame $j$ and the reprojected position $\hat{\tilde{P}}^{c_j}_l$:

$$E_V(\hat{Z}^{c_j}_l, \chi_V) = \tilde{P}^{c_j}_l - \hat{\tilde{P}}^{c_j}_l \tag{8}$$

where $\chi_V$ represents the sub-vector of the aerial robot state vector related to visual information.
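To make the projection and reprojection models of Eqs. (4)-(8) concrete, the numpy sketch below evaluates the visual residual for one feature. It assumes 4×4 homogeneous transformation matrices and a 3×3 intrinsic matrix $K$; variable names are illustrative, not the authors' code.

```python
# Sketch: visual reprojection residual of Eq. (8) for a single feature.
import numpy as np

def project(K, T_c_r, T_r_w, p_w):
    """Project a 3-D world point p_w (3,) into pixel homogeneous coordinates."""
    p_h = np.append(p_w, 1.0)            # homogeneous world point
    p_c = (T_c_r @ T_r_w @ p_h)[:3]      # point expressed in the camera frame
    return K @ (p_c / p_c[2])            # normalize by depth, apply intrinsics

def visual_residual(K, T_c_r, T_ri_w, T_rj_w, uv_i, depth_i, uv_j):
    """Back-project pixel uv_i with its depth, re-project into frame j, compare to uv_j."""
    # Back-projection: pixel -> camera-frame point in frame i (inner bracket of Eq. (6))
    p_ci = depth_i * (np.linalg.inv(K) @ np.array([uv_i[0], uv_i[1], 1.0]))
    # Camera i -> world
    p_w = (np.linalg.inv(T_ri_w) @ np.linalg.inv(T_c_r) @ np.append(p_ci, 1.0))[:3]
    # World -> pixel plane of frame j, then residual of Eq. (8)
    uv_j_hat = project(K, T_c_r, T_rj_w, p_w)
    return np.array([uv_j[0], uv_j[1], 1.0]) - uv_j_hat
```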

2.3 Inertial Measurements Constraint

In the world coordinate frame, the aerial robot's pose and velocity can be obtained from the raw data of the inertial measurement unit, which are measured in the aerial robot body frame. The IMU raw data consist of two parts: the gyroscope measurement $\hat{\omega}_{r_t}$ and the accelerometer measurement $\hat{a}_{r_t}$, which are affected by the gyroscope bias $b_\omega$ and the accelerometer bias $b_a$, respectively. The raw measurements of the gyroscope and accelerometer can be modeled as follows:

$$
\begin{cases}
\hat{\omega}_{r_t} = \omega_{r_t} + b_{\omega_{r_t}} + n_{\omega_{r_t}} \\
\hat{a}_{r_t} = a_{r_t} + b_{a_{r_t}} + n_{a_{r_t}} + R^{r_t}_w g^w
\end{cases} \tag{9}
$$

where $\hat{\omega}_{r_t}$ and $\hat{a}_{r_t}$ are the gyroscope and accelerometer measurements at time $t$ expressed in the current body coordinate system, $b_\omega$ and $b_a$ are the gyroscope and accelerometer biases, $n_\omega$ and $n_a$ are the gyroscope and accelerometer noises, and $g^w$ is the gravitational acceleration. The gyroscope and accelerometer noises are Gaussian white noise; the gyroscope and accelerometer biases follow Brownian motion, and their derivatives obey a Gaussian distribution.

Assuming that two consecutive video frames are captured at times $t_k$ and $t_{k+1}$, the orientation ($o$), velocity ($v$), and position ($p$) of the aerial robot at time $t_{k+1}$ in the local world coordinate system can be expressed as:

$$
\begin{cases}
o^w_{t_{k+1}} = o^w_{t_k} \otimes \displaystyle\int_{t \in [t_k, t_{k+1}]} \frac{1}{2}\, \Phi\!\left(\hat{\omega}_{r_t} - b_{\omega_{r_t}} - n_{\omega_{r_t}}\right) o^{r_{t_k}}_{r_t}\, \mathrm{d}t \\
v^w_{t_{k+1}} = v^w_{t_k} + \displaystyle\int_{t \in [t_k, t_{k+1}]} \left[ R^w_{r_t}\!\left(\hat{a}_{r_t} - b_{a_{r_t}} - n_{a_{r_t}}\right) - g^w \right] \mathrm{d}t \\
p^w_{t_{k+1}} = p^w_{t_k} + (t_{k+1} - t_k)\, v^w_{t_k} + \displaystyle\iint_{t \in [t_k, t_{k+1}]} \left[ R^w_{r_t}\!\left(\hat{a}_{r_t} - b_{a_{r_t}} - n_{a_{r_t}}\right) - g^w \right] \mathrm{d}t^2
\end{cases} \tag{10}
$$

where

$$
\Phi(\omega) = \begin{bmatrix} -\lfloor\omega\rfloor_\times & \omega \\ -\omega^T & 0 \end{bmatrix}, \qquad
\lfloor\omega\rfloor_\times = \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix} \tag{11}
$$

In the above formulas, $\hat{\omega}$ and $\hat{a}$ are the measured values from the gyroscope and accelerometer, and the symbol $\otimes$ denotes quaternion multiplication.
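A minimal numerical sketch of Eq. (11) and one Euler integration step of the quaternion kinematics is given below, assuming quaternions stored in vector-first order [x, y, z, w]; it is an illustration, not the RRVPE implementation.

```python
# Sketch: the Phi matrix of Eq. (11) and one Euler step of q_dot = 0.5 * Phi(omega) * q.
import numpy as np

def skew(w):
    # Cross-product (skew-symmetric) matrix of a 3-vector.
    return np.array([[0.0,  -w[2],  w[1]],
                     [w[2],  0.0,  -w[0]],
                     [-w[1], w[0],  0.0]])

def phi(w):
    # Eq. (11), acting on a quaternion stored as [x, y, z, w] (vector part first).
    m = np.zeros((4, 4))
    m[:3, :3] = -skew(w)
    m[:3, 3] = w
    m[3, :3] = -w
    return m

def integrate_orientation(q, w, dt):
    # One Euler step of the quaternion kinematics, followed by renormalization.
    q = q + 0.5 * (phi(w) @ q) * dt
    return q / np.linalg.norm(q)
```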

If the reference coordinate system is converted from the local world coordinate system (w) to the robot coordinate system (r), the above formula can be rewritten as:

$$
\begin{cases}
o^{r_{t_k}}_w \otimes o^w_{t_{k+1}} = \alpha^{r_{t_k}}_{r_{t_{k+1}}} \\
R^{r_{t_k}}_w v^w_{t_{k+1}} = R^{r_{t_k}}_w \left[ v^w_{t_k} - (t_{k+1} - t_k)\, g^w \right] + \beta^{r_{t_k}}_{r_{t_{k+1}}} \\
R^{r_{t_k}}_w p^w_{t_{k+1}} = R^{r_{t_k}}_w \left[ p^w_{t_k} + (t_{k+1} - t_k)\, v^w_{t_k} - \dfrac{1}{2} g^w (t_{k+1} - t_k)^2 \right] + \gamma^{r_{t_k}}_{r_{t_{k+1}}}
\end{cases} \tag{12}
$$

where the IMU pre-integration terms can be expressed as:

$$
\begin{cases}
\alpha^{r_{t_k}}_{r_{t_{k+1}}} = \displaystyle\int_{t \in [t_k, t_{k+1}]} \frac{1}{2}\, \Phi\!\left(\hat{\omega}_{r_t} - b_{\omega_{r_t}} - n_{\omega_{r_t}}\right) \alpha^{r_{t_k}}_{r_t}\, \mathrm{d}t \\
\beta^{r_{t_k}}_{r_{t_{k+1}}} = \displaystyle\int_{t \in [t_k, t_{k+1}]} R^{r_{t_k}}_{r_t}\!\left(\hat{a}_{r_t} - b_{a_{r_t}} - n_{a_{r_t}}\right) \mathrm{d}t \\
\gamma^{r_{t_k}}_{r_{t_{k+1}}} = \displaystyle\iint_{t \in [t_k, t_{k+1}]} R^{r_{t_k}}_{r_t}\!\left(\hat{a}_{r_t} - b_{a_{r_t}} - n_{a_{r_t}}\right) \mathrm{d}t^2
\end{cases} \tag{13}
$$

Then the first-order Jacobian approximation of the IMU pre-integration term can be expressed by the following formula:

$$
\begin{cases}
\alpha^{r_{t_k}}_{r_{t_{k+1}}} \approx \hat{\alpha}^{r_{t_k}}_{r_{t_{k+1}}} \otimes \begin{bmatrix} 1 \\ \frac{1}{2} J^\alpha_{b_\omega} \Delta b_{\omega_{t_k}} \end{bmatrix} \\
\beta^{r_{t_k}}_{r_{t_{k+1}}} \approx \hat{\beta}^{r_{t_k}}_{r_{t_{k+1}}} + J^\beta_{b_a} \Delta b_{a_{t_k}} + J^\beta_{b_\omega} \Delta b_{\omega_{t_k}} \\
\gamma^{r_{t_k}}_{r_{t_{k+1}}} \approx \hat{\gamma}^{r_{t_k}}_{r_{t_{k+1}}} + J^\gamma_{b_a} \Delta b_{a_{t_k}} + J^\gamma_{b_\omega} \Delta b_{\omega_{t_k}}
\end{cases} \tag{14}
$$

Here, each $J$ denotes the corresponding sub-block of the Jacobian matrix. When the gyroscope or accelerometer bias changes, the first-order approximation above can be used to correct the IMU pre-integration terms without re-integration.
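The sketch below illustrates Eqs. (13) and (14) with a simple Euler accumulation of the $\beta$ and $\gamma$ terms and a first-order accelerometer-bias correction. The bias Jacobians are assumed to be propagated elsewhere; their computation is omitted, and all names are illustrative.

```python
# Sketch: Euler accumulation of the beta/gamma pre-integration terms of Eq. (13)
# and first-order bias correction following Eq. (14).
import numpy as np

def preintegrate(accel_samples, rot_samples, accel_bias, dt):
    """accel_samples: raw accelerometer readings a_hat in the body frame;
    rot_samples: rotation matrices R^{r_tk}_{r_t} (body at t -> body at t_k)."""
    beta = np.zeros(3)   # velocity pre-integration term
    gamma = np.zeros(3)  # position pre-integration term
    for a_hat, R in zip(accel_samples, rot_samples):
        a = R @ (a_hat - accel_bias)            # bias-compensated, rotated to frame r_tk
        gamma += beta * dt + 0.5 * a * dt * dt  # position term integrates velocity term
        beta += a * dt
    return beta, gamma

def correct_for_bias_change(beta_hat, gamma_hat, J_beta_ba, J_gamma_ba, delta_ba):
    # First-order correction of Eq. (14) for an accelerometer-bias update delta_ba;
    # the gyroscope-bias terms are handled analogously.
    return beta_hat + J_beta_ba @ delta_ba, gamma_hat + J_gamma_ba @ delta_ba
```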

The gyroscope factor constraint is constructed as a rotation residual based on the quaternion product, while the accelerometer factor constraints are constructed as a velocity pre-integration residual and a translation pre-integration residual, respectively. The gyroscope and accelerometer bias factor terms are obtained from the bias differences between two consecutive video frames. The IMU factor constraint can then be constructed as follows:

$$
E_I(\hat{Z}^{r_{t_k}}_{r_{t_{k+1}}}, \chi_I) =
\left[ \alpha^{r_{t_k}}_{r_{t_{k+1}}} \otimes \left(\hat{\alpha}^{r_{t_k}}_{r_{t_{k+1}}}\right)^{-1},\;
\beta^{r_{t_k}}_{r_{t_{k+1}}} - \hat{\beta}^{r_{t_k}}_{r_{t_{k+1}}},\;
\gamma^{r_{t_k}}_{r_{t_{k+1}}} - \hat{\gamma}^{r_{t_k}}_{r_{t_{k+1}}},\;
\delta b_\omega,\; \delta b_a \right]^T
=
\begin{bmatrix}
2\left[ o^{r_{t_k}}_w \otimes o^w_{t_{k+1}} \otimes \left(\hat{\alpha}^{r_{t_k}}_{r_{t_{k+1}}}\right)^{-1} \right]_{\mathrm{imag}} \\
R^{r_{t_k}}_w \left[ v^w_{t_{k+1}} - v^w_{t_k} + (t_{k+1} - t_k)\, g^w \right] - \hat{\beta}^{r_{t_k}}_{r_{t_{k+1}}} \\
R^{r_{t_k}}_w \left[ p^w_{t_{k+1}} - p^w_{t_k} - (t_{k+1} - t_k)\, v^w_{t_k} + \frac{1}{2} g^w (t_{k+1} - t_k)^2 \right] - \hat{\gamma}^{r_{t_k}}_{r_{t_{k+1}}} \\
b_{\omega_{r_{t_{k+1}}}} - b_{\omega_{r_{t_k}}} \\
b_{a_{r_{t_{k+1}}}} - b_{a_{r_{t_k}}}
\end{bmatrix} \tag{15}
$$

where $\chi_I$ represents the sub-vector of the aerial robot state vector related to the IMU.
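Assembling the residual of Eq. (15) can be sketched as follows, again with vector-first quaternions and with R_world_to_body_k denoting $R^{r_{t_k}}_w$; this is an illustration under those conventions, not the actual estimator code.

```python
# Sketch: assembling the IMU factor residual of Eq. (15).
import numpy as np

def quat_mul(q, p):
    # Hamilton product for quaternions stored as [x, y, z, w].
    qv, qw = q[:3], q[3]
    pv, pw = p[:3], p[3]
    v = qw * pv + pw * qv + np.cross(qv, pv)
    return np.concatenate([v, [qw * pw - qv @ pv]])

def quat_inv(q):
    # Inverse of a unit quaternion (its conjugate).
    return np.array([-q[0], -q[1], -q[2], q[3]])

def imu_residual(q_world_to_body_k, q_body_k1, R_world_to_body_k,
                 v_k, v_k1, p_k, p_k1, alpha_hat, beta_hat, gamma_hat,
                 bg_k, bg_k1, ba_k, ba_k1, g_w, dt):
    # Rotation residual: 2 * imaginary part of o^{r_tk}_w (x) o^w_{tk+1} (x) alpha_hat^-1
    dq = quat_mul(quat_mul(q_world_to_body_k, q_body_k1), quat_inv(alpha_hat))
    r_rot = 2.0 * dq[:3]
    # Velocity and position pre-integration residuals
    r_vel = R_world_to_body_k @ (v_k1 - v_k + dt * g_w) - beta_hat
    r_pos = R_world_to_body_k @ (p_k1 - p_k - dt * v_k + 0.5 * g_w * dt**2) - gamma_hat
    # Bias random-walk residuals between consecutive frames
    return np.concatenate([r_rot, r_vel, r_pos, bg_k1 - bg_k, ba_k1 - ba_k])
```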

2.4 GNSS Constraint

Currently, there are four complete and independently operated GNSS constellations, namely BeiDou, GPS, Galileo, and GLONASS. The navigation satellites in each GNSS constellation ceaselessly broadcast modulated carrier signals, so a ground receiver can distinguish the navigation satellites and demodulate the original messages. The GNSS factor constraint in the probability factor graph model is composed of a pseudorange factor, a Doppler frequency shift factor, and a receiver clock offset factor. The pseudorange measurement model between the receiver and a navigation satellite can be expressed as:

$$\hat{P}^s_r = \left\| p^E_r - p^E_s \right\| + c\left(\delta t_r + \delta t_s + \Delta t_{tro} + \Delta t_{ion} + \Delta t_{mul}\right) + n_{pr} \tag{16}$$

with

$$
\begin{cases}
p^E_{r_{t_k}} = R(\omega_{earth} t^s_r)\, p^e_{r_{t_k}} \\
p^E_{s_i} = R(\omega_{earth} t^s_r)\, p^e_{s_i}
\end{cases} \tag{17}
$$

Here, $p^E_r$ and $p^E_s$ are the positions of the ground receiver and the navigation satellite in the earth-centered inertial (ECI) coordinate system, respectively; $\hat{P}^s_r$ is the GNSS pseudorange measurement; $c$ is the propagation speed of light in vacuum; $\delta t_r$ and $\delta t_s$ are the clock offsets of the receiver and satellite, respectively; $\Delta t_{tro}$ and $\Delta t_{ion}$ are the tropospheric and ionospheric delays, respectively; $\Delta t_{mul}$ is the delay caused by the multipath effect; $n_{pr}$ is the pseudorange measurement noise; $\omega_{earth}$ is the earth's rotation rate; and $t^s_r$ is the signal propagation time from the satellite to the receiver.

Then the GNSS pseudorange factor constraint can be constructed as the residual between the true pseudorange and the receiver measured pseudorange:

$$E_{pr}(\hat{Z}^{s_i}_{r_{t_k}}, \chi_{pr}) = \left\| p^E_{r_{t_k}} - p^E_{s_i} \right\| + c\left(\delta t_{r_{t_k}} + \delta t_{s_i} + \Delta t_{tro} + \Delta t_{ion} + \Delta t_{mul}\right) - \hat{P}^{s_i}_{r_{t_k}} \tag{18}$$

where $\chi_{pr}$ represents the sub-vector of the aerial robot state vector related to the GNSS pseudorange.
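The earth-rotation compensation of Eq. (17) and the pseudorange residual of Eq. (18) might be sketched as below; the constants and the sign conventions for the clock and atmospheric terms follow the equations above and are otherwise illustrative.

```python
# Sketch: ECEF -> ECI correction of Eq. (17) and the pseudorange residual of Eq. (18).
# All delay terms are given in seconds.
import numpy as np

C = 299792458.0                # speed of light (m/s)
OMEGA_EARTH = 7.2921151467e-5  # earth rotation rate (rad/s)

def ecef_to_eci(p_ecef, t_flight):
    # Rotate an ECEF position about the Z axis by the earth rotation accumulated
    # during the signal flight time t_flight.
    ang = OMEGA_EARTH * t_flight
    R = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                  [np.sin(ang),  np.cos(ang), 0.0],
                  [0.0,          0.0,         1.0]])
    return R @ p_ecef

def pseudorange_residual(p_recv_ecef, p_sat_ecef, t_flight,
                         dt_recv, dt_sat, d_tro, d_ion, d_mul, pr_meas):
    p_r = ecef_to_eci(p_recv_ecef, t_flight)
    p_s = ecef_to_eci(p_sat_ecef, t_flight)
    pr_model = np.linalg.norm(p_r - p_s) + C * (dt_recv + dt_sat + d_tro + d_ion + d_mul)
    return pr_model - pr_meas
```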

Besides the pseudorange, the Doppler frequency shift is also important navigation information in the GNSS modulated signal. The Doppler frequency shift between the GNSS receiver and a navigation satellite can be modeled as:

$$\delta\hat{f}^s_r = -\frac{1}{\lambda} \left[ I^s_r \cdot \left( v^E_r - v^E_s \right) + c\left( \delta t'_r + \delta t'_s \right) \right] + n_{dp} \tag{19}$$

with

$$
\begin{cases}
v^E_{r_{t_k}} = R(\omega_{earth} t^s_r)\, v^e_{r_{t_k}} \\
v^E_{s_i} = R(\omega_{earth} t^s_r)\, v^e_{s_i}
\end{cases} \tag{20}
$$

where $\lambda$ is the carrier wavelength, $I^s_r$ is the direction vector between the satellite and the receiver, $v^E_{r_{t_k}}$ and $v^E_{s_i}$ are the velocities of the receiver and the satellite, respectively, and $\delta t'_{r_{t_k}}$ and $\delta t'_{s_i}$ are the clock offset drifting rates of the receiver and the satellite, respectively.

Then the GNSS Doppler shift factor constraint can be constructed as the residual between the true carrier Doppler shift and the Doppler shift measurement:

$$E_{dp}(\hat{z}^{s_i}_{r_{t_k}}, \chi_{dp}) = -\frac{1}{\lambda} \left[ I^{s_i}_{r_{t_k}} \cdot \left( v^E_{r_{t_k}} - v^E_{s_i} \right) + c\left( \delta t'_{r_{t_k}} + \delta t'_{s_i} \right) \right] - \delta\hat{f}^{s_i}_{r_{t_k}} \tag{21}$$

where $\chi_{dp}$ represents the sub-vector of the agent state vector $\chi$ related to the GNSS Doppler frequency shift, and $\delta\hat{f}^{s_i}_{r_{t_k}}$ is the Doppler frequency shift measurement.
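A corresponding sketch of the Doppler residual in Eq. (21), with the line-of-sight vector computed from the ECI positions, could look as follows (illustrative only).

```python
# Sketch: Doppler frequency-shift residual of Eq. (21).
import numpy as np

def doppler_residual(wavelength, p_recv_eci, p_sat_eci, v_recv_eci, v_sat_eci,
                     dt_recv_drift, dt_sat_drift, doppler_meas, c=299792458.0):
    # Unit line-of-sight vector between the satellite and the receiver.
    los = p_recv_eci - p_sat_eci
    los = los / np.linalg.norm(los)
    doppler_model = -(1.0 / wavelength) * (
        los @ (v_recv_eci - v_sat_eci) + c * (dt_recv_drift + dt_sat_drift))
    return doppler_model - doppler_meas
```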

Now, the GNSS receiver clock offset error between two consecutive epochs $t_{k-1}$ and $t_k$ is constructed as follows:

$$E_\tau(\hat{Z}^k_{k-1}, \chi_\tau) = \delta t_{r_{t_k}} - \delta t_{r_{t_{k-1}}} - (t_k - t_{k-1})\, \delta t'_{r_{t_{k-1}}} \tag{22}$$

By combining the pseudorange factor $E_{pr}$, the Doppler frequency shift factor $E_{dp}$, and the receiver clock offset factor $E_\tau$, the GNSS factor constraint term in the aerial robot probability factor graph model is formed.
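The receiver clock factor of Eq. (22) is a simple constant-drift residual; a short sketch, with names chosen only for illustration:

```python
# Sketch: receiver clock-offset factor of Eq. (22), a constant-drift model
# between consecutive epochs t_{k-1} and t_k.
def clock_residual(dt_k, dt_km1, drift_km1, t_k, t_km1):
    return dt_k - dt_km1 - (t_k - t_km1) * drift_km1
```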

2.5 Tightly Coupled Pose Estimation

Considering the aerial robot pose solving process as a state estimation problem, the optimal state of the aerial robot is the maximum a posteriori estimate of the robot state vector. Assuming that the measurements of the airborne camera, IMU, and GNSS receiver are mutually independent and that the measurement noise follows a zero-mean Gaussian distribution, the maximum a posteriori estimation problem is equivalent to minimizing the sum of the squared errors, so the solution of the aerial robot state vector $\chi$ can be expressed as:

$$\chi = \arg\max_\chi P(\chi \mid z) = \arg\min_\chi \left( \left\| e_p - H_p \chi \right\|^2 + \sum_{k=1}^{n} \left\| E(z_k, \chi) \right\|^2 \right) \tag{23}$$

where $z$ denotes the aerial robot linear observation model, $e_p$ represents the prior error, $H_p$ is the prior pose information matrix obtained from the airborne camera, $n$ is the number of robot state vectors in the sliding window, and $E(\cdot)$ collects the measurement error factors of all sensors.
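Conceptually, Eq. (23) is a nonlinear least-squares problem over stacked factor residuals. The sketch below shows how such a problem could be posed with scipy.optimize.least_squares; the actual RRVPE system solves the factor graph with marginalization inside the sliding window, so this is only a structural illustration.

```python
# Sketch: minimizing stacked, whitened factor residuals in the spirit of Eq. (23).
import numpy as np
from scipy.optimize import least_squares

def stacked_residuals(x, factors):
    # Each factor is a callable returning its residual vector for the state x;
    # the prior term can be included as one more factor.
    return np.concatenate([f(x) for f in factors])

def solve_window(x0, factors):
    # Levenberg-Marquardt / trust-region minimization of the summed squared errors.
    result = least_squares(stacked_residuals, x0, args=(factors,), method="trf")
    return result.x
```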

Finally, the complete robot pose is obtained by solving the aerial robot state vector $\chi$ via probability factor graph optimization.

3 Experiments

3.1 Implementation for Aerial Robot Navigation

We chose the NVIDIA Jetson Xavier NX board as the companion computer for aerial robot autonomous navigation. The Intel RealSense T265 binocular camera provides the visual and inertial raw measurements for aerial robot pose estimation, while the Intel RealSense D435i RGB-D camera provides the 3D point cloud map. The U-Blox ZED-F9P is used as the GNSS receiver module, continuously providing GNSS pseudorange, Doppler frequency shift, and coordinated universal time (UTC) pulse signals to the aerial robot pose estimator. A carbon fiber quadrotor unmanned aerial vehicle (UAV) with a 410 mm wheelbase serves as the carrier of the companion computer and sensors. A Pixhawk 4 is chosen as the flight control autopilot, and PX4 is employed as the flight control firmware. Both onboard cameras and the GNSS receiver are connected to the companion computer via USB. The ground station is connected to the Pixhawk 4 autopilot and the companion computer through 2.4 GHz WiFi and Ethernet, respectively. The detailed scheme is shown in Fig. 3.

Fig. 3 The aerial robot implementation scheme

3.2 Simulation in Public Dataset

The EuRoC datasets[22] are collected from a binocular camera (Aptina MT9V034) and a synchronized inertial measurement unit (Analog Devices ADIS16448) carried by a micro aerial robot. The resolution of the binocular camera is 752×480, and it uses a global shutter with a 20 Hz output frequency. The EuRoC datasets contain 11 sequences, which cover different lighting conditions and environments. We compare the proposed RRVPE with OKVIS[4] and VINS-Fusion[16] on the EuRoC datasets. OKVIS is another nonlinear optimization-based visual-inertial odometry, and VINS-Fusion is a state-of-the-art tightly coupled agent state estimator based on KLT sparse optical flow tracking.

All methods are compared on an NVIDIA Jetson Xavier NX embedded device, shown in Fig. 4. The NVIDIA Jetson series differs from other onboard computers in its GPU module with 384 CUDA cores, which allows the RRVPE system to execute in real time with CUDA parallel acceleration. The root-mean-square error (RMSE) of the absolute trajectory error (ATE) is compared in Table 1. Figure 5 shows the consistency of the system in terms of the absolute pose error (APE) over time on sequence MH01. RRVPE inevitably accumulates some error over time, which is an inherent characteristic of all vision-based robot state estimators. Fortunately, owing to the local bundle adjustment, the accumulated error of the RRVPE system always remains within a reasonable range. The experimental results show that, on the NVIDIA Jetson Xavier NX embedded companion computer, the RRVPE system achieves favorable accuracy compared with other state-of-the-art agent state estimators and runs in real time.
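For reference, the ATE RMSE reported in Table 1 can be computed as in the short sketch below, assuming the estimated and ground-truth trajectories are already time-associated and aligned.

```python
# Sketch: RMSE of the absolute trajectory error (ATE) over aligned trajectories.
import numpy as np

def ate_rmse(est_positions, gt_positions):
    # est_positions, gt_positions: (N, 3) arrays of associated, aligned positions.
    err = np.linalg.norm(est_positions - gt_positions, axis=1)
    return np.sqrt(np.mean(err ** 2))
```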

Fig. 4 NVIDIA Jetson Xavier NX

Fig. 5 The change of APE over time in sequence MH01

Table 1 Performance comparison in the EuRoC datasets on RMSE (unit: m)

3.3 Simulation for Aerial Robot Navigation

Due to the instability of the open-source flight control algorithm, a simulation test is needed before real-world flight, which effectively avoids aerial robot crashes caused by program errors. We carried out the virtual experiment for aerial robot autonomous navigation in the Gazebo simulator, as shown in Fig. 6. After taking off, the aerial robot leverages a virtual plug-in stereo camera and GNSS raw signals to obtain its spatial position. Meanwhile, a 3D voxel map computed from a virtual plug-in RGB-D camera is constructed to capture the transformation between the aerial robot and neighbouring obstructions. When the flight destination is entered manually, the trajectory planner generates a path for the aerial robot and sends the desired velocity to the flight controller; the robot then gradually approaches the destination while keeping a fixed distance from neighbouring obstructions.

Fig. 6 Aerial robot navigation test carried out in the Gazebo simulator

3.4 Real-World Aerial Robot Navigation

To verify the robustness and practicability of the proposed aerial robot navigation system, we conducted a real-world physical verification similar to the Gazebo test. The visual-inertial sensor used in the real-world test is an Intel RealSense T265 binocular camera, and an Intel RealSense D435i RGB-D camera is used to capture the 3D environmental map. In addition, a U-Blox ZED-F9P, a high-precision multi-band receiver with multi-constellation support, is employed as the GNSS receiver. The real-world experiment was conducted on a campus tennis court, where the sky is open and most of the navigation satellites are well tracked. The terrain crossed by the aerial robot contains artificial, arbitrarily placed obstructions, as shown in Fig. 7.

Fig. 7 The real-world navigation environment on a campus tennis court

During flight, the aerial robot can change its route when approaching an obstacle and always keeps a reasonable distance from neighbouring obstructions. It is worth noting that all flight control commands are generated by the NVIDIA Jetson Xavier NX board, and the flight autopilot does not receive any externally generated control signal.

4 Conclusion

In this paper, we proposed RRVPE, a robust and real-time visual-inertial-GNSS tightly coupled pose estimator for aerial robot navigation, which combines KLT sparse optical flow, inertial measurements, and GNSS raw signals to estimate the aerial robot state between consecutive images. In the nonlinear optimization phase, the visual-inertial-GNSS raw measurements are formulated as a probabilistic factor graph over a small sliding window. The RRVPE system achieves real-time robot state estimation with CUDA acceleration on an airborne computer. The proposed system is evaluated in both simulated and real-world physical experiments, demonstrating clear advantages over state-of-the-art approaches.

References

1. Lei L, Li Z H, Yang H, et al. Extraction of the leaf area density of maize using UAV-LiDAR data [J]. Geomatics and Information Science of Wuhan University, 2021, 46(11): 1737-1745 (Ch).
2. Chen J J, Li S, Liu D H, et al. AiRobSim: Simulating a multisensor aerial robot for urban search and rescue operation and training [J]. Sensors, 2020, 20(18): 5223.
3. Tabib W, Goel K, Yao J, et al. Autonomous cave surveying with an aerial robot [EB/OL]. [2022-06-25]. https://arxiv.org/abs/2003.13883.
4. Geneva P, Eckenhoff K, Lee W, et al. OpenVINS: A research platform for visual-inertial estimation [C]// 2020 IEEE International Conference on Robotics and Automation (ICRA). New York: IEEE, 2020: 4666-4672.
5. Paul M K, Roumeliotis S I. Alternating-stereo VINS: Observability analysis and performance evaluation [C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2018: 4729-4737.
6. Qin T, Li P L, Shen S J. VINS-Mono: A robust and versatile monocular visual-inertial state estimator [J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020.
7. Rosinol A, Abate M, Chang Y, et al. Kimera: An open-source library for real-time metric-semantic localization and mapping [C]// 2020 IEEE International Conference on Robotics and Automation (ICRA). New York: IEEE, 2020: 1689-1696.
8. Campos C, Elvira R, Rodríguez J J G, et al. ORB-SLAM3: An accurate open-source library for visual, visual-inertial, and multimap SLAM [J]. IEEE Transactions on Robotics, 2021, 37(6): 1874-1890.
9. Mur-Artal R, Montiel J M M, Tardós J D. ORB-SLAM: A versatile and accurate monocular SLAM system [J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163.
10. Mur-Artal R, Tardós J D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras [J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262.
11. Li T, Zhang H P, Gao Z Z, et al. Tight fusion of a monocular camera, MEMS-IMU, and single-frequency multi-GNSS RTK for precise navigation in GNSS-challenged environments [J]. Remote Sensing, 2019, 11(6): 610.
12. Cao S Z, Lu X Y, Shen S J. GVINS: Tightly coupled GNSS-visual-inertial fusion for smooth and consistent state estimation [J]. IEEE Transactions on Robotics, 2022, 38(4): 2004-2021.
13. Zhang C, Yang Z, Fang Q H, et al. FRL-SLAM: A fast, robust and lightweight SLAM system for quadruped robot navigation [C]// 2021 IEEE International Conference on Robotics and Biomimetics (ROBIO). New York: IEEE, 2022: 1165-1170.
14. Zhang C, Yang Z, Liao L W, et al. RPEOD: A real-time pose estimation and object detection system for aerial robot target tracking [J]. Machines, 2022, 10(3): 181.
15. Lucas B D, Kanade T. An iterative image registration technique with an application to stereo vision [C]// Proceedings of the 7th International Joint Conference on Artificial Intelligence. New York: ACM, 1981: 674-679.
16. Shi J B, Tomasi C. Good features to track [C]// 1994 Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 1994: 593-600.
17. Leutenegger S, Lynen S, Bosse M, et al. Keyframe-based visual-inertial odometry using nonlinear optimization [J]. The International Journal of Robotics Research, 2015, 34(3): 314-334.
18. Forster C, Pizzoli M, Scaramuzza D. SVO: Fast semi-direct monocular visual odometry [C]// 2014 IEEE International Conference on Robotics and Automation (ICRA). New York: IEEE, 2014: 15-22.
19. Engel J, Schöps T, Cremers D. LSD-SLAM: Large-scale direct monocular SLAM [C]// Proceedings of the European Conference on Computer Vision (ECCV). Berlin: Springer, 2014: 834-849.
20. Qin T, Li P L, Shen S J. Relocalization, global optimization and map merging for monocular visual-inertial SLAM [C]// 2018 IEEE International Conference on Robotics and Automation (ICRA). New York: IEEE, 2018: 1197-1204.
21. Qin T, Shen S J. Robust initialization of monocular visual-inertial estimation on aerial robots [C]// 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). New York: IEEE, 2017: 4225-4232.
22. Burri M, Nikolic J, Gohl P, et al. The EuRoC micro aerial vehicle datasets [J]. The International Journal of Robotics Research, 2016, 35(10): 1157-1163.
