Visually Augmented Navigation (VAN) in an Unstructured Underwater Environment
|Figure 1. The goal of VAN is to improve the near-seafloor navigation capability of underwater vehicles by using camera-based motion constraints to "reset" navigation drift error.|
|Figure 2. VAN fuses camera-based motion estimates derived from overlapping seafloor imagery with vehicle navigation data to constrain navigation error. In this framework error does not monotonically accumulate with time as it does in dead-reckoned navigation systems; instead, it is bounded by the network topology of camera constraints.|
|Figure 3. A pairwise feature-based image registration pipeline is used within a calibrated camera framework to estimate an essential matrix encoding the epipolar geometry relating the camera pair.|
|Figure 4. Each pairwise camera measurement results in a 5 DOF relative pose measurement (i.e., baseline direction and relative orientation).|
|Figure 5. A priori temporal pose information from the navigation sensors can be exploited during image registration to restrict putative correspondence selection.|
|Figure 6. VAN experimental platform: The SeaBED AUV is a 2000 m depth-rated, hover-capable vehicle equipped with a 12-bit down-looking CCD camera which captures strobe-illuminated imagery at a rate of 1 frame every 3 seconds. The resulting image sequence typically has low temporal overlap, on the order of 20-30%.|
|Figure 7. Stellwagen Bank Data Set: For this data set SeaBED was programmed to do terrain following at a fixed altitude of 3m off the seafloor. The top plot shows the depth excursions versus time necessary to achieve constant altitude and therefore is indicative of the ruggedness of the terrain. The sampling of imagery shown highlights the variability of terrain texture and scene relief driving the requirement for epipolar registration models.|
|Figure 8. The two trackline plots to the far left show the result of estimating the vehicle trajectory from A to B for a 100 image sequence. The blue trajectory corresponds to the VAN result and overlaid in brown is the dead-reckoned result shown for comparison; note that ellipses represent 99.9% confidence bounds. The VAN result is shown again in the plot to the right, but this time with temporal camera measurements highlighted as green links and spatial camera measurements highlighted in red. The red spatial links "reset" the drift error accumulated around the loop. The plot to the far right shows the time evolution of uncertainty in the VAN trajectory estimate. Uncertainty monotonically accumulates until the spatial camera measurements occur (indicated in red). The spatial measurements result in reduced uncertainty for all correlated states. Uncertainty then continues to increase once no further spatial measurements are made.|
R. Eustice, MIT/WHOI Joint Program in Applied Ocean Physics & Engineering
O. Pizarro, MIT/WHOI Joint Program in Applied Ocean Physics & Engineering
H. Singh, Deep Submergence Laboratory, Woods Hole Oceanographic Institution
VAN is a framework for sensor fusion of navigation data with camera-based 5 DOF relative pose measurements for 6 DOF vehicle motion in an unstructured 3D underwater environment. The fundamental goal of this work is to concurrently estimate, online, the current vehicle position and its past trajectory. This goal is framed within the context of improving mobile robot navigation to support near-seafloor science and exploration.
Sensor fusion is accomplished within an augmented state Kalman filter (ASKF) by representing the vehicle trajectory as a collection of delayed state vehicle poses. Camera spatial constraints from overlapping imagery (Figures 2 & 4) provide partial observations of these poses and are used to enforce consistency and provide a mechanism for loop closure. The multi-sensor camera+navigation framework has compelling advantages over a camera-only approach by 1) improving the robustness of pairwise image registration, 2) setting the free gauge scale, and 3) allowing for an unconnected camera graph topology.
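The delayed state bookkeeping above can be sketched in a few lines of numpy. This is a minimal illustration under assumed conventions (a 6-element pose vector and illustrative function names), not the authors' implementation:

```python
import numpy as np

POSE_DIM = 6  # assumed pose parameterization: [x, y, z, roll, pitch, yaw]

def augment_state(x, P):
    """Clone the current vehicle pose (the last POSE_DIM states) onto the
    end of the state vector.  The augmented covariance carries
    cross-correlation blocks between the clone and every existing state,
    which is what lets a later camera measurement between two delayed
    poses update all correlated states."""
    n = x.size
    J = np.zeros((POSE_DIM, n))
    J[:, n - POSE_DIM:] = np.eye(POSE_DIM)   # clone = copy of current pose
    x_aug = np.concatenate([x, J @ x])
    P_aug = np.block([[P,      P @ J.T],
                      [J @ P,  J @ P @ J.T]])
    return x_aug, P_aug
```

At each image event the filter would augment in this fashion; subsequent pairwise registrations then supply relative pose measurements between any pair of retained clones.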
Method details ...
- We assume near-seafloor imagery is collected by a pose-instrumented, calibrated camera platform (e.g., Figure 6).
- Pairwise image registration utilizes feature-based methods to estimate the essential matrix that relates the image pair, see Figures 3 & 5.
- The camera measurement provides a 6 DOF coordinate transform modulo scale, see Figure 4. This measurement is incorporated as a relative pose observation by the augmented state Kalman filter.
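As a concrete illustration of the scale-free measurement in the last bullet, the following numpy sketch factors an essential matrix into its relative-orientation and unit-baseline components via the standard SVD-based factorization. Function names are ours, and the usual resolution of the fourfold ambiguity by a cheirality (point-in-front-of-camera) test is omitted:

```python
import numpy as np

def skew(t):
    """3x3 skew-symmetric matrix such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def decompose_essential(E):
    """Factor E ~ skew(t) @ R via SVD.  Scale is unobservable, so the
    recovered measurement is 5 DOF: a 3 DOF relative rotation plus a
    2 DOF unit baseline direction (sign/rotation fourfold ambiguity
    left to a cheirality test)."""
    U, _, Vt = np.linalg.svd(E)
    # Enforce proper rotations before building the candidates.
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    R_candidates = (U @ W @ Vt, U @ W.T @ Vt)
    t_unit = U[:, 2]  # baseline direction; sign is ambiguous
    return R_candidates, t_unit
```

The true relative rotation is one of the two candidates, and the baseline is recovered only as a direction, which is exactly why each camera measurement enters the filter as a 5 DOF observation.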
Trajectory estimation results are presented for a real-world underwater data set collected at the Stellwagen Bank National Marine Sanctuary by the SeaBED AUV, see Figures 6 & 7. SeaBED has a single down-looking camera and is instrumented with a typical underwater navigation sensor suite of depth sensor, magnetic compass and tilt sensors, and a Doppler velocity log. The AUV conducted the survey over a sloping rocky ocean bottom. The intended survey pattern consisted of 15 North/South legs, each 180 meters long and spaced 1.5 meters apart, while maintaining an average altitude of 3.0 meters above the seafloor at a forward velocity of 0.35 meters per second. Closed-loop feedback on the navigation data was used for real-time vehicle control. The presented results are for a 100 image subsequence from this data set.
A number of important observations are worth pointing out. First, referring to Figure 8, note that the uncertainty ellipses are smaller for camera poses which are related by spatial links. Spatial links provide the mechanism for relating past vehicle poses to the present, allowing for correction of dead-reckoned (DR) drift error. Trajectory uncertainty in a DR navigation system is unbounded and is essentially a function of time; in contrast, the error growth in a visually augmented navigation (VAN) system is a function of distance. The network topology associated with camera measurement links allows error accumulated over time to be "reset" and essentially become a function of distance away from the reference network node.
A second observation is the delayed-state smoothing which occurs in the ASKF. Spatial links not only decrease the uncertainty of the image pair involved, but also decrease the uncertainty of delayed-state poses which share cross-correlation. Figure 8 shows the effect of spatial link measurements and the associated state smoothing: the trace of the XY covariance sub-block for a sampling of delayed-state elements is plotted as a function of image frame number. Note the behavior of the plot at image frame 754, associated with establishment of the first cross-track spatial link. Information from that spatial measurement is propagated via the network topology down the image chain, updating estimates of vehicle poses which are cross-correlated.
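The smoothing effect can be reproduced in a toy numpy example (our illustration, not the authors' filter): directly observing one delayed pose also shrinks the covariance of a second, cross-correlated pose that was never measured.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(P.shape[0]) - K @ H) @ P
    return x_new, P_new

# State: two stacked 2D delayed poses, A and B, with cross-correlation.
x = np.zeros(4)
I2 = np.eye(2)
P = np.block([[2.0 * I2, 1.0 * I2],
              [1.0 * I2, 2.0 * I2]])

# Observe pose B only.
H = np.hstack([np.zeros((2, 2)), I2])
z = np.array([0.1, -0.2])
R = 0.1 * I2

x_new, P_new = kf_update(x, P, z, H, R)
# The trace of pose A's covariance block also drops, propagated
# through the off-diagonal cross-correlation block.
```

With the cross-correlation block zeroed out, pose A's covariance would be untouched by the update, which is the filter-level explanation for why navigation-built correlation matters.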
Thirdly, referring back to Figure 8, note that a temporal (green) link does not exist between consecutive image frames near XY location (-4,0). In a vision-only navigation system, such a break in the temporal image chain would prevent concatenation of measured camera poses, causing algorithms that rely on a connected camera topology to fail. It is a testament to the robustness of the VAN approach that a disconnected camera topology does not present any significant issue. The key is that navigation allows correlation to be built between the two poses even though a camera link measurement does not exist.
Finally, an additional point worth mentioning is that the VAN system results in a self-consistent estimate of the vehicle's trajectory. Initial processing of the image sequence resulted in a VAN-estimated trajectory that did not lie within the 99.9% confidence bounds predicted by DR. The VAN estimate showed a crossing trajectory as in Figure 8, while the DR estimate showed the trajectory as consisting of two parallel South/North tracklines. Upon further investigation it became clear that the cause of this discrepancy was a significant nonlinear heading bias in the magnetic flux-gate compass. An independently collected data set was used to calculate a bias correction curve, which was then applied to the data set used in this paper. The bias-corrected heading measurements result in a DR trajectory which now agrees well with the VAN estimate, as seen in Figure 8. Essentially, the VAN camera-derived measurements had been good enough to compensate for the large heading bias, allowing recovery of a consistent vehicle trajectory (recall that in a KF update the prior is essentially ignored if the measurements are very certain).
1) Improved error: Error growth is a function of network topology and not time.
2) Smoothing: All correlated state estimates benefit from camera measurements.
3) Robustness: State correlation allows for an unconnected camera topology.
This work was funded in part by the CenSSIS ERC of the NSF under grant EEC-9986821 and in part by WHOI through a grant from the Penzance Foundation.