RD-VIO: Robust Visual-Inertial Odometry for Mobile Augmented Reality in Dynamic Environments
Note: RD-VIO is an important module of OpenXRLab. For more details, please refer to xrslam.
Jinyu Li1*, Xiaokun Pan1*, Gan Huang1, Ziyang Zhang1, Nan Wang2, Hujun Bao1, Guofeng Zhang1

1 State Key Lab of CAD&CG, Zhejiang University    2 SenseTime Research
* Equal Contribution     Corresponding author
Figure 1: Overview of the RD-VIO system.

Abstract

It is typically challenging for visual or visual-inertial odometry systems to handle dynamic scenes and pure rotation. In this work, we design a novel visual-inertial odometry (VIO) system called RD-VIO to handle both of these problems. Firstly, we propose an IMU-PARSAC algorithm which robustly detects and matches keypoints in a two-stage process. In the first stage, landmarks are matched with new keypoints using visual and IMU measurements. We collect statistical information from this matching and use it to guide the intra-keypoint matching in the second stage. Secondly, to handle pure rotation, we detect the motion type and adopt the deferred-triangulation technique during the data-association process. Pure-rotational frames are turned into special subframes; when solving the visual-inertial bundle adjustment, they provide additional constraints on the pure-rotational motion. We evaluate the proposed VIO system on public datasets and in online comparisons. Experiments show that the proposed RD-VIO has clear advantages over other methods in dynamic environments.
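To make the pure-rotation handling more concrete, below is a minimal illustrative sketch (not the actual RD-VIO/xrslam implementation) of a rotation-compensated parallax test: the IMU-predicted relative rotation is used to "explain away" keypoint motion, and a frame pair whose residual parallax stays below a hypothetical threshold is treated as pure-rotational. The function name, the threshold value, and the use of normalized image coordinates are all assumptions for illustration.

```python
import numpy as np

def is_pure_rotation(kps_prev, kps_curr, R_prev_curr, parallax_thresh=0.01):
    """Classify a frame pair as (nearly) pure-rotational.

    kps_prev, kps_curr : (N, 2) matched keypoints in normalized image coordinates.
    R_prev_curr        : (3, 3) relative rotation predicted from IMU pre-integration.
    parallax_thresh    : hypothetical threshold on mean rotation-compensated parallax.
    """
    # Lift previous keypoints to bearing vectors and rotate them into the current frame.
    bearings = np.hstack([kps_prev, np.ones((len(kps_prev), 1))])
    rotated = bearings @ R_prev_curr.T

    # Reproject assuming zero translation; any remaining displacement is parallax
    # that rotation alone cannot explain.
    predicted = rotated[:, :2] / rotated[:, 2:3]
    parallax = np.linalg.norm(kps_curr - predicted, axis=1)

    return float(np.mean(parallax)) < parallax_thresh
```

In the system described above, frames classified this way would become subframes whose landmarks are not immediately triangulated (deferred triangulation), avoiding the ill-conditioned depth estimates that pure rotation would otherwise produce.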

Video

Key Features

Citation

@article{li2024rd,
  title     = {RD-VIO: Robust visual-inertial odometry for mobile augmented reality in dynamic environments},
  author    = {Li, Jinyu and Pan, Xiaokun and Huang, Gan and Zhang, Ziyang and Wang, Nan and Bao, Hujun and Zhang, Guofeng},
  journal   = {IEEE Transactions on Visualization and Computer Graphics},
  volume    = {30},
  number    = {10},
  pages     = {6941--6955},
  year      = {2024},
  publisher = {IEEE}
}

Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (No. 61932003). The authors would like to thank Xinyang Liu for his kind help with data collection. Thanks also go to Danpeng Chen, Weijian Xie, and Shangjin Zhai for their kind help with system fine-tuning and evaluation.