State-estimation methods that rely on visual information are challenging to deploy on dynamic robots because rapid motion causes abrupt changes in the viewing angle of onboard cameras. In this thesis, we show that by leveraging structure in the way dynamic robots locomote, we can make state estimation more reliable despite these challenges. We present a method that exploits the periodic predictability often present in the motion of legged robots to improve the performance of the feature-tracking module within a visual-inertial SLAM system. Inspired by previous work on coordinated mapping with multiple robots, our method performs multi-session SLAM on a single robot, where each session is responsible for mapping during a distinct portion of the robot's gait cycle. Our method outperforms several state-of-the-art visual and visual-inertial SLAM methods both in a simulated environment and on data collected from a real-world quadrupedal robot.
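The core idea of mapping each portion of the gait cycle in its own SLAM session can be sketched as a simple phase-based router. The following is a minimal illustrative sketch, not the thesis's implementation: all names (`GaitPhaseRouter`, `gait_period_s`, `num_sessions`) and the assumption of a fixed, known gait period are hypothetical.

```python
class GaitPhaseRouter:
    """Assigns each camera frame to a SLAM session based on gait-cycle phase.

    Illustrative sketch only: assumes a fixed, known gait period; a real
    system would estimate the phase online (e.g., from joint encoders).
    """

    def __init__(self, gait_period_s: float, num_sessions: int):
        self.gait_period_s = gait_period_s
        self.num_sessions = num_sessions

    def session_for(self, timestamp_s: float) -> int:
        # Phase in [0, 1): fraction of the way through the current gait cycle.
        phase = (timestamp_s % self.gait_period_s) / self.gait_period_s
        # Partition the cycle evenly; frames from the same phase window
        # share a session, so viewpoints within a session stay similar.
        return int(phase * self.num_sessions)


# Example: a 0.5 s gait cycle split across 4 sessions.
router = GaitPhaseRouter(gait_period_s=0.5, num_sessions=4)
print([router.session_for(t) for t in (0.0, 0.13, 0.26, 0.49, 0.51)])
```

Because frames routed to the same session are captured at similar points in the gait cycle, their viewing angles are similar, which is what makes frame-to-frame feature tracking within each session easier.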
Howie Choset (Co-Advisor)
Matthew Travers (Co-Advisor)
Zoom Participation. See announcement.