Visual Odometry on an FPGA for Lunar Navigation

Fall 2012

Student
Pronoy Biswas
Advisor
Red Whittaker
Project description

Abstract

To obtain accurate, real-time positioning of a lunar rover, we propose a robust visual odometry system implemented on a radiation-hardened, low-power FPGA. Using results from a 5 km field test, we compare different approaches to visual odometry and integrate the best-performing solution with the rover's navigation system.

Project Motivation

The Moon has presented itself as a compelling site for exploration. To exploit this opportunity with a robotic lunar explorer, the explorer's position must be known accurately. With high-quality location data, the following become possible:

  • Scientific measurements can be analyzed in a spatial context
  • Rovers can navigate around known hazards to scientifically interesting areas
  • Landers can perform autonomous pinpoint landing

To realize these objectives, visual odometry on an FPGA can provide a lunar robot with frequent and precise position updates. This solution is well suited to the task because it imposes no processing overhead on the main CPU, eliminates wheel-slip error, and adds no complex mechanical hardware.

Project Description

Visual odometry is the process of determining the motion of a robot or vehicle using only imagery obtained from cameras. It works by detecting salient features in a pair of images, matching them across the images, and estimating the change in the robot's pose from the geometry of the matched features. Any rover in space requires accurate positioning to travel across significant distances. The absence of a GPS-like system and the inherent inaccuracies (due to drift and slip) of wheel odometry create the need for an alternative source of positioning, and this is where visual odometry comes into play.
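
As a rough illustration of this detect-match-estimate loop, the C++ sketch below uses OpenCV (a common prototyping environment; the flight system targets an FPGA) to recover frame-to-frame motion from a single camera. The feature type, matching strategy, and thresholds are illustrative assumptions rather than the project's final design, and a monocular example is shown only for brevity where the project uses stereo.

    // Illustrative frame-to-frame motion estimate: detect features,
    // match them across two frames, then recover the relative pose.
    #include <opencv2/opencv.hpp>
    #include <vector>

    // Estimates rotation R and translation direction t between two
    // frames, given the camera intrinsic matrix K. Returns false if
    // too few feature matches survive.
    bool estimateMotion(const cv::Mat& prev, const cv::Mat& curr,
                        const cv::Mat& K, cv::Mat& R, cv::Mat& t)
    {
        auto orb = cv::ORB::create(1000);               // up to 1000 features
        std::vector<cv::KeyPoint> kp1, kp2;
        cv::Mat d1, d2;
        orb->detectAndCompute(prev, cv::noArray(), kp1, d1);
        orb->detectAndCompute(curr, cv::noArray(), kp2, d2);

        cv::BFMatcher matcher(cv::NORM_HAMMING, true);  // cross-checked matches
        std::vector<cv::DMatch> matches;
        matcher.match(d1, d2, matches);
        if (matches.size() < 8) return false;           // need >= 8 correspondences

        std::vector<cv::Point2f> p1, p2;
        for (const auto& m : matches) {
            p1.push_back(kp1[m.queryIdx].pt);
            p2.push_back(kp2[m.trainIdx].pt);
        }
        // RANSAC discards outlier matches; the essential matrix E
        // encodes the relative camera pose between the two frames.
        cv::Mat E = cv::findEssentialMat(p1, p2, K, cv::RANSAC, 0.999, 1.0);
        return !E.empty() && cv::recoverPose(E, p1, p2, K, R, t) > 0;
    }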

The goal of this project is to implement a vision system running on radiation-hardened FPGAs, capable of executing visual odometry algorithms in real time to supplement other navigation infrastructure on board the lunar rover. FPGAs are well suited to such a system because they consume significantly less power than general-purpose processors and radiation-hardened versions are readily available. In addition, FPGAs have been used successfully in many computer vision applications, and FPGA implementations of popular algorithms like SIFT and SURF have been shown to run at near real-time speeds. Over the course of this project, multiple candidate schemes for visual odometry will be studied and, in an iterative fashion, designed, implemented, and tested in the field.

Goals by Midterm

The overall goal of the project is to design the architecture of the visual odometry algorithm for implementation on an FPGA. By midterm, the stereo imaging, depth-map generation, and feature extraction stages will be completed.
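
For context on the depth-map step, depth follows from stereo disparity via the standard relation depth = f * B / d, where f is the focal length in pixels, B is the camera baseline, and d is the disparity in pixels. The integer formulation below is a minimal sketch of how an FPGA pipeline might compute it; the calibration constants are placeholder assumptions.

    // Depth (mm) from integer disparity (px) using depth = f * B / d.
    // Pure integer arithmetic mirrors an FPGA-friendly datapath.
    #include <cstdint>

    // focalPx and baselineMm are placeholder calibration values.
    uint32_t disparityToDepthMm(uint16_t disparity,
                                uint32_t focalPx = 800,
                                uint32_t baselineMm = 120)
    {
        if (disparity == 0) return 0;   // invalid pixel: no stereo match
        return (focalPx * baselineMm) / disparity;
    }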

Goals by December

Our objective this semester is a full implementation of visual odometry on an FPGA that achieves less than 10% cumulative position error over 5 km using stereo cameras. A Humvee will be rigged with stereo cameras, along with an IMU and GPS to collect ground truth. Testing will be conducted during daylight hours at Robot City in Pittsburgh, Pennsylvania, on terrain chosen to most closely resemble the surface of the Moon. Because the Robot City site is not 5 km in length, several shorter 500 -- 1000 m runs will be conducted in various configurations (straight, circular) to determine the robustness of the algorithm. Use of both the Humvee and Robot City comes at no expense.

Impact and Contribution

If successfully implemented, visual odometry can prove to be a reliable navigation method that guides landers to landing sites on the Moon's surface and assists rovers in explorations such as the search for lunar ice. It can also lead to driving strategies that depend on visual odometry, as was done on the NASA Mars Exploration Rovers (MERs). By routinely monitoring its distance to nearby feature-rich objects with visual odometry, a rover can estimate how much its wheels are slipping while traversing high-slip terrain, and based on such measurements it can decide whether to plan the next leg of its exploration on a different path to avoid getting stuck. Another example is the ability to visually recognize scenery close to an obstacle encountered in the past, which could enable the rover to avoid that obstacle. One of the primary objectives of this project is a high-speed implementation of visual odometry capable of running in real time on an FPGA. If successful, this would effectively address the biggest bottleneck in the use of visual odometry on planetary rovers: the runtime of the vision algorithms on the rovers' limited computing power.

Individual Contributions

So far, my contribution to the project has been developing the feature-matching module for the FPGA. This module compares image features for similarity using a sum-of-absolute-differences (SAD) metric, a simple yet effective way of comparing feature descriptors that is well suited to a fast FPGA implementation. Once this module is finished, I hope to use my experience in LabVIEW, digital electronics, and computer vision to work on whatever the team requires.
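
As a sketch of the metric this module uses, the C++ below computes the sum of absolute differences between two fixed-length 8-bit descriptors; the descriptor length is an assumed placeholder. The fixed trip count and multiplier-free integer datapath are what make SAD attractive to pipeline or unroll in FPGA fabric.

    // Sum of absolute differences between two feature descriptors.
    // Lower score means more similar; the best match for a feature
    // is the candidate with the minimum SAD.
    #include <cstdint>

    constexpr int DESC_LEN = 64;   // assumed descriptor length

    uint32_t sadScore(const uint8_t a[DESC_LEN], const uint8_t b[DESC_LEN])
    {
        uint32_t sum = 0;
        for (int i = 0; i < DESC_LEN; ++i) {
            // Branch-style absolute difference avoids signed arithmetic,
            // matching what a hardware comparator-subtractor would do.
            sum += (a[i] > b[i]) ? (a[i] - b[i]) : (b[i] - a[i]);
        }
        return sum;
    }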
