Dr. Michael Milford, a researcher at Queensland University of Technology, will present a paper later this year describing the theory behind his work in visual-based navigation. Visual-based navigation could potentially replace expensive GPS technology in certain applications, providing cheaper and faster location and navigation services.
Dr. Milford, who cut his academic teeth studying the navigational habits of small, nearly blind rodents, has brought this expertise to the world of navigation and position finding. His paper, titled "SeqSLAM: Visual Route-Based Navigation for Sunny Summer Days and Stormy Winter Nights," will be presented in St. Paul, Minnesota at the 2012 International Conference on Robotics and Automation.
The work presented in the paper grew out of a grant Dr. Milford secured in November 2011. The grant is meant to fund research that will:
… develop novel visual navigation algorithms that can recognize places along a route, whether travelled [sic] on a bright sunny summer day or in the middle of a dark and stormy winter night. Visual recognition under any environmental conditions is a holy grail for robotics and computer vision, and is a task far beyond current state of the art algorithms. Consequently robot and personal navigation systems use GPS or laser range finders, missing out on visual sensor advantages such as cheap cost and small size. This project will set a new benchmark in visual route recognition, and in doing so enable the extensive use of low cost visual sensors in robot and personal navigation systems under wide ranging environmental conditions.
The basic idea behind the research is that a person's position can be determined from the appearance of the world around them. What makes Dr. Milford's work distinctive is that his algorithm aims to provide reliable position information without relying on prominent features extracted from high-resolution images. Night or day, rain or shine, the idea is that a low-cost camera could provide position information reliably and accurately by matching sequences of images captured over time against images from a previous traversal of the same route.
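To make the sequence-matching idea concrete, here is a minimal sketch in Python. It is not Dr. Milford's implementation; it only illustrates the general approach under simplifying assumptions: frames are tiny grayscale arrays, each frame is contrast-normalized, a matrix of frame-to-frame differences is built, and the best match is the start index whose short diagonal (constant-speed assumption) through that matrix has the lowest summed difference.

```python
import numpy as np

def normalize(frame):
    # Contrast-normalize a low-resolution grayscale frame so that
    # global lighting changes (day vs. night) matter less.
    return (frame - frame.mean()) / (frame.std() + 1e-8)

def difference_matrix(query_route, stored_route):
    # D[i, j] = mean absolute difference between query frame i
    # and stored frame j, after normalization.
    D = np.zeros((len(query_route), len(stored_route)))
    for i, q in enumerate(query_route):
        for j, s in enumerate(stored_route):
            D[i, j] = np.abs(normalize(q) - normalize(s)).mean()
    return D

def best_match(D, seq_len=5):
    # Score each candidate start position j in the stored route by
    # summing differences along a straight diagonal (i.e. assuming the
    # vehicle repeats the route at the same speed), and return the
    # start index with the lowest total difference.
    n_query, n_stored = D.shape
    length = min(seq_len, n_query)
    best_j, best_score = None, np.inf
    for j in range(n_stored - length + 1):
        score = sum(D[i, j + i] for i in range(length))
        if score < best_score:
            best_j, best_score = j, score
    return best_j
```

A quick synthetic check: store twenty random low-resolution frames as the "first traversal," then query with a noisy copy of frames 7 through 11 (the noise standing in for changed conditions). The matcher recovers the starting offset because a whole sequence of frames must agree, not just one.

```python
rng = np.random.default_rng(0)
stored = [rng.random((8, 8)) for _ in range(20)]
query = [f + 0.05 * rng.random((8, 8)) for f in stored[7:12]]
D = difference_matrix(query, stored)
print(best_match(D, seq_len=5))  # → 7
```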
Here is a paper recently presented by Dr. Milford on the topic: "Feature-based visual odometry and featureless place recognition for SLAM in 2.5D environments."