
Leveraging variable sensor spatial acuity with a homogeneous, multi-scale place recognition framework.

Author
:
Abstract
:

Most robot navigation systems perform place recognition using a single sensor modality and one, or at most two, heterogeneous map scales. In contrast, mammals navigate by combining sensing from a wide variety of modalities, including visual, auditory, olfactory and tactile senses, with a multi-scale, homogeneous neural map of the environment. In this paper, we develop a multi-scale, multi-sensor system for mapping and place recognition that combines spatial localization hypotheses at different spatial scales from multiple different sensors to calculate an overall place recognition estimate. We evaluate the system's performance over three repeated 1.5-km day and night journeys across a university campus spanning outdoor and multi-level indoor environments, incorporating camera, WiFi and barometric sensory information. The system outperforms a conventional camera-only localization system, demonstrating not only how combining multiple sensing modalities improves performance, but also how combining those modalities over multiple scales further improves performance over a single-scale approach. The multi-scale mapping framework enables us to analyze the naturally varying spatial acuity of different sensing modalities, revealing how the multi-scale approach captures each sensing modality at its optimal operating point where a single-scale approach does not, and allowing us to weight sensor contributions at different scales based on their utility for place recognition at each scale.
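To make the fusion idea in the abstract concrete, the sketch below shows one plausible way to combine per-sensor match likelihoods computed at several map scales into a single place recognition estimate, with per-sensor, per-scale weights reflecting how useful each sensor is at each scale. This is a minimal illustration under my own assumptions, not the authors' implementation; function names such as fuse_hypotheses and expand_to_fine, the scale labels, and the weight values are all hypothetical.

    import numpy as np

    def expand_to_fine(scores, n_fine):
        """Spread a coarse-scale score vector over the finest-scale places,
        assuming each coarse place covers a contiguous block of fine places."""
        block = n_fine // len(scores)
        return np.repeat(scores, block)[:n_fine]

    def fuse_hypotheses(likelihoods, weights, n_fine):
        """likelihoods[sensor][scale]: 1-D array of match scores at that scale.
        weights[sensor][scale]: scalar utility of that sensor at that scale.
        Returns the best-matching fine-scale place and the combined scores."""
        combined = np.zeros(n_fine)
        for sensor, per_scale in likelihoods.items():
            for scale, scores in per_scale.items():
                scores = np.asarray(scores, dtype=float)
                combined += weights[sensor][scale] * expand_to_fine(scores, n_fine)
        return int(np.argmax(combined)), combined

    # Illustrative usage: the camera is weighted most at the fine scale, while
    # WiFi and a barometer contribute at a coarser (e.g. building/floor) scale.
    likelihoods = {
        "camera":    {"fine":   np.random.rand(8)},
        "wifi":      {"coarse": np.random.rand(2)},
        "barometer": {"coarse": np.random.rand(2)},
    }
    weights = {
        "camera":    {"fine": 1.0},
        "wifi":      {"coarse": 0.6},
        "barometer": {"coarse": 0.4},
    }
    place, scores = fuse_hypotheses(likelihoods, weights, n_fine=8)
    print("best matching place index:", place)

In this sketch the weights are fixed by hand; the paper instead derives sensor weighting from each modality's utility for place recognition at each scale.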

Year of Publication
:
2018
Journal
:
Biological Cybernetics
Date Published
:
2018
ISSN Number
:
0340-1200
URL
:
https://dx.doi.org/10.1007/s00422-017-0745-7
DOI
:
10.1007/s00422-017-0745-7
Short Title
:
Biol Cybern