Intelligent Reference Curation for Visual Place Recognition via Bayesian Selective Fusion

Author: Molloy, TL; Fischer, T; Milford, MJ; Nair, G
Date: 2020
Source Title: IEEE Robotics and Automation Letters
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
University of Melbourne Author/s: Nair, Girish
Affiliation: Electrical and Electronic Engineering
Document Type: Journal Article
Citation: Molloy, T. L., Fischer, T., Milford, M. J. & Nair, G. (2020). Intelligent Reference Curation for Visual Place Recognition via Bayesian Selective Fusion. IEEE Robotics and Automation Letters, PP (99), pp. 1-1. https://doi.org/10.1109/lra.2020.3047791
Access Status: Open Access

Abstract:
A key challenge in visual place recognition (VPR) is recognizing places despite drastic visual appearance changes due to factors such as time of day, season, weather or lighting conditions. Numerous approaches based on deep-learnt image descriptors, sequence matching, domain translation, and probabilistic localization have had success in addressing this challenge, but most rely on the availability of carefully curated, representative reference images of the possible places. In this paper, we propose a novel approach, dubbed Bayesian Selective Fusion, for actively selecting and fusing informative reference images to determine the best place match for a given query image. The selective element of our approach avoids the counterproductive fusion of every reference image and enables the dynamic selection of informative reference images in environments with changing visual conditions (such as indoors with flickering lights, outdoors during sunshowers, or over the day-night cycle). The probabilistic element of our approach provides a means of fusing multiple reference images that accounts for their varying uncertainty via a novel training-free likelihood function for VPR. On difficult query images from two benchmark datasets, we demonstrate that our approach matches and exceeds the performance of several alternative fusion approaches along with state-of-the-art techniques that are provided with prior (unfair) knowledge of the best reference images. Our approach is well suited for long-term robot autonomy, where dynamic visual environments are commonplace, since it is training-free, descriptor-agnostic, and complements existing techniques such as sequence matching.
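The abstract does not reproduce the paper's likelihood function or selection rule, but the general idea of selectively fusing reference sets via Bayes' rule can be sketched. The snippet below is a minimal illustration only, assuming Euclidean descriptor distances, an exponential likelihood, and a "peakedness" proxy for informativeness; the sharpness parameter `lam`, the subset size `top_k`, and the function name are all hypothetical and not taken from the paper.

```python
import numpy as np

def bayesian_selective_fusion(query, refs_by_condition, lam=5.0, top_k=2):
    """Illustrative sketch: fuse place posteriors over a selected
    subset of reference image sets.

    query:             (D,) descriptor of the query image
    refs_by_condition: list of (P, D) arrays, one reference descriptor
                       per place, each set captured under a different
                       visual condition
    lam, top_k:        assumed likelihood sharpness / number of
                       reference sets to fuse (hypothetical parameters)
    """
    num_places = refs_by_condition[0].shape[0]
    log_post = np.zeros(num_places)  # uniform prior over places

    # Score each reference set by how peaked its per-place likelihood is,
    # and keep only the top_k most informative sets (the "selective" step).
    scored = []
    for refs in refs_by_condition:
        d = np.linalg.norm(refs - query, axis=1)  # distance to each place
        like = np.exp(-lam * d)                   # assumed exponential likelihood
        like /= like.sum()
        scored.append((like.max(), like))         # crude informativeness proxy
    scored.sort(key=lambda s: -s[0])

    # Bayes fusion of the selected reference sets in log space.
    for _, like in scored[:top_k]:
        log_post += np.log(like + 1e-12)
    return int(np.argmax(log_post))               # MAP place estimate
```

The selection step keeps only the sharpest likelihoods so that uninformative reference conditions cannot wash out a confident match, which mirrors the abstract's point that fusing every reference image can be counterproductive.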