Show simple item record

dc.contributor.author: Jaenal, Alberto
dc.contributor.author: Moreno, Francisco-Angel
dc.contributor.author: Gonzalez-Jimenez, Javier
dc.date.accessioned: 2024-02-19T15:26:46Z
dc.date.available: 2024-02-19T15:26:46Z
dc.date.issued: 2021-04-02
dc.identifier.other: http://hdl.handle.net/10668/3818
dc.identifier.uri: http://hdl.handle.net/20.500.12105/18304
dc.description.abstract: This paper addresses appearance-based robot localization in 2D with a sparse, lightweight map of the environment composed of descriptor-pose image pairs. Based on previous research in the field, we assume that image descriptors are samples of a low-dimensional Descriptor Manifold that is locally articulated by the camera pose. We propose a piecewise approximation of the geometry of this Descriptor Manifold through a tessellation of so-called Patches of Smooth Appearance Change (PSACs), which defines our appearance map. Upon this map, the presented robot localization method applies both a Gaussian Process Particle Filter (GPPF) for camera tracking and a Place Recognition (PR) technique for relocalization within the most likely PSACs according to the observed descriptor. A specific Gaussian Process (GP) is trained for each PSAC to regress a Gaussian distribution over the descriptor for any particle pose lying within that PSAC. Evaluating the observed descriptor under this distribution yields a likelihood, which is used as the weight for the particle. In addition, we model the impact of appearance variations on image descriptors as a white noise distribution within the GP formulation, ensuring adequate operation under lighting and scene appearance changes with respect to the conditions in which the map was constructed. A series of experiments with both real and synthetic images shows that our method outperforms state-of-the-art appearance-based localization methods in terms of robustness and accuracy, with median errors below 0.3 m and 6°. (An illustrative sketch of this particle-weighting step follows the metadata record below.)
dc.description.sponsorship: This research was funded by: the Government of Spain, grant number FPU17/04512; the "I Plan Propio de Investigación, Transferencia y Divulgación Científica" of the University of Málaga; and projects ARPEGGIO (PID2020-117057) and WISER (DPI2017-84827-R), financed by the Government of Spain and the European Regional Development Fund (FEDER).
dc.language.iso: eng
dc.publisher: Multidisciplinary Digital Publishing Institute (MDPI)
dc.type.hasVersion: VoR
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Appearance-based localization
dc.subject: Computer vision
dc.subject: Gaussian processes
dc.subject: Manifold learning
dc.subject: Robot vision systems
dc.subject: Indoor positioning
dc.subject: Image manifold
dc.subject: Descriptor manifold
dc.subject: Learning
dc.subject: Descriptors
dc.subject: Automated pattern recognition
dc.subject: Environment
dc.subject: Methods
dc.subject: Artificial intelligence
dc.subject.mesh: Lighting
dc.subject.mesh: Pattern Recognition, Automated
dc.subject.mesh: Imaging, Three-Dimensional
dc.subject.mesh: Image Interpretation, Computer-Assisted
dc.subject.mesh: Uncertainty
dc.subject.mesh: Environment
dc.subject.mesh: Normal Distribution
dc.subject.mesh: Artificial Intelligence
dc.title: Appearance-Based Sequential Robot Localization Using a Patchwise Approximation of a Descriptor Manifold
dc.type: research article
dc.rights.license: Attribution 4.0 International
dc.identifier.pubmedID: 33918493
dc.identifier.doi: 10.3390/s21072483
dc.identifier.e-issn: 1424-8220
dc.relation.publisherversion: https://www.mdpi.com/1424-8220/21/7/2483/htm
dc.identifier.journal: Sensors
dc.rights.accessRights: open access
dc.contributor.authoraffiliation: [Jaenal,A; Moreno,FA; Gonzalez-Jimenez,J] Machine Perception and Intelligent Robotics Group (MAPIR), Department of System Engineering and Automation, Biomedical Research Institute of Malaga (IBIMA), University of Malaga, Málaga, Spain.
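The particle-weighting step described in the abstract can be illustrated with a short sketch: a per-PSAC Gaussian Process predicts a Gaussian over image descriptors at a candidate pose, and the observed descriptor's likelihood under that Gaussian becomes the particle weight. This is a minimal, hypothetical Python rendering using scikit-learn's GP tools; the PSAC class, the RBF-plus-white-noise kernel, and the isotropic predictive variance are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel


class PSAC:
    """One Patch of Smooth Appearance Change (hypothetical sketch): a local
    GP mapping a pose (e.g., x, y, theta) to an image descriptor, fitted
    from the map's descriptor-pose pairs that fall inside the patch."""

    def __init__(self, poses, descriptors, appearance_noise=0.1):
        # The WhiteKernel term plays the role the abstract describes:
        # appearance variation (lighting, scene changes) modeled as
        # additive white noise on the descriptors.
        kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=appearance_noise)
        self.gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        self.gp.fit(poses, descriptors)

    def log_likelihood(self, pose, observed_descriptor):
        """Log-probability of the observed descriptor under the GP's
        predictive Gaussian at the given pose."""
        mean, std = self.gp.predict(pose.reshape(1, -1), return_std=True)
        residual = observed_descriptor - mean[0]
        # std may be per-sample or per-output depending on the scikit-learn
        # version; broadcast it over the descriptor dimensions either way.
        var = np.broadcast_to(np.atleast_1d(std[0]) ** 2, residual.shape)
        return -0.5 * np.sum(residual ** 2 / var + np.log(2.0 * np.pi * var))


def weight_particles(psac, particle_poses, observed_descriptor):
    """GPPF update step: each particle is weighted by the likelihood of the
    observed descriptor at its pose, then weights are normalized."""
    log_w = np.array([psac.log_likelihood(p, observed_descriptor)
                      for p in particle_poses])
    w = np.exp(log_w - log_w.max())  # subtract max for numerical stability
    return w / w.sum()
```

The sketch covers only the per-PSAC weighting; in the full method the PR component first selects the most likely PSACs for the observed descriptor, and only the GPs of those patches are queried.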


Files in this item

There are no files associated with this item.

This item appears in the following Collection(s)

This item is licensed under the Attribution 4.0 International license.