Show simple item record

dc.contributor.author: Antich Tobaruela, Javier
dc.contributor.author: Ortiz Rodriguez, Alberto
dc.date.accessioned: 2024-07-11T09:10:49Z
dc.date.available: 2024-07-11T09:10:49Z
dc.date.issued: 2017-12-29
dc.identifier.citation: Antich Tobaruela J, Ortiz Rodríguez A. Reactive navigation in extremely dense and highly intricate environments. PLoS One. 2017 Dec 29;12(12):e0189008.
dc.identifier.issn: 1932-6203
dc.identifier.other: http://hdl.handle.net/20.500.13003/9515
dc.identifier.uri: http://hdl.handle.net/20.500.12105/20470
dc.description.abstract: Reactive navigation is a well-known paradigm for controlling an autonomous mobile robot, in which all control decisions are made through light processing of the current/recent sensor data. Among the many advantages of this paradigm are: 1) it can be applied to robots with limited and low-priced hardware resources, and 2) it can safely navigate a robot in completely unknown environments containing unpredictable moving obstacles. As a major disadvantage, however, the reactive paradigm may occasionally cause robots to get trapped in certain areas of the environment; typically, these conflicting areas have a large concave shape and/or are full of closely-spaced obstacles. In this respect, an enormous effort has been devoted over the last two decades to overcoming this serious drawback. As a result, a substantial number of new approaches for reactive navigation have been put forward. Some of these approaches have clearly improved the way a reactively-controlled robot can move among densely cluttered obstacles; others have focused mainly on increasing the variety of obstacle shapes and sizes that can be successfully circumnavigated. In this paper, as a starting point, we choose the best existing reactive approach for moving in densely cluttered environments, together with the existing reactive approach with the greatest ability to circumvent large intricate-shaped obstacles. We then combine these two approaches in a way that makes the most of both. Experimentally, we test in both simulated and real scenarios of challenging complexity, and we demonstrate in such scenarios that the combined approach proposed herein clearly outperforms the two individual approaches on which it is built.
dc.description.sponsorship: This work has received research funding from the European Union's Seventh Framework Programme (grant agreement no. 605200) and the European Union's H2020 Framework Programme (grant agreement no. 779776); it reflects only the author's views, and the European Union is not liable for any use that may be made of the information contained herein. This work is also partially supported by the Spanish project MERBOTS DPI2014-57746-C3-2-R. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
dc.language.iso: eng
dc.publisher: Public Library of Science (PLOS)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.title: Reactive navigation in extremely dense and highly intricate environments
dc.type: research article
dc.rights.license: Attribution 4.0 International
dc.identifier.pubmedID: 29287078
dc.format.volume: 12
dc.format.number: 12
dc.format.page: e0189008
dc.identifier.doi: 10.1371/journal.pone.0189008
dc.relation.publisherversion: https://dx.doi.org/10.1371/journal.pone.0189008
dc.identifier.journal: PLoS One
dc.rights.accessRights: open access
dc.identifier.scopus: 2-s2.0-85039854789
dc.identifier.wos: 419096600005
dc.identifier.pui: L620000474
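
The abstract above characterises the reactive paradigm as making every control decision from a light computation over the current/recent sensor data alone, with no map or plan. The Python sketch below illustrates that idea in its simplest form; it is a minimal, hypothetical example of a single reactive obstacle-avoidance step, not the combined algorithm the paper proposes, and all names (reactive_step, safe_dist, the sensor format) are invented for illustration.

    # Minimal, hypothetical sketch of one reactive control step.
    # NOT the paper's algorithm; names and sensor format are assumptions.
    import math

    def reactive_step(ranges, goal_bearing, safe_dist=0.5):
        """Choose (speed, heading) from one scan of range readings.

        ranges: list of (bearing_rad, distance_m) pairs from a range sensor.
        goal_bearing: direction of the goal relative to the robot (radians).
        """
        best_bearing, best_score = None, -math.inf
        for bearing, dist in ranges:
            if dist < safe_dist:
                continue                        # direction blocked by an obstacle
            # Prefer well-cleared headings that also point towards the goal.
            score = min(dist, 2.0) - abs(bearing - goal_bearing)
            if score > best_score:
                best_bearing, best_score = bearing, score
        if best_bearing is None:
            return 0.0, 0.5                     # fully blocked: rotate in place
        speed = max(0.1, min(1.0, best_score))  # slow down in tight spots
        return speed, best_bearing

    # One control step with a toy three-beam scan (bearings in radians):
    scan = [(-0.5, 1.8), (0.0, 0.4), (0.5, 2.0)]
    print(reactive_step(scan, goal_bearing=0.2))

Even this toy rule exhibits the failure mode the abstract names: because it sees only the current scan, it can oscillate or stall inside large concave regions or dense clutter, which is precisely what the paper's combined approach is designed to overcome.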


Files in this item

There are no files associated with this item.

This item is licensed under a Creative Commons Attribution 4.0 International license.