ActLoc: Learning to Localize on the Move via Active Viewpoint Selection

Source: arXiv

Abstract

Reliable localization is critical for robot navigation, yet most existing systems implicitly assume that all viewing directions at a location are equally informative. In practice, localization becomes unreliable when the robot observes unmapped, ambiguous, or uninformative regions. To address this, we present ActLoc, an active viewpoint-aware planning framework for enhancing localization accuracy for general robot navigation tasks. At its core, ActLoc employs a large-scale trained attention-based model for viewpoint selection. The model encodes a metric map and the camera poses used during map construction, and predicts localization accuracy across yaw and pitch directions at arbitrary 3D locations. These per-point accuracy distributions are incorporated into a path planner, enabling the robot to actively select camera orientations that maximize localization robustness while respecting task and motion constraints. ActLoc achieves state-of-the-art results on single-viewpoint selection and generalizes effectively to full-trajectory planning. Its modular design makes it readily applicable to diverse robot navigation and inspection tasks.
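To make the selection step concrete, below is a minimal Python sketch of how a per-point accuracy distribution could be turned into a chosen camera orientation, assuming the prediction is a discretized yaw × pitch accuracy grid. The function name, grid resolution, and random data are illustrative assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

def select_best_viewpoint(accuracy_map: np.ndarray,
                          yaw_bins: np.ndarray,
                          pitch_bins: np.ndarray) -> tuple[float, float]:
    """Return the (yaw, pitch) pair, in radians, with the highest predicted accuracy.

    accuracy_map: predicted localization accuracy per orientation,
        shape (len(yaw_bins), len(pitch_bins)), values in [0, 1].
    """
    # Index of the best-scoring yaw/pitch bin in the 2D grid.
    i, j = np.unravel_index(np.argmax(accuracy_map), accuracy_map.shape)
    return float(yaw_bins[i]), float(pitch_bins[j])

if __name__ == "__main__":
    # Toy example: a random accuracy map over a 36 x 9 yaw/pitch grid.
    rng = np.random.default_rng(0)
    yaw_bins = np.linspace(-np.pi, np.pi, 36, endpoint=False)
    pitch_bins = np.linspace(-np.pi / 4, np.pi / 4, 9)
    accuracy_map = rng.random((len(yaw_bins), len(pitch_bins)))
    yaw, pitch = select_best_viewpoint(accuracy_map, yaw_bins, pitch_bins)
    print(f"best yaw={np.degrees(yaw):.1f} deg, pitch={np.degrees(pitch):.1f} deg")
```

In the full system, such per-point distributions would additionally be weighed against task and motion constraints inside the path planner rather than maximized in isolation.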

Jiajie Li, Boyang Sun, Luca Di Giammarino, Hermann Blum, Marc Pollefeys

Subjects: Automation Technology & Equipment; Computing & Computer Technology

Jiajie Li, Boyang Sun, Luca Di Giammarino, Hermann Blum, Marc Pollefeys. ActLoc: Learning to Localize on the Move via Active Viewpoint Selection [EB/OL]. (2025-08-28) [2025-09-06]. https://arxiv.org/abs/2508.20981.
