Observability, Identifiability and Sensitivity of Vision-Aided Navigation
We analyze the observability of motion estimates obtained by fusing visual and inertial sensors. Because the model contains unknown parameters, such as sensor biases, the problem is usually cast as a mixed identification/filtering problem, and the resulting observability analysis provides a necessary condition for any algorithm to converge to a unique point estimate. Unfortunately, most models treat sensor bias rates as noise, independent of other states including the biases themselves, an assumption that is patently violated in practice. When this assumption is lifted, the resulting model is not observable, so past analyses cannot be used to conclude that the set of states indistinguishable from the measurements is a singleton. We therefore re-cast the analysis as one of sensitivity: rather than attempting to prove that the indistinguishable set is a singleton, which it is not, we derive bounds on its volume as a function of characteristics of the input and its sufficient excitation. This provides an explicit characterization of the indistinguishable set that can be used for analysis and validation purposes.
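To make the modeling assumption concrete, a typical vision-inertial formulation (a generic illustration under standard conventions, not the specific model derived in this paper) drives the gyroscope and accelerometer biases $b_\omega$, $b_a$ by white noise that is independent of all other states:

$$\dot b_\omega(t) = w_{b_\omega}(t), \qquad \dot b_a(t) = w_{b_a}(t),$$

where $w_{b_\omega}$ and $w_{b_a}$ are zero-mean white-noise processes. Lifting this independence assumption, so that the bias rates may depend on the state, including the biases themselves, is what removes observability and motivates the sensitivity analysis described above.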
Stefano Soatto, Konstantine Tsotsos, Joshua Hernandez
Subjects: radio navigation; radio and telecommunication measurement technology and instruments; automation technology and automation equipment
Stefano Soatto, Konstantine Tsotsos, Joshua Hernandez. Observability, Identifiability and Sensitivity of Vision-Aided Navigation [EB/OL]. (2013-11-28) [2025-07-16]. https://arxiv.org/abs/1311.7434.