Landmark selection for task-oriented navigation

Ronen Lerner, Ehud Rivlin, Ilan Shimshoni

Research output: Contribution to journal › Article › peer-review

Abstract

Many vision-based navigation systems are restricted to using only a limited number of landmarks when computing the camera pose, because of the overhead of detecting and tracking these landmarks along the image sequence. A new algorithm is proposed for selecting, from the available landmarks, the subset that yields minimal uncertainty in the estimated pose parameters. Navigation tasks have different types of goals: moving along a path, photographing an object over an extended period, etc. The significance of the various pose parameters differs from one navigation task to another. Therefore, a requirements matrix is constructed from a supplied severity function, which defines the relative importance of each parameter. This knowledge can then be used to search for the subset that minimizes the uncertainty of the important parameters, possibly at the cost of greater uncertainty in the others. It is shown that the task-oriented landmark selection problem can be formulated as an integer-programming problem for which a very good approximation can be obtained. The problem is then translated into a semidefinite programming (SDP) representation, which can be solved rapidly. The feasibility and performance of the proposed algorithm are studied through simulations and lab experiments.
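
The abstract only outlines the optimization. The following is a minimal illustrative sketch, not the paper's implementation, of the relaxed selection step in Python with cvxpy. It assumes each landmark i contributes a 6×6 Fisher information matrix J_i to the pose estimate, and that the requirements matrix W row-weights the pose parameters so that trace(W J⁻¹ Wᵀ) measures the task-weighted uncertainty. The function name select_landmarks and the final rounding heuristic are assumptions made for this sketch.

```python
import numpy as np
import cvxpy as cp

def select_landmarks(J_list, W, k):
    """Relaxed task-oriented landmark selection (illustrative sketch).

    J_list : list of 6x6 per-landmark information matrices (assumed)
    W      : requirements matrix weighting the pose parameters
    k      : number of landmarks the tracker can afford
    """
    n = len(J_list)
    x = cp.Variable(n)  # 0/1 selection indicators, relaxed to [0, 1]
    # Combined information of the selected subset (affine in x).
    J = sum(x[i] * J_list[i] for i in range(n))
    # matrix_frac(W.T, J) = trace(W @ inv(J) @ W.T), the task-weighted
    # pose uncertainty; cvxpy canonicalizes this atom to an SDP via a
    # Schur-complement constraint, so the relaxation is solved as an SDP.
    objective = cp.Minimize(cp.matrix_frac(W.T, J))
    constraints = [cp.sum(x) == k, x >= 0, x <= 1]
    cp.Problem(objective, constraints).solve()
    # Round the relaxation: keep the k landmarks with largest indicators
    # (a simple heuristic assumed here, not the paper's rounding scheme).
    return np.argsort(x.value)[-k:]
```

Dropping the integrality of x is what turns the combinatorial subset search into a convex program; the quality of the rounded solution then depends on how close the relaxed indicators are to 0/1, which the paper reports to be a very good approximation in practice.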

Original language: English
Pages (from-to): 494-505
Number of pages: 12
Journal: IEEE Transactions on Robotics
Volume: 23
Issue number: 3
State: Published - Jun 2007

Keywords

  • Covariance matrix
  • Feature selection
  • Landmarks
  • Pose estimation
  • Semi-definite programming (SDP)

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering
