Visual homing: Surfing on the epipoles

Ronen Basri, Ehud Rivlin, Ilan Shimshoni

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

We introduce a novel method for visual homing. Using this method, a robot can be sent to desired positions and orientations in 3-D space specified by single images taken from those positions. Our method determines the path of the robot on-line. The starting position of the robot is not constrained, and a 3-D model of the environment is not required. The method is based on recovering the epipolar geometry relating the current image taken by the robot and the target image. Using the epipolar geometry, most of the parameters that specify the differences in position and orientation of the camera between the two images are recovered. However, since not all of the parameters can be recovered from two images, we have developed specific methods to bypass the missing parameters and resolve the remaining ambiguities. We present two homing algorithms for two standard projection models, weak and full perspective. Simulations and real experiments demonstrate the robustness of the method and show that the algorithms always converge to the target pose.
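As a rough illustration of the geometric machinery the abstract refers to (not the authors' implementation), the sketch below estimates the epipolar geometry between the robot's current image and the target image from point correspondences, then extracts the two epipoles a homing step could steer toward. The function name, the use of OpenCV/NumPy, and the assumption that matched feature points are already available are all illustrative choices.

```python
# Minimal sketch: recover the fundamental matrix relating the current
# and target views, then extract both epipoles. Assumes matched pixel
# coordinates are supplied (e.g., from feature matching); this is an
# illustration, not the paper's algorithm.
import numpy as np
import cv2

def epipoles_from_correspondences(pts_current, pts_target):
    """Estimate F with RANSAC and return both epipoles.

    pts_current, pts_target: (N, 2) arrays of matched pixel
    coordinates, N >= 8 for a well-posed linear estimate.
    Returns the inhomogeneous epipoles in the current and target
    images (assumes both epipoles are finite).
    """
    F, inlier_mask = cv2.findFundamentalMat(
        pts_current.astype(np.float64),
        pts_target.astype(np.float64),
        cv2.FM_RANSAC,
    )
    if F is None:
        raise ValueError("fundamental matrix estimation failed")

    # The epipole in the current image is the right null vector of F
    # (F @ e = 0); the epipole in the target image is the left null
    # vector (F.T @ e' = 0). Both fall out of the SVD of F.
    _, _, Vt = np.linalg.svd(F)
    e_current = Vt[-1]
    _, _, Vt_T = np.linalg.svd(F.T)
    e_target = Vt_T[-1]
    return e_current[:2] / e_current[2], e_target[:2] / e_target[2]
```

In a homing loop one would re-estimate this geometry from each new frame; the paper's contribution lies in how the robot acts on the recovered epipoles despite the parameters that two views alone cannot determine.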

Original language: English
Title of host publication: The Confluence of Vision and Control
Publisher: Springer London
Pages: 863-869
Number of pages: 7
Volume: 237
DOIs
State: Published - 1998
Externally published: Yes
Event: Proceedings of the 1998 IEEE 6th International Conference on Computer Vision - Bombay, India
Duration: 4 Jan 1998 – 7 Jan 1998

Conference

Conference: Proceedings of the 1998 IEEE 6th International Conference on Computer Vision
City: Bombay, India
Period: 4/01/98 – 7/01/98

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition

