Viewpoint-independent book spine segmentation

Lior Talker, Yael Moses

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We propose a method to precisely segment books on bookshelves in images taken from general viewpoints. The proposed segmentation algorithm overcomes difficulties due to text and texture on book spines, various book orientations under perspective projection, and book proximity. A shape-dependent active contour is used as a first step to establish a set of book spine candidates. A subset of these candidates is then selected by imposing spatial constraints on the assembly of spine candidates, formulating the selection problem as the maximal weighted independent set (MWIS) of a graph. The segmented book spines may be used by recognition systems (e.g., library automation), or rendered in computer graphics applications. We also propose a novel application that uses the segmented book spines to assist users in bookshelf reorganization, or to modify the image to create a bookshelf with a tidier look. Our method was successfully tested on challenging sets of images.
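The MWIS step described in the abstract can be illustrated with a minimal sketch: each spine candidate becomes a graph node weighted by a fitness score, mutually incompatible candidates (e.g., overlapping regions) are joined by conflict edges, and the selected subset is a maximum-weight independent set. The scoring function, the conflict test, and the solver below are illustrative assumptions, not the paper's actual formulation, which uses spatial constraints on the candidate assembly.

```python
from typing import Dict, List, Set, Tuple


def mwis(nodes: List[int],
         weights: Dict[int, float],
         conflicts: Dict[int, Set[int]]) -> Tuple[float, Set[int]]:
    """Exact maximum-weight independent set via include/exclude recursion.

    nodes     : candidate ids still under consideration
    weights   : candidate id -> fitness score (assumed non-negative)
    conflicts : candidate id -> ids it conflicts with (e.g., overlapping spines)

    Exponential in the worst case, so only suitable for modest candidate counts;
    the paper itself does not prescribe this particular solver.
    """
    if not nodes:
        return 0.0, set()
    v, rest = nodes[0], nodes[1:]

    # Branch 1: exclude candidate v.
    w_excl, s_excl = mwis(rest, weights, conflicts)

    # Branch 2: include v and discard every candidate that conflicts with it.
    compatible = [u for u in rest if u not in conflicts[v]]
    w_incl, s_incl = mwis(compatible, weights, conflicts)
    w_incl += weights[v]

    return (w_incl, s_incl | {v}) if w_incl >= w_excl else (w_excl, s_excl)


if __name__ == "__main__":
    # Hypothetical example: four spine candidates, where 0/1 and 2/3 overlap.
    weights = {0: 2.5, 1: 1.0, 2: 3.0, 3: 2.0}
    conflicts = {0: {1}, 1: {0}, 2: {3}, 3: {2}}
    best_weight, best_set = mwis(list(weights), weights, conflicts)
    print(best_weight, sorted(best_set))  # 5.5 [0, 2]
```

In this toy instance the solver keeps the higher-scoring candidate from each conflicting pair, which mirrors the role the MWIS formulation plays in the paper: retaining a consistent, non-overlapping subset of spine candidates.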

Original language: English
Title of host publication: 2014 IEEE Winter Conference on Applications of Computer Vision, WACV 2014
Publisher: IEEE Computer Society
Pages: 453-460
Number of pages: 8
ISBN (Print): 9781479949854
DOIs
State: Published - 2014
Externally published: Yes
Event: 2014 IEEE Winter Conference on Applications of Computer Vision, WACV 2014 - Steamboat Springs, CO, United States
Duration: 24 Mar 2014 - 26 Mar 2014

Publication series

Name: 2014 IEEE Winter Conference on Applications of Computer Vision, WACV 2014

Conference

Conference: 2014 IEEE Winter Conference on Applications of Computer Vision, WACV 2014
Country/Territory: United States
City: Steamboat Springs, CO
Period: 24/03/14 - 26/03/14

ASJC Scopus subject areas

  • Computer Science Applications
  • Computer Vision and Pattern Recognition
