TY - GEN
T1 - Pose, illumination and expression invariant pairwise face-similarity measure via Doppelgänger list comparison
AU - Schroff, Florian
AU - Treibitz, Tali
AU - Kriegman, David
AU - Belongie, Serge
PY - 2011
Y1 - 2011
N2 - Face recognition approaches have traditionally focused on direct comparisons between aligned images, e.g. using pixel values or local image features. Such comparisons become prohibitively difficult when comparing faces across extreme differences in pose, illumination and expression. The goal of this work is to develop a face-similarity measure that is largely invariant to these differences. We propose a novel data-driven method based on the insight that comparing images of faces is most meaningful when they are in comparable imaging conditions. To this end, we describe an image of a face by an ordered list of identities from a Library. The order of the list is determined by the similarity of the Library images to the probe image. The lists act as a signature for each face image: similarity between face images is determined via the similarity of the signatures. Here the CMU Multi-PIE database, which includes images of 337 individuals in more than 2000 pose, lighting and illumination combinations, serves as the Library. We show improved performance over state-of-the-art face-similarity measures based on local features, such as FPLBP, especially across large pose variations on FacePix and Multi-PIE. On LFW we show improved performance in comparison with measures like SIFT (on fiducials), LBP, FPLBP and Gabor (C1).
UR - http://www.scopus.com/inward/record.url?scp=84856663391&partnerID=8YFLogxK
U2 - 10.1109/ICCV.2011.6126535
DO - 10.1109/ICCV.2011.6126535
M3 - Conference contribution
AN - SCOPUS:84856663391
SN - 9781457711015
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 2494
EP - 2501
BT - 2011 International Conference on Computer Vision, ICCV 2011
T2 - 2011 IEEE International Conference on Computer Vision, ICCV 2011
Y2 - 6 November 2011 through 13 November 2011
ER -