The paper presents a new approach to shape recovery that integrates geometric and photometric information. We consider the reconstruction, from a single image, of 3D objects that are symmetric with respect to a plane (e.g., faces). Neither the viewpoint nor the illumination need be frontal. In principle, no correspondence between symmetric points is required, but knowledge of a few corresponding pairs accelerates the process. The basic idea is that an image taken from a general, non-frontal viewpoint under non-frontal illumination can be regarded as a pair of images of half of the object, taken from two different viewing positions and under two different lighting directions. We show that integrating the photometric and geometric information yields the unknown lighting and viewing parameters, as well as dense correspondence between pairs of symmetric points. As a result, a dense shape reconstruction of the object is computed. The method has been implemented and tested experimentally on simulated and real data.
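The core observation above can be illustrated numerically. The following is a minimal sketch, not the paper's algorithm: assuming Lambertian reflectance, a symmetry plane at x = 0, and hypothetical example values for the normal and light direction, it checks that the intensity at a point's mirror-symmetric partner equals the shading of the same normal under the mirrored light. Thus one image of a symmetric object supplies two photometric measurements per half-object normal.

```python
import numpy as np

def mirror_x(v):
    """Reflect a 3-vector across the symmetry plane x = 0."""
    return np.array([-v[0], v[1], v[2]])

# Hypothetical example values (not taken from the paper): a unit surface
# normal and a non-frontal light direction.
n = np.array([0.3, 0.1, 0.95]); n /= np.linalg.norm(n)
l = np.array([0.5, 0.2, 0.8]);  l /= np.linalg.norm(l)
albedo = 0.9

# Lambertian intensity at the point and at its mirror-symmetric partner,
# whose normal is the reflection of n across the symmetry plane.
I_p = albedo * max(0.0, n @ l)
I_q = albedo * max(0.0, mirror_x(n) @ l)

# The partner's intensity equals shading of the SAME normal n under the
# mirrored light: one image of the symmetric object behaves like two
# images of one half, under lights l and mirror_x(l).
I_q_equiv = albedo * max(0.0, n @ mirror_x(l))
print(abs(I_q - I_q_equiv) < 1e-12)  # True
```

This identity is what lets the two "half-object images" be treated as a photometric pair with known relative lighting once the symmetry plane is estimated.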