Conference on Computer Vision and Pattern Recognition, June 1996.
The silhouette of a smooth 3D object observed by a moving camera changes over time. Past work has shown how surface geometry can be recovered from the deformation of the silhouette when the camera motion is known. This paper addresses the problem of estimating both the full Euclidean structure of the surface and the camera motion from a dense sequence of silhouettes captured under orthographic or scaled orthographic projection. The approach relies on a viewpoint-invariant representation of the curves swept out by viewpoint-dependent features such as bitangents, inflections, and pairs of contour points with parallel tangents. Using this representation, feature points that form stereo frontier points between non-consecutive images are matched. The camera's angular velocity is then computed from constraints derived from these correspondences together with the image velocities of the features. From the angular velocity, the epipolar geometry is determined, and infinitesimal-motion frontier points can be detected; the motion of these frontier points in turn constrains the translational component of the camera motion. Once the camera motion has been estimated, the surface is reconstructed using established techniques.
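To make the frontier-point step concrete, the following is a minimal sketch (not the paper's implementation) of detecting frontier points on a silhouette under orthographic projection, under simplifying assumptions chosen for illustration: the two camera rotations are known, and the object is a unit sphere, whose silhouette in every orthographic view is the unit circle. Under orthographic projection the epipolar lines in one image are parallel, with direction given by projecting the other camera's viewing direction; a frontier point is a contour point whose tangent is parallel to that epipolar direction. All names (`rot_y`, `frontier`, etc.) are hypothetical.

```python
import numpy as np

def rot_y(theta):
    """Rotation about the y-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

# Assumed setup: two orthographic views related by a known 10-degree rotation.
R1 = np.eye(3)
R2 = rot_y(np.deg2rad(10.0))

# Viewing direction of camera 2 in world coordinates (third row of R2),
# projected into image 1: this gives the common epipolar line direction there.
v2 = R2[2]
d1 = np.array([R1[0] @ v2, R1[1] @ v2])
d1 /= np.linalg.norm(d1)

# Silhouette of the unit sphere in view 1: the unit circle, sampled densely.
t = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
contour = np.stack([np.cos(t), np.sin(t)], axis=1)
tangent = np.stack([-np.sin(t), np.cos(t)], axis=1)

# Frontier points: contour points whose tangent is parallel to the epipolar
# direction, i.e. where the 2D cross product of tangent and d1 vanishes.
cross = tangent[:, 0] * d1[1] - tangent[:, 1] * d1[0]
idx = np.argsort(np.abs(cross))[:2]
frontier = contour[idx]
```

For a rotation about the y-axis the epipolar lines in image 1 are horizontal, so the detected frontier points land at the top and bottom of the circle, near (0, 1) and (0, -1); on real silhouettes the contour and its tangents would come from measured image curves rather than a closed form.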