

Please use this identifier to cite or link to this item: http://hdl.handle.net/1783.1/3078
Title: 3D modeling from photometry and geometry
Authors: Tan, Ping
Issue Date: 2007
Abstract: In this thesis, we focus on 3D reconstruction from multiple images. We explore different approaches to reconstruction, including photometric methods, i.e. modeling under changing lighting, and geometric methods, i.e. modeling from changing viewpoints. Photometric stereo uses images taken under different lighting conditions to build a 3D model of an object. In this work, we improve photometric stereo in both reconstruction accuracy and simplicity of data capture. In conventional photometric stereo algorithms, surface shape can only be recovered at the resolution of the input images, since only one normal direction is computed per pixel. For a rough surface, however, there often exist geometric structures at the sub-pixel level. We have studied the relationship between surface reflectance and sub-pixel geometric structures to design a new photometric stereo algorithm that recovers them, significantly improving modeling accuracy. Another limitation of conventional photometric stereo is that the lighting conditions must be recorded during data capture; otherwise, surface shape can only be recovered up to an unknown Generalized Bas-Relief (GBR) ambiguity. In this thesis, we observe that isotropy and reciprocity induce symmetry structures on the Gauss sphere for any isotropic surface, and that these symmetries are destroyed by a GBR transformation. Hence, we can resolve the GBR ambiguity by restoring the broken symmetry. With our method, there is no need to record lighting conditions during data capture for isotropic surfaces, which significantly simplifies the capture procedure. Image-based modeling (IBM) uses images from different viewpoints to build a 3D model of an object. Previous image-based modeling methods are very successful at recovering camera poses and a set of 3D points on the object, but these recovered 3D points are typically unstructured.
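For context, classical Lambertian photometric stereo with known light directions reduces to per-pixel linear least squares: stacking the light directions into a matrix L, the observed intensities satisfy i = L b, where b is the albedo-scaled surface normal. The sketch below illustrates that textbook baseline only (not the thesis's sub-pixel or uncalibrated algorithms); all values are synthetic for the demonstration.

```python
import numpy as np

# Known light directions, one per input image (k x 3), normalized to unit length.
L = np.array([
    [0.0, 0.0, 1.0],
    [0.5, 0.0, 1.0],
    [0.0, 0.5, 1.0],
])
L = L / np.linalg.norm(L, axis=1, keepdims=True)

# Synthetic ground truth for one pixel: scaled normal b = albedo * n.
n_true = np.array([0.2, -0.1, 1.0])
n_true /= np.linalg.norm(n_true)
albedo_true = 0.8
b_true = albedo_true * n_true

# Lambertian image formation (shadows ignored): one intensity per light.
i = L @ b_true

# Photometric stereo: recover b by least squares, then factor out albedo.
b, *_ = np.linalg.lstsq(L, i, rcond=None)
albedo = np.linalg.norm(b)
n = b / albedo
```

With three or more non-coplanar lights the system is fully determined, so the recovered normal and albedo match the ground truth exactly; the thesis's contribution goes beyond this per-pixel resolution limit.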
To generate 3D models ready to be used in applications such as movies, games, or virtual tours, we organize these unstructured 3D points and build high-quality texture-mapped models from them. There are two key contributions in our approach. First, we propose to segment the 3D points, together with the 2D images, into individual objects by a joint 2D-3D segmentation method. General image segmentation is a very difficult problem, but since we have both 2D and 3D information, which reinforce each other, the segmentation can be performed quite efficiently. The segmentation results provide clearly defined object boundaries, which are very useful for high-quality modeling. Second, we propose to synthesize occluded structure according to object priors learned from the visible structure. Occlusion is an inevitable problem in image-based methods: little or no information is available in occluded regions. Our method successfully propagates information from visible regions to invisible ones. We test our method on image-based modeling of trees, and with our approach we can build highly realistic digital models of trees from their images.
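The GBR ambiguity mentioned in the abstract can be checked numerically: for a Lambertian scene, transforming every albedo-scaled normal b by G^{-T} and every light s by G, where G is any invertible GBR matrix, leaves all image intensities b·s unchanged, which is why uncalibrated photometric stereo cannot distinguish the two scenes. Below is a minimal numpy sketch of this standard invariance (the matrix parametrization follows the usual GBR form; it does not implement the thesis's symmetry-based resolution of the ambiguity).

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard GBR matrix with parameters (mu, nu, lam); invertible when lam != 0.
mu, nu, lam = 0.3, -0.2, 1.5
G = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [mu,  nu,  lam],
])

# Random albedo-scaled normals (one row per pixel) and lights (one row per image).
B = rng.normal(size=(5, 3))
S = rng.normal(size=(4, 3))

# GBR-transformed scene: normals by the inverse transpose, lights by G.
B_gbr = B @ np.linalg.inv(G)   # rows satisfy b' = G^{-T} b
S_gbr = S @ G.T                # rows satisfy s' = G s

# Lambertian intensities i = b . s are identical for both scenes,
# since b'.s' = (G^{-T} b).(G s) = b.s.
I_orig = B @ S.T
I_gbr = B_gbr @ S_gbr.T
```

Because the intensity matrices agree entry by entry, no image measurement alone can pin down (mu, nu, lam); the thesis resolves this by exploiting isotropy and reciprocity symmetries that a GBR transform destroys.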
Description: Thesis (Ph.D.)--Hong Kong University of Science and Technology, 2007
viii, 111 leaves : ill. ; 30 cm
HKUST Call Number: Thesis CSED 2007 Tan
URI: http://hdl.handle.net/1783.1/3078
Appears in Collections:CSE Doctoral Theses

Files in This Item:

File              Size  Format
th_redirect.html  0Kb   HTML

All items in this Repository are protected by copyright, with all rights reserved.