View-dependent deformation for virtual human modeling from silhouettes

Authors Wang, Yu (HKUST)
Wang, Charlie Changling (HKUST)
Yuen, Matthew Ming Fai (HKUST)
Issue Date 2001
Source Visualization, Imaging, and Image Processing: Proceedings of the IASTED International Conference, Marbella, Spain, 2001, p. 140-144
Summary The primary objective of this research is to develop an efficient and intuitive deformation technique for virtual human modeling from silhouette input. In our method, the reference silhouettes (the front-view and right-view silhouettes of the synthetic human model) and the target silhouettes (the front-view and right-view silhouettes of the human in the photographs) are used to modify the synthetic human model, which is represented by a polygonal mesh. The system moves the vertices of the polygonal model so that the spatial relation between the original positions and the reference silhouettes is identical to the relation between the resulting positions and the target silhouettes. Our method is related to axial deformation, and the self-intersection problem is solved.
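The core idea in the summary, that each vertex should keep the same spatial relation to the target silhouette that it had to the reference silhouette, can be illustrated with a simplified sketch. This is not the paper's actual algorithm (which uses axial deformation and two views); it is a minimal one-view analogue in which a vertex's lateral offset from the body axis is rescaled by the ratio of target to reference silhouette half-widths at the vertex's height. All function and variable names are illustrative assumptions.

```python
# Simplified one-view sketch of silhouette-driven deformation.
# Assumption: each silhouette is given as (N, 2) samples of
# (height y, half-width from the body axis), sorted by y.
import numpy as np

def silhouette_halfwidth(silhouette, y):
    """Linearly interpolate the silhouette's half-width at height y."""
    return np.interp(y, silhouette[:, 0], silhouette[:, 1])

def deform_vertices(vertices, ref_sil, tgt_sil):
    """Scale each vertex's x-offset from the axis by the ratio of the
    target to reference half-widths at that vertex's height, so its
    relative position inside the silhouette is preserved."""
    out = vertices.copy()
    for i, (x, y, z) in enumerate(vertices):
        w_ref = silhouette_halfwidth(ref_sil, y)
        w_tgt = silhouette_halfwidth(tgt_sil, y)
        out[i, 0] = x * (w_tgt / w_ref)  # y and z are left unchanged
    return out

# Toy example: reference half-width 1.0 everywhere, target 1.5,
# so every x-coordinate is scaled by 1.5.
ref = np.array([[0.0, 1.0], [2.0, 1.0]])
tgt = np.array([[0.0, 1.5], [2.0, 1.5]])
verts = np.array([[0.5, 1.0, 0.0], [-1.0, 0.5, 0.2]])
deformed = deform_vertices(verts, ref, tgt)
```

A full implementation along the paper's lines would handle both the front and right views, deform along an axial curve rather than a fixed axis, and guard against the self-intersection problem the summary mentions.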
Rights Copyright © ACTA Press. This paper is made available with permission of ACTA Press.
Language English
Format Conference paper
Files in this item:
File: 326028new.pdf
Size: 972408 B
Format: Adobe PDF