HKUST Institutional Repository >
|Title: ||Expressive facial animation transfer for virtual actors|
|Authors: ||Zhao, Hui|
|Issue Date: ||2007 |
|Abstract: ||One of the most difficult challenges in computer graphics is creating digital characters that are indistinguishable from real human beings. Because people are highly sensitive to subtleties in human faces, the key difficulty in producing a convincing computer-graphics character lies in creating a realistic 3D face model. Current technologies used in the film industry can produce plausible facial animation for virtual actors, but these systems are too complex and expensive for non-specialists to deploy in daily use.
Driven by the popularity of online chat and 3D games, there is high demand for realistic virtual characters that can reflect the facial expressions of the user. In this thesis, we first give a brief introduction to current research and technology on digital virtual characters, and then present an effective method that transfers facial animation from 2D videos onto 3D faces in a visually pleasing manner. Our method is based on a Laplacian deformation framework. We propose to represent the facial animation by the displacements of a set of feature points. Under the assumption that the feature points move only parallel to the image plane, i.e. in the X and Y directions, we can map the displacements of the feature points from a 2D video onto a 3D face. The positions of the non-feature points are then computed from the Laplacian system. Our method can transfer both facial expressions and speech animation. Furthermore, it is efficient and practical, with an intuitive interface. The proposed technique outperforms previous methods based on machine learning or anatomical models in terms of speed and applicability, and is useful for a wide range of applications, such as avatars, character animation for 3D films, computer games, and online chat. We demonstrate the versatility of our approach with special effects such as expression exaggeration and expression imitation.|
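The abstract's pipeline can be illustrated with a minimal sketch: feature vertices receive 2D displacements (applied only in the X and Y directions, per the stated assumption), and all other vertex positions are solved in least squares so that the mesh's Laplacian (differential) coordinates are preserved. This uses uniform graph-Laplacian weights and soft positional constraints; the function name, the constraint weight `w`, and the uniform weighting are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def transfer_displacements(verts, edges, feature_ids, disp_2d, w=10.0):
    """Sketch of Laplacian-based expression transfer (assumed uniform
    weights). verts: (n, 3) rest-pose positions; edges: mesh edge list;
    feature_ids: indices of tracked feature points; disp_2d: (k, 2)
    per-feature displacements taken from the 2D video."""
    n = len(verts)
    # Uniform graph Laplacian L = D - A
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    delta = L @ verts  # differential coordinates of the rest pose
    # Soft positional constraints: feature points move parallel to the
    # image plane only (X-Y), leaving their depth (Z) at the rest value.
    C = np.zeros((len(feature_ids), n))
    targets = np.zeros((len(feature_ids), 3))
    for k, fid in enumerate(feature_ids):
        C[k, fid] = w
        t = verts[fid].copy()
        t[:2] += disp_2d[k]
        targets[k] = w * t
    # Solve [L; C] x = [delta; targets] in least squares: non-feature
    # points follow so that local surface detail (delta) is preserved.
    A = np.vstack([L, C])
    b = np.vstack([delta, targets])
    new_verts, *_ = np.linalg.lstsq(A, b, rcond=None)
    return new_verts
```

In a real system the Laplacian would be assembled sparsely and prefactored once, since only the constraint targets change from frame to frame.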
|Description: ||Thesis (M.Phil.)--Hong Kong University of Science and Technology, 2007|
viii, 41 leaves : ill. (chiefly col.) ; 30 cm
HKUST Call Number: Thesis CSED 2007 Zhao
|Appears in Collections:||CSE Master Theses |
All items in this Repository are protected by copyright, with all rights reserved.