
Title: Structuring visual words in 3D for arbitrary-view object localization
Authors: Xiao, Jianxiong
Chen, Jingni
Yeung, Dit-Yan
Quan, Long
Keywords: Structural information
3D locations
Arbitrary-view object localization
Issue Date: Oct-2008
Citation: Proceedings of the 10th European Conference on Computer Vision (ECCV 2008), Marseille, France, 12-18 October 2008, Part III, LNCS 5304, pp. 725-737
Abstract: We propose a novel and efficient method for generic arbitrary-view object class detection and localization. In contrast to existing single-view and multi-view methods, which use complicated mechanisms to relate the structural information in different parts of the objects or different viewpoints, we aim at representing the structural information in its true 3D locations. Uncalibrated multi-view images from a hand-held camera are used to reconstruct the 3D visual word models in the training stage. In the testing stage, beyond bounding boxes, our method can automatically determine the locations and outlines of multiple objects in the test image with occlusion handling, and can accurately estimate both the intrinsic and extrinsic camera parameters in an optimized way. With exemplar models, our method can also handle shape deformation for intra-class variance. To handle large data sets from models, we propose several speedup techniques to make the prediction efficient. Experimental results on several standard data sets demonstrate the effectiveness of the proposed approach.
Rights: The original publication is available at
Appears in Collections:CSE Conference Papers

Files in This Item:

File: detection.pdf  Size: 2246Kb  Format: Adobe PDF

All items in this Repository are protected by copyright, with all rights reserved.