HKUST Institutional Repository >
Title: Fusing images with different focuses using support vector machines
Authors: Li, Shutao; Kwok, James Tin-Yau; Tsang, Ivor W.
Keywords: Image fusion; Support vector machines
Issue Date: Nov-2004
Citation: IEEE Transactions on Neural Networks, v. 15, no. 6, Nov. 2004, p. 1555-1561
Abstract: Many vision-related processing tasks, such as edge detection, image segmentation, and stereo matching, can be performed more easily when all objects in the scene are in good focus. However, in practice, this may not always be feasible, as optical lenses, especially those with long focal lengths, have only a limited depth of field. One common approach to recovering an everywhere-in-focus image is wavelet-based image fusion. First, several source images of the same scene, each with a different focus, are taken and processed with the discrete wavelet transform (DWT). Among these wavelet decompositions, the wavelet coefficient with the largest magnitude is selected at each pixel location. Finally, the fused image is recovered by performing the inverse DWT. In this paper, we improve this fusion procedure by applying the discrete wavelet frame transform (DWFT) and the support vector machine (SVM). Unlike the DWT, the DWFT yields a translation-invariant signal representation. Using features extracted from the DWFT coefficients, an SVM is trained to select the source image that has the best focus at each pixel location, and the corresponding DWFT coefficients are then incorporated into the composite wavelet representation. Experimental results show that the proposed method outperforms the traditional approach both visually and quantitatively.
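The traditional baseline the abstract describes (transform each source image, keep the largest-magnitude coefficient at each location, then inverse-transform) can be sketched in NumPy. This is a minimal single-level Haar illustration with hypothetical helper names, not the paper's DWFT+SVM method:

```python
import numpy as np

def haar2d(x):
    """Single-level 2D Haar DWT of an array with even dimensions.
    Returns the (LL, LH, HL, HH) subbands."""
    a = (x[0::2, :] + x[1::2, :]) / np.sqrt(2)   # row low-pass
    d = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)   # row high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)  # column low-pass of a
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d: reconstruct the image from the four subbands."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = (ll + lh) / np.sqrt(2)
    a[:, 1::2] = (ll - lh) / np.sqrt(2)
    d = np.empty_like(a)
    d[:, 0::2] = (hl + hh) / np.sqrt(2)
    d[:, 1::2] = (hl - hh) / np.sqrt(2)
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :] = (a + d) / np.sqrt(2)
    x[1::2, :] = (a - d) / np.sqrt(2)
    return x

def fuse_max_magnitude(img1, img2):
    """Fuse two differently focused images: at each coefficient location,
    keep the coefficient with the larger magnitude, then invert."""
    bands1, bands2 = haar2d(img1), haar2d(img2)
    fused = [np.where(np.abs(b1) >= np.abs(b2), b1, b2)
             for b1, b2 in zip(bands1, bands2)]
    return ihaar2d(*fused)
```

Because the per-location coefficient choice is made independently in each subband, the DWT's lack of translation invariance can cause artifacts near focus boundaries; the paper's DWFT representation avoids the decimation step that causes this.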
Rights: © 2004 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes, for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
This research has been partially supported by the Research Grants Council of the Hong Kong Special Administrative Region under grant HKUST6195/02E.
Appears in Collections: CSE Journal/Magazine Articles
Files in This Item:
fusing.pdf (pre-published version, 1292 KB, Adobe PDF)
All items in this Repository are protected by copyright, with all rights reserved.