|Title: ||Visual enhancement using multiple cues|
|Authors: ||Chen, Jia|
|Issue Date: ||2009 |
|Abstract: ||Despite the advances in imaging technology, fundamental limitations of cameras still exist, and captured photographs can be defective. Two major types of defects are blur and noise. Enhancement of images and video is an important topic in computer vision and graphics, because it can serve not only as a pre-processing step for other algorithms but also as a post-processing step that directly produces enhanced output for users.
In this thesis, I will explore the issues in, and propose solutions to, visual enhancement of images corrupted by blur and noise. A great number of previous works exist, and most of them use a single image for enhancement. This thesis handles the problem from a different perspective: using multiple cues for visual enhancement. For each specific problem, new cues are introduced in addition to the source image itself. In deblurring, we introduce one more shot, converting the problem from using a single image to using two images. In denoising, we use the noise layer as well as the noisy image for image-noise separation. We formulate video denoising as optimizing one frame against multiple temporal observations.
While it is intuitive that using multiple cues provides more information for enhancement, many research challenges remain. First, the multiple cues must be constructed and collected in proper ways. Second, these cues should be linked in a computational framework. The theme of this thesis is a unified multi-cue enhancement approach, in which we emphasize the importance of an invariant quantity for linking multiple cues. Specific optimization procedures are designed to integrate prior knowledge with these observations.
I will first analyze the image deblurring problem using two blurry images. Assuming that the two inputs are taken of the same static scene, the invariant quantity is the common clear image. Since the two input images have different motion blur defects, their frequency spectra are complementary to each other. A feedback algorithm is proposed that iterates over kernel estimation and image deconvolution steps. This approach introduces an image prior and a motion prior in the context of multiple observations. Consequently, the visual quality of enhancement is greatly improved compared to previous methods that use a single image.
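The complementarity of two differently blurred observations can be illustrated with a minimal frequency-domain sketch. This is not the thesis's feedback algorithm (which alternates kernel estimation with deconvolution and uses image and motion priors); it assumes both blur kernels are already known and that blur is circular convolution, and all names are hypothetical:

```python
import numpy as np

def joint_wiener_deblur(y1, k1, y2, k2, eps=1e-3):
    """Recover a latent image from two differently blurred observations.

    Each observation y_i is the latent image circularly convolved with
    kernel k_i.  In the Fourier domain the two blurs attenuate different
    frequencies, so frequencies lost in one observation can be recovered
    from the other.
    """
    K1 = np.fft.fft2(k1, s=y1.shape)
    K2 = np.fft.fft2(k2, s=y2.shape)
    Y1 = np.fft.fft2(y1)
    Y2 = np.fft.fft2(y2)
    # Least-squares solution over both observations; eps regularizes the
    # few frequencies where both kernel spectra are small.
    X = (np.conj(K1) * Y1 + np.conj(K2) * Y2) / (
        np.abs(K1) ** 2 + np.abs(K2) ** 2 + eps)
    return np.real(np.fft.ifft2(X))
```

With a horizontal and a vertical box blur, for example, each kernel's weak frequencies lie along a different axis, so the combined denominator stays well away from zero almost everywhere.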
The second part of this thesis focuses on denoising. Removing noise from images has been studied for decades. However, most previous automatic approaches take only the image itself as the processing target. We show that even with a single input image, an auxiliary cue, namely the noise layer, can be constructed. The optimization can then be performed on both the image layer and the noise layer. Using the extracted noise layer, the artifacts of denoising algorithms can be easily visualized. We propose an interactive system based on this representation, which helps the user achieve high-quality image-noise separation results.
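As a toy illustration of the two-layer representation (not the thesis's interactive system), the sketch below constructs the noise layer as the residual of a simple local-mean denoiser, so the two layers always sum back exactly to the input; the denoiser and all names are stand-ins:

```python
import numpy as np

def split_image_and_noise(noisy, radius=1):
    """Decompose a noisy image into an image layer and a noise layer.

    The image layer here is a local-mean estimate (a placeholder for any
    denoiser); the noise layer is the residual, so by construction
    image_layer + noise_layer reproduces the input exactly.
    """
    pad = np.pad(noisy, radius, mode='edge')
    h, w = noisy.shape
    image_layer = np.zeros((h, w), dtype=float)
    n = (2 * radius + 1) ** 2
    # Average over the (2r+1) x (2r+1) neighborhood of each pixel.
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            image_layer += pad[radius + dy: radius + dy + h,
                               radius + dx: radius + dx + w]
    image_layer /= n
    noise_layer = noisy - image_layer
    return image_layer, noise_layer
```

Because the decomposition is exact, inspecting the noise layer directly reveals any image structure a denoiser has mistakenly removed.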
The image denoising system will be extended to video, where multiple frames exist as temporal cues for estimating one clean frame. The key issue is how to set up connections between the temporal cues. One classical method for finding inter-frame correspondences is optical flow, which estimates pixel-wise alignment. An extended method is introduced that computes a probabilistic motion field, characterizing soft temporal correspondences between frames. The matched pixels are placed inside a spatio-temporal Markov random field (MRF), and the clean frame is inferred by solving this MRF.|
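The idea of soft temporal correspondence can be sketched minimally as follows, assuming static content so that per-pixel similarity alone can stand in for the probabilistic motion field; the thesis additionally handles motion and solves a spatio-temporal MRF, which this toy version omits, and all names are hypothetical:

```python
import numpy as np

def temporal_denoise(frames, ref_idx, h=0.2):
    """Estimate one clean frame from several noisy frames.

    Instead of committing to a single hard match per pixel, every frame
    contributes at each pixel with a soft weight based on its similarity
    to the reference pixel, and the clean value is the weighted mean.
    """
    frames = np.asarray(frames, dtype=float)
    ref = frames[ref_idx]
    # Soft correspondence weights: similar pixels get weight near 1,
    # dissimilar ones (e.g. occluded or moving content) near 0.
    diffs = frames - ref
    weights = np.exp(-(diffs ** 2) / (h ** 2))
    return (weights * frames).sum(axis=0) / weights.sum(axis=0)
```

The bandwidth h plays the role of the correspondence uncertainty: a small h trusts only near-exact matches, while a large h approaches plain temporal averaging.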
|Description: ||Thesis (Ph.D.)--Hong Kong University of Science and Technology, 2009|
xii, 90 p. : ill. ; 30 cm
HKUST Call Number: Thesis CSED 2009 ChenJ
|Appears in Collections:||CSE Doctoral Theses|
All items in this Repository are protected by copyright, with all rights reserved.