Image super-resolution refers to the process of synthesizing a higher-resolution image from one or more low-resolution images. It has numerous real-world applications in computer vision and computer graphics. In this thesis, we propose a novel learning-based method for single-image super-resolution. Given a low-resolution image, its underlying higher-resolution details are synthesized based on a set of training images. To build a compact yet descriptive training set, we investigate the characteristic local primitive structures contained in large volumes of small image patches. Inspired by recent progress in manifold learning research, we adopt the assumption that these small primitive patches in the low-resolution and high-resolution images form manifolds with similar local geometry in the corresponding image feature spaces. This assumption leads to a super-resolution approach that reconstructs the feature vector of an image patch from its neighbors in the feature space. To speed up the nearest-neighbor search for any input image patch, we partition the training set into multiple clusters and also sample the training set probabilistically so that the data distribution is as uniform as possible. These two preprocessing techniques make the neighbor embedding algorithm both more efficient and more effective. Experimental results show that our super-resolution method synthesizes higher-quality images than existing methods.
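The neighbor-embedding idea summarized above can be illustrated with a minimal sketch. The function below is an assumption-laden illustration, not the thesis's actual implementation: it takes a low-resolution patch feature, finds its `k` nearest neighbors among training low-resolution features, solves for locally-linear-embedding-style reconstruction weights, and applies those same weights to the paired high-resolution patches. The function name, regularization scheme, and brute-force search are all hypothetical choices made for clarity.

```python
import numpy as np

def embed_patch(x, lr_feats, hr_patches, k=5, reg=1e-4):
    """Estimate an HR patch from an LR feature vector by neighbor embedding.

    x          : (d,) feature vector of the input LR patch
    lr_feats   : (n, d) features of the training LR patches
    hr_patches : (n, m) flattened training HR patches, paired row-for-row
    """
    # Brute-force nearest-neighbor search (the thesis speeds this up with
    # clustering and probabilistic sampling of the training set).
    dists = np.linalg.norm(lr_feats - x, axis=1)
    idx = np.argsort(dists)[:k]

    # Solve for reconstruction weights that sum to one, as in locally
    # linear embedding: minimize ||x - sum_j w_j * neighbor_j||^2.
    Z = lr_feats[idx] - x                  # neighbors centered on the query
    G = Z @ Z.T                            # local Gram matrix
    G += reg * np.trace(G) * np.eye(k)     # regularize for numerical stability
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()

    # Under the similar-local-geometry assumption, the same weights
    # reconstruct the corresponding HR patch.
    return w @ hr_patches[idx]
```

In a full pipeline, the input image would be split into overlapping patches, each patch reconstructed this way, and the overlapping HR estimates averaged back into a single output image.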