Nowadays, a wide variety of video formats coexist. Displaying a video in a different format requires converting it to the display format, usually by interpolation. This thesis investigates two format conversions that require interpolation: deinterlacing and frame rate up-conversion (FRUC). Deinterlacing converts interlaced video into progressive video by filling in the missing scan lines of each field. If the interpolation direction is chosen wrongly from the spatial and temporal candidates, the resulting video suffers from visual artifacts such as flicker, blur, and distorted edges. FRUC increases the frame rate by inserting extra frames into the original video. Unlike deinterlacing, FRUC has no partial pixel information for the inserted frames; accurate correspondence between successive frames is therefore necessary to avoid visual artifacts such as blocking, blur, and distorted structure.

A line-warping-based deinterlacing system is proposed in this thesis. Warping between two spatial lines can effectively interpolate the missing line. Temporal lines, however, are less correlated with the missing line because of possible motion. A robust multiple temporal line-warping is therefore proposed, which exploits all possible motion directions for interpolation. With a spatial-temporal merger and a soft switch between the interpolated result and a pure temporal average, the deinterlaced result is more robust and shows fewer artifacts than the compared methods in both stationary and moving regions.

A novel FRUC framework is developed that combines predictive variable-block-size motion estimation with optical flow for reliable motion estimation. The hybrid structure compensates for the shortcomings of each component, so the resulting dense and smooth motion field adapts to large motion. Given this accurate motion field, the missing frame can be interpolated robustly through the proposed motion vector interpolation and selection.
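To make the directional spatial interpolation behind deinterlacing concrete, the sketch below implements classic edge-based line averaging (ELA), a simpler relative of the line-warping idea: each missing pixel is filled by averaging the line above and below along the direction of least difference. The function name and the restriction to three candidate directions are illustrative choices, not the thesis's method.

```python
import numpy as np

def ela_deinterlace_field(field):
    """Edge-based Line Average: reconstruct the missing lines of a
    top-field frame. `field` holds the known (even) lines; each missing
    (odd) line is interpolated by averaging the lines above and below
    along the direction with the smallest absolute difference."""
    h, w = field.shape
    out = np.zeros((2 * h, w), dtype=float)
    out[0::2] = field  # copy the known lines into the even rows
    for y in range(h - 1):
        top, bot = field[y], field[y + 1]
        row = np.empty(w)
        for x in range(w):
            best = None
            # candidate directions: 45 degrees left, vertical, 45 right
            for d in (-1, 0, 1):
                if 0 <= x + d < w and 0 <= x - d < w:
                    cost = abs(top[x + d] - bot[x - d])
                    if best is None or cost < best[0]:
                        best = (cost, 0.5 * (top[x + d] + bot[x - d]))
            row[x] = best[1]
        out[2 * y + 1] = row
    out[-1] = field[-1]  # duplicate the last known line at the border
    return out
```

A purely spatial scheme like this is exactly where the artifacts mentioned above arise in stationary regions, which is why the proposed system also warps temporal lines and merges the two domains.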
The inserted frames are shown to have higher visual quality than those produced by pure motion-estimation or pure optical-flow based methods.
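As a minimal illustration of motion-compensated frame interpolation, the sketch below uses exhaustive-search block motion estimation and a compensated average; it is a deliberately plain stand-in for the thesis's hybrid of predictive variable-block-size motion estimation and optical flow, and the block placement is a simplification of the proposed motion vector interpolation and selection. All names and parameters here are illustrative.

```python
import numpy as np

def block_motion_field(f0, f1, bs=8, search=4):
    """Exhaustive-search block matching: for each bs x bs block of f0,
    find the (dy, dx) within +/-search that minimizes SAD against f1."""
    h, w = f0.shape
    mv = np.zeros((h // bs, w // bs, 2), dtype=int)
    for by in range(h // bs):
        for bx in range(w // bs):
            y0, x0 = by * bs, bx * bs
            blk = f0[y0:y0 + bs, x0:x0 + bs]
            best = (np.inf, 0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    if 0 <= y1 and y1 + bs <= h and 0 <= x1 and x1 + bs <= w:
                        sad = np.abs(blk - f1[y1:y1 + bs, x1:x1 + bs]).sum()
                        if sad < best[0]:
                            best = (sad, dy, dx)
            mv[by, bx] = best[1], best[2]
    return mv

def interpolate_midframe(f0, f1, mv, bs=8):
    """Motion-compensated average: each block keeps its f0 position and
    is averaged with its matched block in f1 (a simplification; proper
    FRUC would remap the vectors onto the missing frame's grid)."""
    mid = np.zeros_like(f0, dtype=float)
    for by in range(mv.shape[0]):
        for bx in range(mv.shape[1]):
            y0, x0 = by * bs, bx * bs
            dy, dx = mv[by, bx]
            mid[y0:y0 + bs, x0:x0 + bs] = 0.5 * (
                f0[y0:y0 + bs, x0:x0 + bs]
                + f1[y0 + dy:y0 + dy + bs, x0 + dx:x0 + dx + bs])
    return mid
```

On a frame pair related by a small translation, the estimated vectors recover the shift and the compensated average reproduces moving structures sharply, whereas a plain frame average would blur them; the blocking and distortion that appear when block matching fails are what the hybrid with optical flow is designed to suppress.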