Multimedia, which has been in existence for centuries, has greatly influenced the world. Its popularity in our daily lives owes much to the development of multimedia technologies, among which compression is of crucial importance. Advanced compression techniques effectively reduce the large amount of raw multimedia data that would otherwise consume considerable storage space or transmission bandwidth, and thus play an important role in multimedia processing systems. Modern multimedia compression systems combine many techniques so that compression efficiency is greatly improved; for example, block-matching motion estimation is widely employed to remove the temporal redundancy between frames and thereby achieve good compression. To decode the compressed bitstream, the syntax elements and side information of the coding tools must be transmitted precisely to the receiver. In current video coding standards, the overhead bits for signaling these tools, such as the representation of motion vector (MV) information, can occupy a significant portion of the total bit rate. The latest video coding standard, H.264, codes each MV predictively, but its single predictor is not always efficient. In this thesis, we propose a more efficient MV coding algorithm. First, several candidates, instead of the single predictor of the traditional standard, can be selected as the MV predictor, which improves prediction accuracy. Second, to reduce the bits consumed for signaling the predictor index, non-effective predictors are excluded by a technique called adaptive template matching. Furthermore, a guessing strategy is introduced so that in some situations the index bits can be avoided entirely. Experimental results indicate that the proposed method achieves a significant bit-rate reduction of 8% on average compared with the H.264 standard.
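The template-matching idea behind the predictor pruning can be sketched as follows: each candidate MV is scored by the sum of absolute differences (SAD) between an L-shaped template of already-reconstructed pixels around the current block and the corresponding template in the reference frame, and poorly scoring candidates are excluded. The template width, block size, candidate set, and synthetic frames below are illustrative assumptions, not the thesis's actual parameters.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two pixel arrays."""
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def template_cost(ref, recon, x, y, mv, tw=2, bs=4):
    """Score a candidate MV on the L-shaped template of reconstructed
    pixels above and to the left of the bs-by-bs block at (x, y)."""
    dx, dy = mv
    top_cur = recon[y - tw:y, x - tw:x + bs]
    top_ref = ref[y - tw + dy:y + dy, x - tw + dx:x + bs + dx]
    left_cur = recon[y:y + bs, x - tw:x]
    left_ref = ref[y + dy:y + bs + dy, x - tw + dx:x + dx]
    return sad(top_cur, top_ref) + sad(left_cur, left_ref)

# Toy demo: the "current" frame is the reference shifted by one pixel,
# so the true motion is (-1, -1).  Candidates stand in for, e.g., the
# median predictor and spatial-neighbour MVs (values illustrative).
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32), dtype=np.uint8)
recon = np.roll(ref, shift=(1, 1), axis=(0, 1))

candidates = [(0, 0), (1, 1), (-1, -1)]
x, y = 8, 8
costs = {mv: template_cost(ref, recon, x, y, mv) for mv in candidates}
best = min(costs, key=costs.get)  # the matching candidate (-1, -1) wins
```

Because the template uses only pixels the decoder has already reconstructed, the same ranking can be reproduced at the receiver, which is what allows non-effective predictors to be dropped without extra signaling.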
In multimedia processing systems, lossy compression is often used because of limited network bandwidth and storage capacity; in networked applications in particular, a high compression ratio is usually expected. As a result, the compressed multimedia signals suffer from a certain degree of distortion, such as blocking artifacts and loss of fidelity. Post-processing is a recommended means of reducing the artifacts introduced by compression and alleviating the conflict between bit-rate reduction and quality preservation. In this thesis, we give a systematic overview of how the post-processing problem is modeled and of the different kinds of solutions, to aid understanding of and research on this issue. Two methods are then proposed that attack the problem from different angles. The first method formulates post-processing as a convex optimization problem. Its objective function includes a gradient term that suppresses blocking artifacts, together with a term, derived from an analysis of the quantization error model, that minimizes the statistical distortion of the image; other image properties are extracted as constraints of the convex problem. Results show that the proposed method effectively suppresses blocking artifacts while preserving higher fidelity than other post-processing schemes. The second method is a novel approach that constructs a linear l2-norm estimator to minimize the mean square error of the output frame. It treats the reconstructed frames before and after deblocking as two noisy observations of the original frame and estimates the original by applying a Wiener filter to the two observations. This method produces an image with a more natural subjective appearance and higher objective quality than the reconstructed frames without post-processing.
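The two-observation Wiener idea can be sketched with a pixel-wise linear MMSE fusion: if the pre- and post-deblocking frames are modeled as the original plus independent zero-mean noises of known variance, the minimum-MSE linear combination weights each observation inversely to its noise variance. The noise model, variances, and frame sizes below are illustrative assumptions, not the thesis's actual filter design.

```python
import numpy as np

def wiener_combine(obs1, obs2, var1, var2):
    """Linear MMSE (Wiener-style) fusion of two noisy observations of
    the same signal, assuming independent zero-mean noises with known
    variances; weights are inversely proportional to each variance."""
    w1 = var2 / (var1 + var2)
    w2 = var1 / (var1 + var2)
    return w1 * obs1 + w2 * obs2

# Toy demo with synthetic frames (noise levels are illustrative):
# y1 stands in for the frame before deblocking, y2 for the frame after.
rng = np.random.default_rng(1)
x = rng.uniform(0, 255, size=(64, 64))
y1 = x + rng.normal(0.0, 4.0, x.shape)
y2 = x + rng.normal(0.0, 2.0, x.shape)
xhat = wiener_combine(y1, y2, 4.0 ** 2, 2.0 ** 2)

def mse(a, b):
    return float(np.mean((a - b) ** 2))
# Under this model the fused estimate's error variance is
# var1*var2/(var1+var2), lower than either observation's alone.
```

The attraction of this formulation is that the fused MSE, var1·var2/(var1+var2), is strictly smaller than both input variances whenever both are finite, so combining the two reconstructions can only help under the stated independence assumption.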