Image motion is an important cue used by both biological and artificial visual systems to extract information about the environment. Although several models have been proposed to account for local motion processing in humans and insects, the functional significance of each is not well understood. It is generally believed that biological systems use similar mechanisms to process different sources of information. We exploit this idea by establishing a relationship between motion detection and disparity detection. We review how binocular neurons can be characterized by their phase and position tuning, and show that a similar classification also holds for motion-sensitive neurons: Reichardt detectors can be regarded as position-tuned neurons, and motion energy detectors as phase-tuned neurons. We propose a new class of motion neurons that combines phase and position tuning, and measure the impulse and frequency responses of the resulting motion filters. Experimental results show that comparing the responses of two such cells allows us to discriminate velocity robustly and reliably around a reference velocity.

Next, we develop a probabilistic model to characterize the responses of the various detectors and their dependence on noise. This allows us to predict the performance of each detector analytically and provides insight into how to choose the filter parameters under different conditions.

We also develop efficient algorithms for realizing these motion detectors. Starting with the low-pass filter, we propose a triple-axis decomposition scheme that implements 2D anisotropic Gaussian filtering with three 1D filters oriented along designated directions. Building on the Gaussian filter, we construct Gabor filters and motion filters that run on a DSP with high accuracy and good performance.
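The phase-tuned (motion energy) detector mentioned above can be illustrated with a minimal sketch: a quadrature pair of spatiotemporal Gabor filters whose squared outputs are summed, giving a phase-invariant response that is larger for motion in the preferred direction. All filter parameters and function names below are illustrative choices, not values from this work.

```python
import numpy as np

def motion_energy(stimulus, fx=0.1, ft=0.1, sigma_x=4.0, sigma_t=4.0):
    """Motion energy of a space-time stimulus (x, t) for a detector
    tuned to rightward motion, using a quadrature pair of
    spatiotemporal Gabor filters. Parameters are illustrative."""
    xs = np.arange(-15, 16)
    ts = np.arange(-15, 16)
    X, T = np.meshgrid(xs, ts, indexing="ij")
    env = np.exp(-X**2 / (2 * sigma_x**2) - T**2 / (2 * sigma_t**2))
    # Quadrature pair oriented in x-t for rightward motion
    even = env * np.cos(2 * np.pi * (fx * X - ft * T))
    odd  = env * np.sin(2 * np.pi * (fx * X - ft * T))
    r_even = np.sum(even * stimulus)
    r_odd  = np.sum(odd * stimulus)
    return r_even**2 + r_odd**2   # squaring removes phase dependence

# A rightward-drifting grating excites the rightward-tuned detector
# far more than a leftward-drifting one.
xs = np.arange(-15, 16)
ts = np.arange(-15, 16)
X, T = np.meshgrid(xs, ts, indexing="ij")
rightward = np.cos(2 * np.pi * (0.1 * X - 0.1 * T))
leftward  = np.cos(2 * np.pi * (0.1 * X + 0.1 * T))
print(motion_energy(rightward) > motion_energy(leftward))  # True
```

Comparing the outputs of two such detectors tuned to opposite directions (an opponent energy stage) yields the kind of velocity discrimination around a reference velocity described above.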
Finally, we describe how populations of motion neurons tuned to different orientations, directions, and spatial positions can be combined to extract the focus of expansion from image sequences induced by camera translation through the environment. Using this algorithm, we develop a real-time active vision system that aligns a camera’s optical axis with its direction of translation.
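The geometric core of focus-of-expansion recovery can be sketched as follows: under pure camera translation every flow vector points radially away from the FoE, so each measurement contributes one linear constraint on the FoE coordinates, solvable in least squares. This is only an illustrative sketch of the underlying geometry, not the population-based algorithm described above; all names are hypothetical.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares focus-of-expansion estimate from a translational
    flow field. A flow vector (u, v) at (x, y) points away from the
    FoE (x0, y0), so (x - x0) * v - (y - y0) * u = 0, i.e.
    v * x0 - u * y0 = v * x - u * y, which is linear in (x0, y0)."""
    x, y = points[:, 0], points[:, 1]
    u, v = flows[:, 0], flows[:, 1]
    A = np.stack([v, -u], axis=1)   # coefficients of (x0, y0)
    b = v * x - u * y               # right-hand side
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic expanding flow with the FoE placed at (20, -5)
rng = np.random.default_rng(0)
pts = rng.uniform(-100, 100, size=(50, 2))
flows = 0.05 * (pts - np.array([20.0, -5.0]))  # radial outward flow
print(estimate_foe(pts, flows))  # recovers approximately (20, -5)
```

Aligning the optical axis with the direction of translation then amounts to steering the camera so that the estimated FoE sits at the image center.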