In mammals, binocular fusion takes place over a limited region, known as Panum's fusional area, which is much smaller than the range of binocular disparities encountered in natural scenes. This discrepancy suggests that there must be a mechanism for detecting whether the stimulus disparity lies inside or outside the range of preferred disparities of disparity-tuned neurons in the brain. However, this mechanism has received little attention to date. This thesis describes a biologically plausible approach to this detection problem. We compare the efficacy of several features computed from a population of disparity-tuned neurons as confidence measures that differentiate between in-range and out-of-range disparities. Interestingly, some intuitively appealing features, such as the average activation across the population and the difference between the peak and average responses, perform poorly. On the other hand, we find that normalizing the difference between the peak and average responses yields a reliable confidence measure. We validate our findings experimentally using both real and synthetic images, and theoretically using a probabilistic model of the population responses. This probabilistic model also enables us to derive a biologically plausible detector that combines multiple features to improve performance. Using this normalized feature, we also propose a new approach to estimating the stimulus disparity. The model computes the confidence for multiple neural populations and estimates the disparity from the population with the highest confidence. In contrast to the sequential approach of a previously proposed coarse-to-fine model, our model operates in parallel. On real-world stereograms, our approach outperforms the coarse-to-fine model and can identify occluded regions. Finally, we demonstrate the efficacy of the confidence measure in a stereo vision system.
The system can compute and evaluate the population responses of disparity-tuned neurons in real time, and it can control virtual vergence eye movements.
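The idea behind the confidence measure and the parallel disparity estimate can be sketched as follows. This is an illustrative reading, not the thesis implementation: the exact normalizer for the peak-minus-average feature is an assumption (here, the summed population activation), and the population responses are hypothetical toy data.

```python
import numpy as np

def confidence(responses):
    """Normalized peak-minus-average confidence for one population of
    disparity-tuned responses (sketch; normalizing by the summed
    activation is an assumption, not the thesis's exact formula)."""
    r = np.asarray(responses, dtype=float)
    total = r.sum()
    if total <= 0.0:
        return 0.0
    # A peaked response profile (in-range disparity) scores high;
    # a flat profile (out-of-range disparity) scores near zero.
    return (r.max() - r.mean()) / total

def estimate_disparity(populations, preferred_disparities):
    """Evaluate all populations in parallel and read out the disparity
    from the one with the highest confidence (hypothetical readout:
    the preferred disparity of the most active neuron)."""
    scores = [confidence(p) for p in populations]
    best = int(np.argmax(scores))
    winner = np.asarray(populations[best], dtype=float)
    return preferred_disparities[best][int(np.argmax(winner))]

# Toy example: an in-range stimulus produces a peaked profile,
# an out-of-range stimulus a flat one.
peaked = [0.1, 0.2, 1.0, 0.2, 0.1]
flat = [0.3, 0.3, 0.3, 0.3, 0.3]
```

With these toy profiles, `confidence(peaked)` exceeds `confidence(flat)`, so the peaked population would be selected for the disparity estimate.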