A unified model of joint development of disparity selectivity and vergence control

Authors Zhao, Yu
Issue Date 2013
Summary Reinforcement learning has been shown to be a prime candidate for a general mechanism by which animals and humans learn to progressively choose behaviorally better options. An important open problem is how the brain finds representations of the relevant sensory input to use for such learning. Extensive empirical data show that such representations are themselves learned throughout development. Thus, learning sensory representations for tasks and learning of task solutions occur simultaneously. Here we propose a novel framework for efficient coding and task learning in the full perception-action cycle and apply it to the learning of a disparity representation for vergence eye movements. Our approach integrates the learning of a generative model of sensory signals with the learning of a behavior policy under the identical objective of making the generative model work as effectively as possible. We show that this naturally leads to a self-calibrating system that learns to represent binocular disparity and to produce accurate vergence eye movements. Our framework is very general and could be useful in explaining the development of various sensorimotor behaviors and their underlying representations.
Keywords Binocular vision, vergence control, reinforcement learning, sparse coding, neural development
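The core idea of the shared objective can be illustrated with a minimal sketch (not from the thesis; all names, dimensions, and parameters are hypothetical): a sparse-coding dictionary reconstructs concatenated left/right image patches, and its reconstruction error plays two roles at once — it is the gradient signal for adapting the dictionary (efficient coding), and its negative would serve as the reward for a vergence policy (reinforcement learning), so both components pursue the same objective.

```python
import numpy as np

# Toy 1-D "binocular" setup; all sizes are illustrative assumptions.
rng = np.random.default_rng(0)
PATCH, BASES = 8, 12
DIM = 2 * PATCH  # left patch and right patch concatenated

def make_binocular_patch(disparity):
    """Left/right views of a random 1-D signal, right shifted by `disparity`."""
    signal = rng.standard_normal(PATCH + 6)
    left = signal[3:3 + PATCH]
    right = signal[3 + disparity:3 + disparity + PATCH]
    x = np.concatenate([left, right])
    return x / np.linalg.norm(x)

def sparse_code(D, x, k=3):
    """Greedy matching pursuit with k coefficients: a simple stand-in
    for the sparse-coding stage of the generative model."""
    residual = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        a = float(D[:, j] @ residual)
        coeffs[j] += a
        residual -= a * D[:, j]
    return coeffs, residual

# Random unit-norm dictionary columns.
D = rng.standard_normal((DIM, BASES))
D /= np.linalg.norm(D, axis=0)

# Efficient-coding stage: adapt D on fused (zero-disparity) input by a
# gradient step that reduces the reconstruction error.
for _ in range(2000):
    x = make_binocular_patch(disparity=0)
    coeffs, residual = sparse_code(D, x)
    D += 0.05 * np.outer(residual, coeffs)
    D /= np.linalg.norm(D, axis=0)

def recon_error(disparity, trials=200):
    """Mean reconstruction error; its negative would be the policy's reward."""
    return float(np.mean([
        np.linalg.norm(sparse_code(D, make_binocular_patch(disparity))[1])
        for _ in range(trials)]))

# The adapted model codes aligned input best, so a policy rewarded with
# the negative reconstruction error is driven toward correct vergence.
e0, e3 = recon_error(0), recon_error(3)
print(f"error at disparity 0: {e0:.3f}, at disparity 3: {e3:.3f}")
```

In this sketch the vergence policy itself is omitted; the point is only that after the dictionary adapts to fused input, misaligned (large-disparity) input reconstructs worse, so the reconstruction error provides a self-calibrating reward signal without any external teacher.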
Note Thesis (M.Phil.)--Hong Kong University of Science and Technology, 2013
Language English
Format Thesis
Copyrighted to the author. Reproduction is prohibited without the author’s prior written consent.