HKUST Institutional Repository > Computer Science and Engineering > CSE Technical Reports
Title: Auto-associative learning of on-line handwriting using recurrent neural networks
Authors: Yeung, Dit-Yan
Keywords: Grammar-based models; Recurrent neural networks; Temporal sequence processing
Issue Date: Nov-1994
Series/Report no.: Computer Science Technical Report; HKUST-CS94-38
Abstract: Traditionally, the parametric, grammar-based approach to modeling and recognizing temporal sequences with hidden Markov models (HMMs) involves a crucial step in which human experts must determine the appropriate model architecture a priori. This includes, among other things, fixing the number of states in the (probabilistic) grammar and the (probabilistic) transitions between them. As a long-term effort, we attempt to develop a more domain-independent, principled approach to modeling the grammatical structure of temporal sequences without knowing the topology of the underlying grammars in advance. This is achieved through an unsupervised learning process. In particular, a discrete-time recurrent neural network model called ASCOC, which we proposed previously, is trained to learn separately the dynamics of each individual embedded subgrammar (or subpattern) class. These subgrammar network models are trained in an auto-associative (or self-supervised) manner, similar in spirit to the principal component analysis (PCA) learning paradigm for feedforward neural networks, except that our focus here is on recurrent neural networks that model dynamical behavior.
In the pilot study presented in this paper, some issues of this new approach to temporal sequence processing are investigated in the domain of on-line handwriting modeling and recognition. Possible future research directions are also discussed.
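The auto-associative scheme the abstract describes can be sketched in code: one recurrent model per subpattern class, each trained to reconstruct its own input sequences, with an unseen sequence assigned to the class whose model reconstructs it with the lowest error. The record does not give ASCOC's actual architecture, so this is a minimal illustrative sketch assuming a generic tanh recurrent network with one-step (truncated) gradients; the class name, parameters, and training loop are all hypothetical.

```python
import numpy as np

class RecurrentAutoAssociator:
    """Minimal recurrent auto-associator (illustrative, not ASCOC).

    At each time step the network reads input x_t, updates a hidden
    state, and reconstructs x_t from that state.  One model is trained
    per subpattern class; an unseen sequence is assigned to the class
    whose model yields the lowest reconstruction error."""

    def __init__(self, n_in, n_hidden, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0, 0.1, (n_hidden, n_hidden))  # recurrent weights
        self.U = rng.normal(0, 0.1, (n_hidden, n_in))      # input weights
        self.V = rng.normal(0, 0.1, (n_in, n_hidden))      # readout weights
        self.lr = lr
        self.n_hidden = n_hidden

    def reconstruction_error(self, seq):
        """Mean squared error of reconstructing each x_t in seq."""
        h = np.zeros(self.n_hidden)
        err = 0.0
        for x in seq:
            h = np.tanh(self.W @ h + self.U @ x)
            err += np.sum((self.V @ h - x) ** 2)
        return err / len(seq)

    def train(self, sequences, epochs=200):
        # Self-supervised training: the target at each step is the
        # input itself.  Gradients are truncated at one time step for
        # simplicity (no full backpropagation through time).
        for _ in range(epochs):
            for seq in sequences:
                h = np.zeros(self.n_hidden)
                for x in seq:
                    h_prev = h
                    h = np.tanh(self.W @ h_prev + self.U @ x)
                    e = self.V @ h - x                 # reconstruction error
                    dh = (self.V.T @ e) * (1 - h**2)   # gradient through tanh
                    self.V -= self.lr * np.outer(e, h)
                    self.W -= self.lr * np.outer(dh, h_prev)
                    self.U -= self.lr * np.outer(dh, x)
```

As a usage illustration, two models trained on two different synthetic "pen-trajectory" classes should each reconstruct their own class better than the other model does, which is the basis of the recognition step.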
Appears in Collections: CSE Technical Reports
All items in this Repository are protected by copyright, with all rights reserved.