High-density discrete HMM with the use of scalar quantization indexing

Authors Mak, B.
Yeung, S.K.A.
Lai, Y.P.
Siu, M.
Issue Date 2005
Source 9th European Conference on Speech Communication and Technology, 2005, p. 2121-2124
Summary With the advances in semiconductor memory and the availability of very large speech corpora (hundreds to thousands of hours of speech), we revisit the use of the discrete hidden Markov model (DHMM) in automatic speech recognition. To estimate the discrete density in a DHMM state, the acoustic space is divided into bins and one simply counts the relative number of observations falling into each bin. With a very large speech corpus, we believe the number of bins can be greatly increased to obtain a much higher density than before; we call the new models high-density discrete hidden Markov models (HDDHMM). Our HDDHMM differs from the traditional DHMM in two aspects: firstly, the codebook has a size in the thousands or even tens of thousands; secondly, we propose a method based on scalar quantization indexing so that, for a d-dimensional acoustic vector, the discrete codeword can be determined in O(d) time. During recognition, the state probability computation is reduced to an O(1) table lookup. The new HDDHMM was tested on WSJ0 with a 5K vocabulary. Compared with a baseline 4-stream continuous-density HMM system with a WER of 9.71%, a 4-stream HDDHMM system converted from the former achieves a WER of 11.60%, with no distance or Gaussian computation.
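The scalar-quantization indexing idea from the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's exact scheme: the uniform per-dimension bins, the mixed-radix codeword combination, and all function names here are assumptions made for the sketch.

```python
import numpy as np

def make_scalar_quantizer(lo, hi, levels):
    """Build a per-dimension uniform scalar quantizer.

    Uniform bins are an illustrative assumption; the paper's actual
    bin boundaries need not be uniform.
    """
    lo = np.asarray(lo, dtype=float)
    hi = np.asarray(hi, dtype=float)
    levels = np.asarray(levels, dtype=int)
    step = (hi - lo) / levels

    def quantize(x):
        # O(d): one subtract/divide/clip per dimension,
        # with no distance or Gaussian computation.
        sub = ((np.asarray(x, dtype=float) - lo) / step).astype(int)
        return np.clip(sub, 0, levels - 1)

    return quantize

def codeword(sub_indices, levels):
    """Combine per-dimension sub-indices into one codeword (mixed radix),
    so the codebook size is the product of the per-dimension level counts."""
    cw = 0
    for i, n in zip(sub_indices, levels):
        cw = cw * int(n) + int(i)
    return cw
```

With, say, 10 levels in each of 4 dimensions, the codebook already holds 10,000 codewords, and a state's emission probability becomes a single indexed read such as `state_table[codeword(sub, levels)]`, i.e. an O(1) lookup during recognition.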
Language English
Format Conference paper
Files in this item:
File Size Format
interspeech2005hddhmm.pdf 84308 B Adobe PDF