Improving Eigenspace-based MLLR Adaptation by Kernel PCA

Authors Mak, Brian
Hsiao, Roger
Issue Date 2004
Source Proceedings of the International Conference on Spoken Language Processing, Jeju Island, South Korea, October 4-8, 2004, volume I, pages 13-16
Summary Eigenspace-based MLLR (EMLLR) adaptation has been shown to be effective for fast speaker adaptation. It applies the basic idea of eigenvoice adaptation, deriving a small set of eigenmatrices using principal component analysis (PCA). The MLLR adaptation transformation of a new speaker is then a linear combination of the eigenmatrices. In this paper, we investigate the use of kernel PCA to find the eigenmatrices in the kernel-induced high-dimensional feature space so as to exploit possible nonlinearity in the transformation supervector space. In addition, a composite kernel is used to preserve the row information in the transformation supervector, which would otherwise be lost during the mapping to the kernel-induced feature space. We call our new method kernel eigenspace-based MLLR (KEMLLR) adaptation. On an RM adaptation task, we find that KEMLLR adaptation may reduce the word error rate of a speaker-independent model by 11%, and outperforms both MLLR and EMLLR adaptation.
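The core idea of the summary — vectorize each training speaker's MLLR transform into a supervector, then apply kernel PCA to obtain eigen-directions in a kernel-induced feature space — can be sketched in a few lines. The sketch below is a minimal illustration using a plain RBF kernel and toy random supervectors; it does not reproduce the paper's composite kernel or its exact training setup, and all sizes and names here are hypothetical.

```python
import numpy as np

# Toy stand-in for the training speakers' MLLR transformation
# supervectors (sizes are illustrative, not from the paper).
rng = np.random.default_rng(0)
n_speakers, dim = 10, 6
S = rng.standard_normal((n_speakers, dim))

def rbf_kernel(X, gamma=0.5):
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

K = rbf_kernel(S)

# Center the kernel matrix in the induced feature space.
n = K.shape[0]
one = np.full((n, n), 1.0 / n)
Kc = K - one @ K - K @ one + one @ K @ one

# Eigendecompose; the leading eigenvectors play the role of the
# "eigenmatrices" found by kernel PCA in feature space.
w, V = np.linalg.eigh(Kc)
idx = np.argsort(w)[::-1][:3]                       # keep 3 components
alphas = V[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12))

# Coordinates of each training speaker along the kernel principal
# components; a new speaker would be adapted by estimating such
# combination weights from its adaptation data.
Z = Kc @ alphas
print(Z.shape)  # → (10, 3)
```

In the paper's actual method, the kernel is a composite one that treats each row of the transformation matrix separately, precisely so that row structure survives the implicit mapping; the plain RBF kernel above discards that structure.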
Language English
Format Conference paper
Files in this item:
File: icslp2004kemllr.pdf (90,518 B, Adobe PDF)