

Title: An LLR-based technique of frame selection for GMM-based text-independent speaker identification
Authors: Tsoi, Pang Kuen
Fung, Pascale N.
Keywords: Speaker recognition
Frame selection
Log Likelihood Ratio (LLR)
GMM-based text-independent speaker identification system
Issue Date: 2000
Citation: Proceedings 6th International Conference on Spoken Language Processing (ICSLP 2000), 16-20 October 2000, Beijing, China
Abstract: In speaker recognition systems, frame selection, which determines which frames of the test utterance carry useful speaker information and retains only those, can be used to increase recognition accuracy. In this paper, we present a new approach to frame selection using the Log Likelihood Ratio (LLR), based on the idea that if a frame contains speaker information, the log-likelihood score of the corresponding speaker model will be much larger than that of its competing models. For each frame, we therefore compute the LLR between the largest and the second-largest score over the different speaker models and use it as a reference: frames with a small LLR are rejected, and frames with a large LLR are kept. The algorithm is implemented on a GMM-based text-independent speaker identification system. We compare it with another frame selection approach based on the Jensen Difference (JD). Experiments show that the JD approach reduces the error by about 39.34%, while our LLR approach reduces the error by about 46.32%.
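The selection rule described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes diagonal-covariance GMM speaker models given as (weights, means, variances) tuples and a hypothetical LLR threshold; per-frame log-likelihoods are scored under every model, the per-frame LLR is the gap between the best and second-best model, and only frames above the threshold contribute to identification.

```python
import numpy as np

def gmm_framewise_loglik(frames, weights, means, variances):
    """Per-frame log p(x_t | model) for a diagonal-covariance GMM.

    frames: (T, D); weights: (M,); means, variances: (M, D).
    Returns a (T,) array via log-sum-exp over the M mixture components.
    """
    diff = frames[:, None, :] - means[None, :, :]                    # (T, M, D)
    log_comp = (-0.5 * np.sum(diff ** 2 / variances
                              + np.log(2.0 * np.pi * variances), axis=2)
                + np.log(weights))                                   # (T, M)
    m = log_comp.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(log_comp - m).sum(axis=1, keepdims=True)))[:, 0]

def llr_frame_selection(frames, models, threshold):
    """LLR-based frame selection followed by GMM speaker identification.

    models: list of (weights, means, variances) tuples, one per speaker.
    threshold: hypothetical LLR cutoff; frames whose best-vs-runner-up
    log-likelihood gap falls below it are discarded.
    Returns (identified speaker index, boolean keep mask over frames).
    """
    scores = np.stack([gmm_framewise_loglik(frames, *m) for m in models])  # (S, T)
    top2 = np.sort(scores, axis=0)[-2:]          # runner-up and best score per frame
    llr = top2[1] - top2[0]                      # per-frame LLR (best minus second best)
    keep = llr >= threshold
    # Accumulate scores over the selected frames only, then pick the best speaker.
    speaker = int(np.argmax(scores[:, keep].sum(axis=1)))
    return speaker, keep
```

In this sketch a frame where all speaker models score similarly (small LLR) is treated as uninformative and dropped, matching the paper's intuition that such frames dilute the evidence for the true speaker.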
Rights: We would like to give credit to International Speech Communication Association for granting us permission to repost this article.
Appears in Collections:HLTC Conference Papers
ECE Conference Papers

Files in This Item:

File: i00_2274.pdf
Size: 187Kb
Format: Adobe PDF

All items in this Repository are protected by copyright, with all rights reserved.