2004 IEEE International Conference on Acoustics, Speech, and Signal Processing

Abstract

Speaker adaptive training (SAT), which reduces inter-speaker variability, is combined with eigenspace-based maximum likelihood linear regression (eigenMLLR) adaptation, which exploits prior knowledge about the test speaker's linear transforms. During training, SAT generates a set of speaker-independent (SI) Gaussian parameters, along with matched speaker-dependent transforms for every speaker in the training set. A set of regression-class-dependent eigen transforms is then derived by singular value decomposition (SVD). Whereas the test speaker's linear transforms are normally obtained with MLLR during recognition, in this work they are instead modeled as a linear combination of the decomposed eigen transforms. Experiments on large vocabulary conversational speech recognition (LVCSR) material from the Switchboard corpus show that this strategy outperforms ML-SAT and significantly reduces the number of parameters needed (an 87% reduction is achieved), while still effectively capturing the essential variation between speakers.
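The core eigenMLLR idea in the abstract can be sketched numerically: stack the training speakers' MLLR transforms as vectors, take an SVD to obtain orthonormal eigen transforms, and represent a new speaker's transform as a weighted sum of the top few. This is a minimal illustration with made-up sizes and random data; the paper estimates the combination weights by maximum likelihood, whereas the sketch below uses a least-squares fit purely for demonstration.

```python
import numpy as np

# Hypothetical sizes: S training speakers, each with a d x (d+1) MLLR transform.
rng = np.random.default_rng(0)
S, d = 20, 4
transforms = rng.standard_normal((S, d, d + 1))

# Stack each speaker's transform as one row vector.
X = transforms.reshape(S, -1)                      # shape (S, d*(d+1))

# SVD of the stacked transforms; the rows of Vt are orthonormal
# "eigen transforms" spanning the inter-speaker transform space.
U, sing, Vt = np.linalg.svd(X, full_matrices=False)

# Keep only the top-K eigen transforms -- a large parameter
# reduction when K is much smaller than S.
K = 3
eigen_transforms = Vt[:K]                          # shape (K, d*(d+1))

# A test speaker's transform is modeled as a linear combination of the
# eigen transforms.  Least-squares weights stand in here for the
# maximum-likelihood estimate used in the paper.
w_test = rng.standard_normal(d * (d + 1))
weights, *_ = np.linalg.lstsq(eigen_transforms.T, w_test, rcond=None)
w_approx = (weights @ eigen_transforms).reshape(d, d + 1)
```

With only K weights per speaker instead of a full d x (d+1) matrix per regression class, the per-speaker parameter count drops sharply, which is the kind of reduction the abstract reports.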
