2011 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)

Abstract

This paper presents an extensive acoustic analysis of utterances recorded by a single Czech female speaker in various expressive speaking styles. The expressive utterances were recorded as a dialogue between a human and a computer on a given topic. The speech of the human speaker was captured, carefully transcribed by human annotators, and further annotated through a listening test. The aim of the annotation was to label each utterance with the corresponding speaking style (referred to as a communicative function). Based on this labeling, the expressive recordings were classified into groups and acoustically analyzed. In particular, we focused on features assumed to influence the perception of speech, such as F0, phoneme duration, formant frequencies, and energy. We aimed to reveal acoustic differences between the speaking styles that could help improve expressive speech synthesis in a given limited domain.
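The abstract does not specify the tooling used for the acoustic measurements. As a rough illustration of the kinds of per-utterance features it mentions (F0, energy, formant frequencies), the sketch below uses the Praat-based parselmouth package; the library choice and the file name "utterance.wav" are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only -- the paper does not specify its analysis tooling.
# Assumes the praat-parselmouth package and a hypothetical file "utterance.wav".
import numpy as np
import parselmouth

snd = parselmouth.Sound("utterance.wav")

# F0 contour in Hz (unvoiced frames are reported as 0 and dropped here)
pitch = snd.to_pitch()
f0 = pitch.selected_array['frequency']
f0 = f0[f0 > 0]
print(f"F0 mean={f0.mean():.1f} Hz, std={f0.std():.1f} Hz, range={f0.max() - f0.min():.1f} Hz")

# Energy: short-term intensity contour in dB
intensity = snd.to_intensity()
print(f"intensity mean={intensity.values.mean():.1f} dB")

# Formant frequencies F1-F3 (Burg's method), sampled at the pitch frame times
formants = snd.to_formant_burg()
for n in (1, 2, 3):
    vals = np.array([formants.get_value_at_time(n, t) for t in pitch.xs()])
    vals = vals[~np.isnan(vals)]
    print(f"F{n} mean={vals.mean():.0f} Hz")
```

Phoneme durations, the remaining feature named in the abstract, would additionally require a forced alignment of the recordings against their transcriptions, which this sketch omits.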