1 Introduction
In the context of speech synthesis, it may be advantageous to encode speech signals by mathematical representations in order to maintain high speech quality with minimal artifacts and little loss of naturalness (Dutoit, 1997; Taylor, 2009). In current speech synthesis systems, several signal processing techniques for speech representation have been developed to generate natural-sounding speech. The vocoder-based representation invented by Dudley (Dudley, 1940) in the 1930s was the first attempt to represent the speech signal by an excitation sound source (periodic or noise) and a vocal tract filter (a bank of analog band-pass filters). This representation corresponds directly to the speech production mechanism, and its main objective was efficient transmission and storage of voice signals. In the 1960s, another technique for encoding and representing speech, referred to as the phase vocoder, was suggested by Flanagan and Golden (Flanagan & Golden, 1966). The basic idea behind this approach is to represent speech in terms of its short-time amplitude and phase. Later, a digital formulation of the phase vocoder was introduced by Portnoff (Portnoff, 1981), in which a speech waveform is represented by its short-time Fourier transform (STFT); computational efficiency was achieved by using the fast Fourier transform (FFT) algorithm.
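To illustrate the STFT representation underlying the digital phase vocoder, the following minimal sketch (using NumPy; the frame length, hop size, and Hann window are illustrative choices, not values from the cited works) splits a test tone into overlapping windowed frames and extracts the short-time amplitude and phase of each frame.

```python
import numpy as np

def stft(x, frame_len=256, hop=64):
    """Short-time Fourier transform: windowed, overlapping FFT frames."""
    win = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * win
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)  # one complex spectrum per frame

# Test signal: a 440 Hz tone sampled at 8 kHz (hypothetical example input).
fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)

X = stft(x)
mag, phase = np.abs(X), np.angle(X)  # short-time amplitude and phase
```

In a phase vocoder, `mag` and `phase` would then be modified (e.g., for time-scale or pitch modification) before resynthesis via the inverse STFT with overlap-add.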
In the late 1950s, Fant developed the well-known linear source-system model of speech production (Fant, 1960). In this model, the voiced parts of speech are generated by applying a periodic impulse train to a linear, slowly time-varying system (the glottal and vocal tract model), while the unvoiced parts are generated by exciting the system with random noise. The vocal tract filter is assumed to be an all-pole model whose parameters are estimated via linear prediction (LP) analysis. The LP method has been one of the most powerful speech analysis-synthesis techniques because it is simple, fast, and has a limited number of parameters. It has been the predominant technique for estimating the basic speech parameters, e.g., pitch, formants, spectra, and vocal tract area functions, and for representing speech for low-bit-rate transmission or storage. The method has also been successfully applied in speech synthesis (Atal & Hanauer, 1971). The main drawback of LP analysis-synthesis is that, owing to its parametric nature, the synthesized speech is inherently “buzzy”, which degrades quality. In addition, phonemes such as nasals, whose spectra contain anti-formants (spectral zeros), cannot be captured by an all-pole model. To improve the quality of LP synthesis, considerable effort has been devoted to more elaborate and suitable excitation models, leading to variants of the basic LP scheme such as Multi-pulse LP Coding (MPLPC) (Atal, 1992) and Code-Excited LP (CELP) (de Campos & Gouvea, 1996).
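The all-pole LP coefficients are commonly estimated with the autocorrelation method and the Levinson-Durbin recursion. The sketch below (NumPy; the `lpc` helper and the AR(1) demo are illustrative, not taken from the cited works) fits an order-1 all-pole model to a synthetic signal with a known pole and recovers the coefficient.

```python
import numpy as np

def lpc(x, order):
    """Estimate LP coefficients a (a[0] = 1) by the autocorrelation
    method with the Levinson-Durbin recursion; returns (a, residual
    energy). The all-pole synthesis filter is 1 / sum(a[k] z^-k)."""
    n = len(x)
    r = np.correlate(x, x, mode="full")[n - 1 : n + order]  # r[0..order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient from the current prediction error.
        k = -(r[i] + a[1:i] @ r[1:i][::-1]) / err
        # Update coefficients: a[j] += k * a[i - j] for j = 1..i.
        a[1 : i + 1] = a[1 : i + 1] + k * a[:i][::-1]
        err *= 1.0 - k * k
    return a, err

# Demo: an AR(1) process x[n] = 0.9 x[n-1] + e[n] (hypothetical data);
# the order-1 LP fit should recover a ≈ [1, -0.9].
rng = np.random.default_rng(0)
e = rng.standard_normal(20000)
x = np.zeros_like(e)
for n in range(1, len(x)):
    x[n] = 0.9 * x[n - 1] + e[n]

a, err = lpc(x, order=1)
```

For real speech, the same recursion is applied per windowed frame (typically with an order of 10-16 at 8 kHz), and the residual `err` indicates how much of the frame energy the all-pole model fails to explain.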