High-quality Speech Synthesis Using Super-resolution Mel-Spectrogram

Evgeniy Pavlovskiy, Leyuan Sheng, Dong-Yan Huang

Research output: Working paper


In speech synthesis and speech enhancement systems, mel-spectrograms must be precise acoustic representations. However, generated spectrograms tend to be over-smoothed and cannot produce high-quality synthesized speech. Inspired by image-to-image translation, we address this problem with a learning-based post-filter that combines Pix2PixHD and ResUnet to reconstruct mel-spectrograms with super-resolution. From the resulting super-resolution spectrogram networks, we generate enhanced spectrograms that yield high-quality synthesized speech. Our proposed model improves the mean opinion score (MOS) from 3.29 to 3.71 with the Griffin-Lim vocoder and from 3.84 to 4.01 with the WaveNet vocoder.
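The "super-resolution" in the abstract refers to mapping a coarse mel-spectrogram to a higher-resolution one. As a minimal illustration of that mapping (not the paper's method, which learns it with a Pix2PixHD + ResUnet post-filter), the sketch below upsamples a toy mel-spectrogram along the frequency axis with plain linear interpolation; the array shapes and band counts are assumptions for illustration only:

```python
import numpy as np

def upsample_mel(mel, factor=2):
    """Linearly interpolate a mel-spectrogram along the frequency axis.

    A toy stand-in for the super-resolution mapping: the paper's
    learned post-filter replaces this naive interpolation with a
    network that also removes over-smoothing artifacts.
    """
    n_mels, n_frames = mel.shape
    src = np.arange(n_mels)
    dst = np.linspace(0, n_mels - 1, n_mels * factor)
    # Interpolate each time frame's frequency column independently.
    return np.stack(
        [np.interp(dst, src, mel[:, t]) for t in range(n_frames)],
        axis=1,
    )

coarse = np.random.rand(80, 100)  # assumed 80-band mel-spectrogram, 100 frames
fine = upsample_mel(coarse, factor=2)
print(fine.shape)  # (160, 100)
```

A learned post-filter improves on this by predicting detail that interpolation cannot recover, which is what drives the MOS gains reported above.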
Original language: English
Number of pages: 6
Status: Published - 3 Dec 2019
