Recognition-Synthesis Based Non-Parallel Voice Conversion with Adversarial Learning

This paper presents an adversarial learning method for recognition-synthesis based non-parallel voice conversion. A recognizer transforms acoustic features into linguistic representations, while a synthesizer recovers output features from the recognizer outputs together with the speaker identity. By separating speaker characteristics from the linguistic representations, voice conversion can be achieved by replacing the speaker identity with that of the target speaker. In our proposed method, a speaker adversarial loss is adopted so that the recognizer extracts speaker-independent linguistic representations. Furthermore, discriminators are introduced and a generative adversarial network (GAN) loss is used to prevent the predicted features from being over-smoothed. To train the model parameters, we design a strategy of pre-training on a multi-speaker dataset and then fine-tuning on the source-target speaker pair. Our method achieved higher similarity than the baseline model that obtained the best performance in Voice Conversion Challenge 2018.
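The speaker adversarial loss described above is commonly implemented with a gradient reversal layer: the speaker classifier is trained to identify the speaker from the linguistic representations, while reversed gradients push the recognizer to discard speaker information. The sketch below illustrates this mechanism in PyTorch; the class and variable names are illustrative and not taken from the paper.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; scales gradients by -lam in the backward pass.

    Placed between the recognizer and the speaker classifier, this makes the
    recognizer *maximize* the classifier's loss, encouraging
    speaker-independent linguistic representations.
    """
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the recognizer;
        # no gradient is needed for the scalar lam.
        return -ctx.lam * grad_output, None

# Toy check: gradients through the layer come out negated and scaled.
x = torch.ones(3, requires_grad=True)
y = GradReverse.apply(x, 0.5)
y.sum().backward()
print(x.grad)  # each element is -0.5
```

In a full training loop, the speaker classifier itself is updated with an ordinary cross-entropy loss; only the gradient reaching the recognizer is reversed.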

Conclusions

In this paper, a method for non-parallel voice conversion is proposed. Our model is based on the recognition-synthesis framework, and a speaker classifier module is introduced for speaker adversarial learning. We also incorporate GAN losses to boost the quality of the converted voice. The model is first pre-trained on a multi-speaker dataset and then fine-tuned on the desired conversion pair. Both objective and subjective evaluations demonstrated the effectiveness of our method. In future work, we will attempt to further improve performance by pre-training on larger datasets.
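The GAN losses used to counter over-smoothing follow the standard adversarial recipe: a discriminator scores real target-speaker features against synthesizer outputs, and the synthesizer is trained to fool it. The following PyTorch sketch uses a least-squares GAN objective as one common choice; the tiny discriminator, feature dimension, and loss formulation are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical tiny discriminator over 80-dim acoustic feature frames.
disc = nn.Sequential(nn.Linear(80, 32), nn.LeakyReLU(0.2), nn.Linear(32, 1))

real = torch.randn(4, 80)  # stand-in for target-speaker acoustic features
fake = torch.randn(4, 80)  # stand-in for synthesizer outputs

mse = nn.MSELoss()

# Discriminator: push real scores toward 1, fake scores toward 0.
# detach() stops discriminator gradients from reaching the synthesizer.
d_loss = mse(disc(real), torch.ones(4, 1)) + mse(disc(fake.detach()), torch.zeros(4, 1))

# Generator (synthesizer): push fake scores toward 1 to fool the discriminator.
g_loss = mse(disc(fake), torch.ones(4, 1))

print(d_loss.item(), g_loss.item())
```

In practice this adversarial term is added to the usual feature reconstruction loss with a weighting coefficient, so the synthesizer trades off fidelity against the sharpness the discriminator enforces.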
