VQVC+: One-Shot Voice Conversion by Vector Quantization and U-Net architecture

Voice conversion (VC) is the task of transforming the source speaker's timbre, accent, and tone in an utterance into those of another speaker while preserving the linguistic content. It remains challenging, especially in the one-shot setting. Auto-encoder-based VC methods disentangle the speaker and the content in the input speech without being given the speaker's identity, so they can generalize to unseen speakers. This disentanglement capability is achieved by vector quantization (VQ), adversarial training, or instance normalization (IN). However, imperfect disentanglement can harm the quality of the output speech. In this work, to further improve audio quality, we use the U-Net architecture within an auto-encoder-based VC system. We find that a strong information bottleneck is necessary to leverage the U-Net architecture, and that a VQ-based method, which quantizes the latent vectors, can serve this purpose. Objective and subjective evaluations show that the proposed method performs well in both audio naturalness and speaker similarity.
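The abstract describes quantizing latent vectors as the information bottleneck. The following is a minimal sketch, not the authors' implementation, of such a VQ bottleneck: each latent frame is replaced by its nearest entry in a learned codebook, which limits how much information can pass through to the decoder. The codebook size and feature dimension are illustrative assumptions.

```python
import torch
import torch.nn as nn


class VQBottleneck(nn.Module):
    """Nearest-neighbor vector quantization of encoder latents (sketch)."""

    def __init__(self, num_codes: int = 64, dim: int = 128):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(num_codes, dim))

    def forward(self, z: torch.Tensor):
        # z: (batch, time, dim) latent vectors from the encoder
        batch = z.size(0)
        codebook = self.codebook.unsqueeze(0).expand(batch, -1, -1)
        dist = torch.cdist(z, codebook)          # (batch, time, num_codes)
        codes = dist.argmin(dim=-1)              # index of nearest code per frame
        z_q = self.codebook[codes]               # quantized latents
        # Straight-through estimator so gradients still reach the encoder
        z_q = z + (z_q - z).detach()
        return z_q, codes
```

In practice a VQ layer like this is usually trained with an additional commitment term alongside the reconstruction loss, so that the encoder outputs stay close to the codebook entries.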

Conclusions

In this paper, we present a new model for one-shot VC. It combines the U-Net architecture with VQ layers to achieve high-quality conversion. With this carefully designed architecture, the proposed model separates speaker information from content information effectively using only the self-reconstruction loss. The objective results verify the strong disentanglement achieved by our model, while the subjective results support our conjecture that the skip-connection design is beneficial for high-quality conversion.
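To make the interplay between the skip connections and the VQ bottleneck concrete, here is a rough conversion-time sketch under assumptions not spelled out above: the quantized latents of the source utterance supply the content, while the time-averaged residual between the target utterance's encoder output and its quantized version is treated as a speaker vector and added back through a skip-connection-style path. The `encoder`, `vq`, and `decoder` callables are hypothetical placeholders.

```python
import torch


def convert(encoder, vq, decoder, source_mel, target_mel):
    # Encode both utterances into latent sequences of shape (1, time, dim)
    z_src = encoder(source_mel)
    z_tgt = encoder(target_mel)

    # Content: quantized latents of the source speech
    content, _ = vq(z_src)

    # Speaker: time-averaged residual of the target speech (assumption: the
    # part removed by quantization mostly carries timbre / speaker identity)
    q_tgt, _ = vq(z_tgt)
    speaker = (z_tgt - q_tgt).mean(dim=1, keepdim=True)

    # Skip-connection-style fusion: add the speaker vector back onto the
    # content codes before decoding to a converted spectrogram
    return decoder(content + speaker)
```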
