We propose a voice-conversion (VC) method that learns a mapping from source to target speech without relying on parallel data. The proposed method is general purpose and high quality, and it requires no extra data, modules, or alignment procedures. It also avoids the over-smoothing that occurs in many conventional statistical model-based VC methods. Our method, called CycleGAN-VC, uses a cycle-consistent adversarial network (CycleGAN) with gated convolutional neural networks (CNNs) and an identity-mapping loss. A CycleGAN learns forward and inverse mappings simultaneously using adversarial and cycle-consistency losses, which makes it possible to find an optimal pseudo pair from unpaired data. Furthermore, the adversarial loss reduces over-smoothing of the converted feature sequence. We configure the CycleGAN with gated CNNs and train it with an identity-mapping loss, which allows the mapping function to capture sequential and hierarchical structures while preserving linguistic information. We evaluated our method on a parallel-data-free VC task. An objective evaluation showed that the converted feature sequence was close to natural in terms of global variance (GV) and modulation spectra (MS). A subjective evaluation showed that the quality of the converted speech was comparable to that obtained with a Gaussian mixture model (GMM)-based method trained under advantageous conditions, i.e., with parallel data and twice the amount of data.
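For concreteness, the combined objective can be sketched in standard CycleGAN notation, where X and Y denote the source and target feature spaces, G_{X->Y} and G_{Y->X} the forward and inverse mappings, D_X and D_Y the discriminators, and lambda_cyc and lambda_id trade-off weights; this notation and the weights are illustrative rather than prescribed by the text above:

\mathcal{L}_{adv}(G_{X \to Y}, D_Y) = \mathbb{E}_{y \sim P(y)}[\log D_Y(y)] + \mathbb{E}_{x \sim P(x)}[\log(1 - D_Y(G_{X \to Y}(x)))]

\mathcal{L}_{cyc} = \mathbb{E}_{x \sim P(x)}[\| G_{Y \to X}(G_{X \to Y}(x)) - x \|_1] + \mathbb{E}_{y \sim P(y)}[\| G_{X \to Y}(G_{Y \to X}(y)) - y \|_1]

\mathcal{L}_{id} = \mathbb{E}_{y \sim P(y)}[\| G_{X \to Y}(y) - y \|_1] + \mathbb{E}_{x \sim P(x)}[\| G_{Y \to X}(x) - x \|_1]

\mathcal{L}_{full} = \mathcal{L}_{adv}(G_{X \to Y}, D_Y) + \mathcal{L}_{adv}(G_{Y \to X}, D_X) + \lambda_{cyc} \mathcal{L}_{cyc} + \lambda_{id} \mathcal{L}_{id}

Under this reading, the forward mapping is trained so that converted features fool D_Y (adversarial term), reconstruct the input after a round trip through both mappings (cycle-consistency term), and leave features already in the target domain unchanged (identity-mapping term), which is what allows linguistic information to be preserved without parallel data.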
Discussion and Conclusions
We proposed a parallel-data-free VC method called CycleGAN-VC, which uses a CycleGAN with gated CNNs and an identity-mapping loss. This method can learn a sequence-based mapping function without any extra data, modules, or time alignment procedures. An objective evaluation showed that the mel-cepstral coefficient (MCEP) sequences obtained with our method are close to the target in terms of GV and MS. A subjective evaluation showed that the quality of the converted speech was comparable to that obtained with the GMM-based method trained under advantageous conditions, i.e., with parallel data and twice the amount of data. However, a gap remains between the original and converted speech. To close this gap, we plan to apply our method to other features, such as STFT spectrograms, and to other speech-synthesis frameworks, such as vocoder-free VC. Furthermore, since the proposed method is a general framework, possible future work includes applying it to other VC applications.
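As a minimal illustration of the gated CNNs referred to above, the sketch below shows a 1-D gated convolution (a gated linear unit over a feature sequence). It assumes PyTorch; the class name, layer sizes, and feature dimensionality are illustrative and not taken from this text.

import torch
import torch.nn as nn

class GatedConv1d(nn.Module):
    """Gated 1-D convolution: output = conv(x) * sigmoid(gate(x))."""
    def __init__(self, in_channels, out_channels, kernel_size=5):
        super().__init__()
        padding = kernel_size // 2  # keep the time dimension unchanged
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size, padding=padding)
        self.gate = nn.Conv1d(in_channels, out_channels, kernel_size, padding=padding)

    def forward(self, x):
        # x: (batch, channels, time); the sigmoid gate controls which features
        # propagate through the layer, helping the mapping capture sequential
        # and hierarchical structure in the feature sequence.
        return self.conv(x) * torch.sigmoid(self.gate(x))

# Example usage: a batch of 24-dimensional feature sequences, 128 frames long.
x = torch.randn(8, 24, 128)
y = GatedConv1d(24, 64)(x)  # -> shape (8, 64, 128)

Stacking such gated blocks in the generators gives a data-driven gating mechanism in place of the fixed activations used in plain CNNs, which is the design choice motivating their use here.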