Existing objective evaluation metrics for voice conversion (VC) are not always correlated with human perception. Therefore, training VC models with such criteria may not effectively improve the naturalness and similarity of converted speech. In this paper, we propose deep learning-based assessment models to predict human ratings of converted speech. We adopt convolutional and recurrent neural network architectures to build a mean opinion score (MOS) predictor, termed MOSNet. The proposed models are tested on the large-scale listening test results of the Voice Conversion Challenge (VCC) 2018. Experimental results show that the scores predicted by MOSNet are highly correlated with human MOS ratings at the system level and fairly correlated at the utterance level. In addition, we modified MOSNet to predict similarity scores, and the preliminary results show that the predicted scores are also fairly correlated with human ratings. These results confirm that the proposed models can serve as a computational evaluator of the MOS of VC systems, thereby reducing the need for expensive human rating.
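The frame-level prediction idea described above can be illustrated with a toy forward pass. The sketch below is a simplified stand-in, not the authors' actual architecture: a single dense projection with ReLU replaces the convolutional and recurrent layers, a linear head produces one score per frame, and the frame scores are mean-pooled and squashed into the 1-to-5 MOS range. All weights and shapes here are hypothetical.

```python
import numpy as np

def predict_mos(spectrogram, proj_w, head_w):
    """Toy MOS predictor: frame-wise projection + ReLU (standing in for
    the conv/recurrent stack), a linear score per frame, then mean-pooling
    to a single utterance-level score squashed into the [1, 5] MOS range."""
    # spectrogram: (frames, bins); proj_w: (bins, hidden); head_w: (hidden,)
    hidden = np.maximum(spectrogram @ proj_w, 0.0)  # frame-wise features
    frame_scores = hidden @ head_w                  # one score per frame
    utterance_score = frame_scores.mean()           # pool over time
    return 1.0 + 4.0 / (1.0 + np.exp(-utterance_score))  # map into (1, 5)

# Hypothetical input: 120 frames of a 257-bin magnitude spectrogram.
rng = np.random.default_rng(0)
spec = rng.standard_normal((120, 257))
proj_w = rng.standard_normal((257, 32)) * 0.05
head_w = rng.standard_normal(32) * 0.05
score = predict_mos(spec, proj_w, head_w)
```

In the actual model, training on per-frame scores (with the utterance MOS broadcast to every frame) acts as a regularizer; this sketch only shows the pooling structure, not that objective.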
Conclusions
This paper presented a deep learning-based quality assessment model for the VC task, referred to as MOSNet. Based on large-scale human perceptual MOS evaluation results from VCC 2018, our experimental results show that MOSNet yields predictions with a high correlation to human ratings at the system level and a fair correlation at the utterance level. We demonstrated the generalization capability of MOSNet by applying the model trained on the VCC 2018 data to the VCC 2016 data. Moreover, with a slight modification, MOSNet can fairly well predict the similarity scores of the converted speech relative to the target speech. To the best of our knowledge, the proposed MOSNet is the first end-to-end objective speech assessment model for VC. In future work, we will incorporate human perception theory and refine the model architecture and objective function of MOSNet to attain an improved correlation with human ratings.
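The distinction between system-level and utterance-level correlation reported above can be made concrete. At the utterance level, predicted and human scores are correlated directly per utterance; at the system level, each VC system's scores are averaged first, which smooths out per-utterance rating noise and typically yields a higher correlation. The following sketch uses hypothetical scores, not the VCC 2018 data:

```python
import statistics

def pearson(x, y):
    """Pearson linear correlation coefficient between two equal-length lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def system_level_corr(pred_by_system, true_by_system):
    """Average each system's utterance scores first, then correlate
    the per-system means (one point per VC system)."""
    pred_means = [statistics.fmean(scores) for scores in pred_by_system]
    true_means = [statistics.fmean(scores) for scores in true_by_system]
    return pearson(pred_means, true_means)

# Hypothetical example: three systems, two utterances each.
pred = [[3.1, 3.4], [2.0, 2.3], [4.2, 4.0]]
true = [[3.0, 3.6], [2.4, 1.9], [4.5, 3.9]]
utt_corr = pearson([s for sys in pred for s in sys],
                   [s for sys in true for s in sys])
sys_corr = system_level_corr(pred, true)
```

Squared-error-based metrics (e.g. MSE) are also commonly reported alongside the correlation, since a predictor can be well correlated yet biased in absolute value.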