One-shot singing technique conversion
In this paper we propose modifications to the neural network framework AutoVC for the task of singing technique conversion. These include a pretrained singing technique encoder that extracts technique information, on which a decoder is conditioned during training. By swapping out the source singer's technique information for that of the target during conversion, the input spectrogram is reconstructed with the target's technique. We document the beneficial effects of omitting the latent loss, the importance of sequential training, and our process for fine-tuning the bottleneck. We also conducted a listening study in which participants rated the technique specificity of converted voices as well as their naturalness. From this we assess how effective the technique conversions are, how different conditions affect them, and the model's ability to reconstruct its input data.
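The conversion mechanism described above can be illustrated with a minimal sketch. This is not the paper's implementation: the real model uses learned neural encoders and decoders operating on mel spectrograms, whereas the functions below are hypothetical NumPy stand-ins that only show the data flow, in particular how the technique embedding is swapped at conversion time while the content representation is kept.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the AutoVC-style modules (illustrative only).
def content_encoder(spec):
    # Bottleneck: keep only a low-dimensional summary of the spectrogram.
    return spec.mean(axis=0)[:8]

def technique_encoder(spec):
    # Stand-in for the pretrained technique encoder: a fixed-size embedding.
    return spec.std(axis=0)[:4]

def decoder(content, technique):
    # Reconstruct a spectrogram-like array conditioned on both embeddings.
    return np.outer(np.concatenate([content, technique]), np.ones(16))

source_spec = rng.normal(size=(100, 80))   # source singer's spectrogram
target_spec = rng.normal(size=(100, 80))   # target singer's spectrogram

# Training-style reconstruction: condition on the source's own technique.
recon = decoder(content_encoder(source_spec),
                technique_encoder(source_spec))

# Conversion: keep the source content, swap in the target's technique.
converted = decoder(content_encoder(source_spec),
                    technique_encoder(target_spec))
```

Because only the technique embedding changes between the two decoder calls, any difference between `recon` and `converted` is attributable to the swapped technique information, which is the core idea of the conversion step.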