Electrical Engineering and Systems Science > Audio and Speech Processing

Title: A Modularized Neural Network with Language-Specific Output Layers for Cross-lingual Voice Conversion

Abstract: This paper presents a cross-lingual voice conversion framework that adopts a modularized neural network. The modularized neural network has a common input structure shared by both languages and two separate output modules, one for each language. The idea is motivated by the fact that the phonetic systems of languages are similar, because humans share a common vocal production system, but acoustic renderings, such as prosody and phonotactics, vary considerably from language to language. The modularized neural network is trained to map Phonetic PosteriorGrams (PPGs) to acoustic features for multiple speakers, and is conditioned on a speaker i-vector to generate the desired target voice. We validate the idea between English and Mandarin in objective and subjective tests. In addition, a mixed-lingual PPG derived from a unified English-Mandarin acoustic model is proposed to capture the linguistic information of both languages. We find that the proposed modularized neural network significantly outperforms the baseline approaches in terms of speech quality and speaker individuality, and that the mixed-lingual PPG representation further improves conversion performance.
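The abstract describes a network with a shared input module and per-language output modules, fed PPG frames and conditioned on a speaker i-vector. The following is a minimal NumPy sketch of that modular structure only; all dimensions, weight initializations, and the single-layer forward pass are hypothetical illustrations, not the paper's actual model or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration (not taken from the paper).
PPG_DIM, IVEC_DIM, HID_DIM, ACOUSTIC_DIM = 400, 100, 256, 80

# Shared input module: a single dense layer over [PPG frame; i-vector].
W_shared = rng.standard_normal((PPG_DIM + IVEC_DIM, HID_DIM)) * 0.01

# Language-specific output modules: separate projection weights per language.
W_out = {
    "en": rng.standard_normal((HID_DIM, ACOUSTIC_DIM)) * 0.01,
    "zh": rng.standard_normal((HID_DIM, ACOUSTIC_DIM)) * 0.01,
}

def convert(ppg, ivec, lang):
    """Map a sequence of PPG frames to acoustic features for one language.

    The i-vector is tiled across frames so every frame is conditioned
    on the same target-speaker embedding.
    """
    x = np.concatenate([ppg, np.tile(ivec, (ppg.shape[0], 1))], axis=1)
    h = np.tanh(x @ W_shared)   # shared hidden representation (both languages)
    return h @ W_out[lang]      # language-specific output projection

ppg = rng.standard_normal((5, PPG_DIM))   # 5 frames of (mixed-lingual) PPG
ivec = rng.standard_normal(IVEC_DIM)      # target-speaker i-vector
feats_en = convert(ppg, ivec, "en")
print(feats_en.shape)
```

The point of the sketch is the weight sharing: both languages pass through `W_shared`, while only the output projection differs, mirroring the paper's claim that phonetic structure is largely language-independent while acoustic rendering is not.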
Comments: Accepted for publication at IEEE ASRU Workshop 2019
Subjects: Audio and Speech Processing (eess.AS)
Cite as: arXiv:1910.00496 [eess.AS]
  (or arXiv:1910.00496v1 [eess.AS] for this version)

Submission history

From: Emre Yilmaz [view email]
[v1] Tue, 1 Oct 2019 15:52:01 GMT (212kb,D)
