Hélène Loevenbruck
GIPSA-lab
JEP poster session P5, Thursday 12 June, 10:30-12:30
Paper 1591
Influence of lexical frequencies in French and Drehu on the acquisition of word-initial consonants
- Julia Monnin (Université de la Nouvelle-Calédonie and GIPSA-lab)
- Hélène Loevenbruck (GIPSA-lab)
- Abstract: This study extends a cross-linguistic collaboration on phonological development, which compares the production of word-initial obstruents across sets of languages that have comparable consonants differing in overall frequency, or in the frequency with which they occur in analogous sound sequences. Comparing across languages makes it possible to disentangle the influence of language-specific distributional patterns on consonant mastery from the effects of more general phonetic constraints on development. To extend the comparison to French, we counted type frequencies in French databases and ran a preliminary experiment with French-acquiring two-year-old children. In preparation for studying the effects of exposure to Drehu in Drehu-French bilingual children, we also counted frequencies in Drehu.
- paper
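The type-frequency counts mentioned in the abstract can be illustrated with a minimal sketch: given a lexicon of phonemic transcriptions, count how often each word-initial segment occurs across lexical types. The function name and toy lexicon below are hypothetical, not taken from the study's databases.

```python
from collections import Counter

def initial_consonant_frequencies(lexicon):
    """Relative type frequency of each word-initial segment in a lexicon."""
    counts = Counter(word[0] for word in lexicon if word)
    total = sum(counts.values())
    return {seg: n / total for seg, n in counts.items()}

# Toy lexicon of made-up phonemic transcriptions (illustrative only)
lexicon = ["tapa", "kali", "tito", "pula", "kara", "tema"]
freqs = initial_consonant_frequencies(lexicon)  # e.g. freqs["t"] == 0.5
```

A real study would of course run this over full lexical databases and may count sequence-level (e.g. CV) frequencies rather than bare initial segments.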
JEP oral session O5 (Corpus), Thursday 12 June, 16:30-17:30
Paper 1606
Improving the conversion of whispered speech recorded with a NAM sensor into audible speech
- Viet-Anh Tran (GIPSA-lab)
- Gérard Bailly (GIPSA-lab)
- Hélène Loevenbruck (GIPSA-lab)
- Christian Jutten (GIPSA-lab)
- Abstract: The NAM-to-speech conversion proposed by Toda and colleagues, which converts Non-Audible Murmur (NAM) into audible speech by statistical mapping trained on aligned corpora, is a very promising technique, but its performance is still insufficient. In this paper, we present our current work to improve the intelligibility and naturalness of speech synthesized from whispered speech with this technique. The first system improves F0 estimation and voicing decision: a simple neural network detects voiced segments in the whisper, while a GMM estimates a continuous melodic contour trained on voiced segments. In the second system, we integrate visual information to improve spectral estimation, F0 estimation, and voicing decision.
- paper
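The two-stage idea in the abstract (a network decides voiced/unvoiced; a GMM regresses a continuous melodic contour) can be sketched in miniature. This is NOT the authors' or Toda's actual models: the single-neuron voicing classifier, its weights, the one-dimensional features, and the hand-built GMM components below are all illustrative stand-ins for trained models.

```python
import math

def voicing_decision(frame_energy, zero_crossings, w=(4.0, -3.0), bias=-1.0):
    """Single-neuron stand-in for the voicing detector: a logistic unit over
    two acoustic features (energy, zero-crossing rate). Weights are made up."""
    z = bias + w[0] * frame_energy + w[1] * zero_crossings
    return 1.0 / (1.0 + math.exp(-z)) > 0.5

def estimate_f0(frame_feature, components):
    """GMM-regression-style F0 estimate: posterior-weighted average of each
    component's F0 mean, given a scalar whisper feature. `components` is a
    list of (weight, feature_mean, feature_var, f0_mean) tuples."""
    posts = []
    for w, mu, var, _ in components:
        d = frame_feature - mu
        posts.append(w * math.exp(-0.5 * d * d / var) / math.sqrt(2 * math.pi * var))
    total = sum(posts) or 1.0
    return sum(p / total * f0 for p, (_, _, _, f0) in zip(posts, components))

# Hand-built two-component "GMM" (weight, feat_mean, feat_var, f0_mean)
comps = [(0.5, 0.0, 1.0, 100.0), (0.5, 2.0, 1.0, 200.0)]
is_voiced = voicing_decision(frame_energy=1.0, zero_crossings=0.1)
f0 = estimate_f0(0.0, comps) if is_voiced else 0.0
```

In the actual mapping approach, both stages operate on multidimensional spectral features and the GMM is trained on aligned whisper/speech frames; the frame near a component's mean here simply pulls the estimate toward that component's F0.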