
User-adaptive Speech Synthesizer in Human-Computer Interaction

LOCATION: TEL, Auditorium 3 (20th floor), Ernst-Reuter-Platz 7, 10587 Berlin

DATE/TIME: 23.01.2017, 15:00-15:45

SPEAKER: Jin YAO (TU Berlin)

ABSTRACT:

Speech assistants are becoming increasingly popular, but they still cannot interact vocally with humans in a human-like way. Previous studies suggest that humans tend to adapt to their interlocutor during conversation, and that speakers who adapt are preferred over those who do not. Through a conversation experiment between 31 subjects and the recorded speech of 4 confederates, this study first verifies that subjects adapt their acoustic-prosodic features, i.e. global intensity, speaking rate, and mean pitch, when speaking to the 4 different interlocutors in the dialogues. A user-adaptive communication model derived from these observations is then used to adjust the synthetic speech, and a second online experiment is conducted in which the subjects compare the user-adaptive speech synthesizer with the original, non-adaptive one. The results show that the user-adaptive communication model improves the synthesizer's performance in 28.13% of the test cases; this is not yet sufficient to replace the current system, and further work on the model is needed in the area of communication accommodation.
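The abstract does not specify how the three acoustic-prosodic features were measured, so the following is only a minimal sketch of how global intensity, speaking rate, and mean pitch might be estimated from a recording. It assumes the librosa audio library; the input file name is hypothetical, and the onset-based rate estimate is merely a rough proxy for syllable rate.

import numpy as np
import librosa

def global_prosodic_features(path):
    # Load the recording at its native sampling rate.
    y, sr = librosa.load(path, sr=None)
    duration = len(y) / sr

    # Global intensity: mean frame-level RMS energy, expressed in dB.
    rms = librosa.feature.rms(y=y)[0]
    intensity_db = float(np.mean(librosa.amplitude_to_db(rms, ref=1.0)))

    # Mean pitch: average F0 over voiced frames, via the pYIN tracker.
    f0, voiced, _ = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
    pitch_mean_hz = float(np.nanmean(f0[voiced])) if np.any(voiced) else float("nan")

    # Speaking rate: onsets per second as a crude syllable-rate proxy.
    onsets = librosa.onset.onset_detect(y=y, sr=sr)
    rate_per_s = len(onsets) / duration

    return {"intensity_db": intensity_db,
            "pitch_mean_hz": pitch_mean_hz,
            "rate_per_s": rate_per_s}

# Hypothetical usage: compare a subject's features against the synthesizer's.
print(global_prosodic_features("subject_utterance.wav"))

An adaptive synthesizer along the lines described in the talk would shift its own intensity, rate, and pitch settings toward values measured this way from the user's speech.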
