
Reviewed Journal Papers


Modeling Input Modality Choice in Mobile Graphical and Speech Interfaces
Citation key schaffer2015a
Author Schaffer, Stefan and Schleicher, Robert and Möller, Sebastian
Pages 21–34
Year 2015
ISSN 1071-5819
DOI 10.1016/j.ijhcs.2014.11.004
Journal International Journal of Human-Computer Studies
Volume 73
Number 3
Month March
Note print/online
Abstract In this paper, we review three experiments with a mobile application that integrates graphical input via a touch screen with a speech interface, and we develop a model of input modality choice in multimodal interaction. The model aims to enable simulation of multimodal human–computer interaction for automatic usability evaluation. The experimental results indicate that modality efficiency and input performance are important moderators of modality choice. Accordingly, we establish a utility-driven model that provides probability estimates of modality usage based on the parameters of modality efficiency and input performance. Four variants of the model that differ in training data are fitted by means of Sequential Least Squares Programming. The analysis reveals a considerable fit regarding averaged modality usage; when applied to individual modality usage profiles, the accuracy decreases significantly. An application example demonstrates how the modality choice mechanism can be deployed to simulate interaction for automatic usability evaluation. Results and possible limitations are discussed.
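The abstract describes fitting a utility-driven choice model with Sequential Least Squares Programming (SLSQP). The following is a minimal sketch of that kind of fit using SciPy's SLSQP solver; the logistic utility form, the predictor values, and the observed choice probabilities below are illustrative assumptions, not the paper's actual model or data.

```python
# Hedged sketch: fit a simple utility-based modality-choice model with SLSQP.
# The functional form and the toy data are assumptions for illustration only.
import numpy as np
from scipy.optimize import minimize

# Hypothetical observations: columns are (modality efficiency, input
# performance); p_obs is the observed probability of choosing speech.
X = np.array([[0.2, 0.9], [0.5, 0.6], [0.8, 0.3], [0.9, 0.1]])
p_obs = np.array([0.15, 0.40, 0.70, 0.85])

def p_speech(w, X):
    """Logistic choice probability from a linear utility of the predictors."""
    u = w[0] + X @ w[1:]
    return 1.0 / (1.0 + np.exp(-u))

def sse(w):
    """Sum of squared errors between predicted and observed usage."""
    return np.sum((p_speech(w, X) - p_obs) ** 2)

# SLSQP minimizes the squared error over the three utility weights.
res = minimize(sse, x0=np.zeros(3), method="SLSQP")
print(res.success, np.round(p_speech(res.x, X), 2))
```

In the same spirit as the paper's four model variants, one could refit `w` on different subsets of training data and compare the resulting probability estimates against averaged versus individual usage profiles.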

