Reviewed Conference Papers
| Citation key | zequeirajimenez2019c |
| --- | --- |
| Author | Zequeira Jiménez, Rafael and Llagostera, Anna and Naderi, Babak and Möller, Sebastian and Berger, Jens |
| Book title | Companion Proceedings of The 2019 World Wide Web Conference |
| Pages | 1138–1143 |
| Year | 2019 |
| ISBN | 978-1-4503-6675-5 |
| DOI | 10.1145/3308560.3317084 |
| Address | New York, NY, USA |
| Month | May |
| Publisher | ACM |
| Series | WWW '19 |
| How published | Full paper |
| Abstract | Crowdsourcing is a powerful tool for conducting subjective user studies with large numbers of participants. Collecting reliable annotations about the quality of speech stimuli is challenging: the task itself is highly subjective, and crowdsourcing users work without supervision. This work investigates intra- and inter-listener agreement within a subjective speech quality assessment task. To this end, a study was conducted both in the laboratory and in crowdsourcing, in which listeners were asked to rate speech stimuli with respect to their overall quality. Ratings were collected on a 5-point scale in accordance with ITU-T Rec. P.800 and P.808, respectively. The speech samples were taken from the database of ITU-T Rec. P.501 Annex D and were presented to the listeners four times. Finally, the crowdsourcing results were contrasted with the ratings collected in the laboratory. A strong and significant Spearman correlation was obtained when contrasting the ratings collected in the two environments. Our analysis shows that while inter-rater agreement increased the more often listeners performed the assessment task, intra-rater reliability remained constant. Our study setup helped to overcome the subjectivity of the task, and we found that disagreement can, to some extent, represent a source of information. |
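For reference, the fields above can be assembled into a BibTeX entry. The sketch below assumes an `@inproceedings` entry type (inferred from the book title and series); the title field is left empty because the paper title is not part of this record.

```bibtex
@inproceedings{zequeirajimenez2019c,
  author    = {Zequeira Jim{\'e}nez, Rafael and Llagostera, Anna and Naderi, Babak and M{\"o}ller, Sebastian and Berger, Jens},
  title     = {},  % title is not included in this record and is left blank
  booktitle = {Companion Proceedings of The 2019 World Wide Web Conference},
  series    = {WWW '19},
  pages     = {1138--1143},
  year      = {2019},
  month     = may,
  publisher = {ACM},
  address   = {New York, NY, USA},
  isbn      = {978-1-4503-6675-5},
  doi       = {10.1145/3308560.3317084}
}
```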