
Reviewed Conference Papers


A Crowdsourcing Approach to Evaluate the Quality of Query-based Extractive Text Summaries
Citation key iskender2019a
Authors Iskender, Neslihan and Gabryszak, Aleksandra and Polzehl, Tim and Hennig, Leonhard and Möller, Sebastian
Book title 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX)
Pages 1–3
Year 2019
ISSN 2472-7814
Location Berlin, Germany
Address Piscataway, NJ, USA
Month June
Note online
Publisher IEEE
Series QoMEX
How published Short paper
Abstract High cost and time consumption are concurrent barriers to research on and application of automated summarization. In order to explore options to overcome these barriers, we analyze the feasibility and appropriateness of micro-task crowdsourcing for the evaluation of different summary quality characteristics and report ongoing work on the crowdsourced evaluation of query-based extractive text summaries. To do so, we assess and evaluate a number of linguistic quality factors such as grammaticality, non-redundancy, referential clarity, focus, and structure & coherence. Our first results imply that referential clarity, focus, and structure & coherence are the main factors affecting the summary quality perceived by crowdworkers. Further, we compare these results against an initial set of expert annotations that is currently being collected, as well as an initial set of automatic ROUGE quality scores for summary evaluation. Preliminary results show that ROUGE does not correlate with linguistic quality factors, regardless of whether they are assessed by the crowd or by experts. Further, crowd and expert ratings show the highest degree of correlation when assessing low-quality summaries; assessments increasingly diverge for high-quality judgments.
