TU Berlin

Quality and Usability Lab


Reviewed Conference Papers

A Crowdsourcing Approach to Evaluate the Quality of Query-based Extractive Text Summaries
Citation key iskender2019a
Author Iskender, Neslihan and Gabryszak, Aleksandra and Polzehl, Tim and Hennig, Leonhard and Möller, Sebastian
Title of Book 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX)
Pages 1–3
Year 2019
ISSN 2472-7814
Location Berlin, Germany
Address Piscataway, NJ, USA
Month jun
Note online
Publisher IEEE
Series QoMEX
How Published Short paper
Abstract High cost and time consumption are concurrent barriers to research on and application of automated summarization. In order to explore options to overcome these barriers, we analyze the feasibility and appropriateness of micro-task crowdsourcing for the evaluation of different summary quality characteristics and report ongoing work on the crowdsourced evaluation of query-based extractive text summaries. To do so, we assess and evaluate a number of linguistic quality factors such as grammaticality, non-redundancy, referential clarity, focus, and structure & coherence. Our first results imply that referential clarity, focus, and structure & coherence are the main factors affecting the summary quality perceived by crowd workers. Further, we compare these results with an initial set of expert annotations that is currently being collected, as well as with initial scores from the automatic quality metric ROUGE for summary evaluation. Preliminary results show that ROUGE does not correlate with the linguistic quality factors, regardless of whether they are assessed by the crowd or by experts. Further, crowd and expert ratings show the highest degree of correlation when assessing low-quality summaries; assessments increasingly diverge when attributing high-quality judgments.
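
As a rough illustration of the comparison described in the abstract (a minimal sketch, not the authors' code; the ROUGE scores and crowd ratings below are hypothetical, and the exact correlation measure used in the paper is not stated here), one could correlate per-summary ROUGE values with mean crowd ratings for a single linguistic quality factor as follows (Python):

    # Minimal sketch: rank correlation between automatic ROUGE scores and
    # mean crowd ratings for one linguistic quality factor (hypothetical data).
    from scipy.stats import spearmanr

    rouge_1_f1 = [0.42, 0.31, 0.55, 0.48, 0.27]   # per-summary ROUGE-1 F1, assumed precomputed
    crowd_clarity = [3.8, 2.9, 3.1, 4.2, 3.5]     # mean crowd ratings for referential clarity (1-5 scale)

    rho, p = spearmanr(rouge_1_f1, crowd_clarity)
    print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

A weak or non-significant correlation in such a comparison would be consistent with the abstract's preliminary finding that ROUGE does not correlate with the linguistic quality factors.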
