
Neslihan Iskender

Research Group

Crowdsourcing and Open Data

 

Teaching

  • Study Project Quality & Usability (Since SS 2018)
  • Interdisziplinäres Medienprojekt (Interdisciplinary Media Project) (Since SS 2018)
  • Usability Engineering (Exercise SS 2018)

 

Biography

Neslihan Iskender received her Bachelor and Master of Science degrees in Industrial Engineering and Management from the Karlsruhe Institute of Technology. During her studies, she focused on managing new technologies and innovation management. Since May 2017, she has been employed as a research assistant at the Quality and Usability Lab, where she is working towards a PhD in the field of crowdsourcing. Her research topics are:

  • Crowd assessments: Usability, UX, QoE, Quality
  • Real-time interaction, human computation as a service (HuaaS)
  • Hybrid Workflows for micro-task crowdsourcing
  • Internal Crowdsourcing

 

Current Projects

 

Past Projects

 

Contact

E-Mail: neslihan.iskender@tu-berlin.de

Phone: +49 (30) 8353-58347 

Fax: +49 (30) 8353-58409 

 

Address

Quality and Usability Lab

Deutsche Telekom Laboratories

Technische Universität Berlin

Ernst-Reuter-Platz 7

D-10587 Berlin, Germany 

 

 

Publications

2020

Iskender, Neslihan and Polzehl, Tim and Möller, Sebastian (2020). Best Practices for Crowd-based Evaluation of German Summarization: Comparing Crowd, Expert and Automatic Evaluation. Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems (Eval4NLP at EMNLP 2020), Association for Computational Linguistics (ACL), online, 164–175. Full paper.

Abstract: One of the main challenges in the development of summarization tools is summarization quality evaluation. On the one hand, human assessment of summarization quality conducted by linguistic experts is slow, expensive, and still not a standardized procedure. On the other hand, automatic assessment metrics are reported not to correlate highly enough with human quality ratings. As a solution, we propose crowdsourcing as a fast, scalable, and cost-effective alternative to expert evaluations for assessing the intrinsic and extrinsic quality of summarization, comparing crowd ratings with expert ratings and automatic metrics such as ROUGE, BLEU, or BertScore on a German summarization data set. Our results provide a basis for best practices in crowd-based summarization evaluation regarding major influential factors such as the best annotation aggregation method, the influence of readability and reading effort on summarization evaluation, and the optimal number of crowd workers needed to achieve results comparable to experts, especially when determining factors such as overall quality, grammaticality, referential clarity, focus, structure & coherence, summary usefulness, and summary informativeness.

2018

Barz, Michael and Büyükdemircioglu, Neslihan and Prasad Surya, Rikhu and Polzehl, Tim and Sonntag, Daniel (2018). Device-Type Influence in Crowd-based Natural Language Translation Tasks. Proceedings of the 1st Workshop on Subjectivity, Ambiguity and Disagreement (SAD) in Crowdsourcing 2018, and the 1st Workshop CrowdBias'18: Disentangling the Relation Between Crowdsourcing and Bias Management, 93–97.

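The 2020 paper above compares aggregated crowd ratings against expert ratings and automatic metrics. As a purely illustrative sketch (not the paper's actual pipeline, data, or aggregation method), the following Python snippet shows one way such a crowd-vs-expert comparison can be set up: per-summary crowd ratings are aggregated by their mean, and the aggregate is correlated with expert scores via Spearman's rank correlation. All identifiers and numbers are hypothetical.

```python
# Illustrative sketch: mean-aggregate crowd ratings per summary and check
# how well the aggregate tracks expert ratings (Spearman rank correlation).
from statistics import mean
from scipy.stats import spearmanr

# Hypothetical toy data: quality ratings of three summaries on a 1-5 scale.
crowd_ratings = {            # several crowd workers rated each summary
    "summary_1": [4, 5, 4],
    "summary_2": [2, 3, 2],
    "summary_3": [5, 4, 5],
}
expert_ratings = {           # one expert score per summary
    "summary_1": 5,
    "summary_2": 2,
    "summary_3": 4,
}

items = sorted(crowd_ratings)
crowd_aggregate = [mean(crowd_ratings[i]) for i in items]   # mean aggregation
expert = [expert_ratings[i] for i in items]

rho, p_value = spearmanr(crowd_aggregate, expert)
print(f"Spearman correlation (crowd vs. expert): {rho:.2f}, p = {p_value:.2f}")
```

Mean aggregation and Spearman correlation are only one possible choice here; which aggregation method and how many crowd workers work best is exactly what the paper investigates.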
