
Neslihan Iskender


Research Group

Crowdsourcing and Open Data

 

Teaching

  • Study Project Quality & Usability (since SS 2018)
  • Interdisziplinäres Medienprojekt (Interdisciplinary Media Project, since SS 2018)
  • Usability Engineering (exercise, SS 2018)

 

Biography

Neslihan Iskender received her Bachelor and Master of Science degrees in Industrial Engineering and Management from the Karlsruhe Institute of Technology. During her studies, she focused on managing new technologies and innovation management. Since May 2017, she has been employed as a research assistant at the Quality and Usability Lab, where she is working towards a PhD in the field of crowdsourcing. Her research topics are:

  • Crowd assessments: Usability, UX, QoE, Quality
  • Real-time interaction, human computation as a service (HuaaS)
  • Hybrid workflows for micro-task crowdsourcing
  • Internal Crowdsourcing

 


Contact

E-Mail: neslihan.iskender@tu-berlin.de

Phone: +49 (30) 8353-58347 

Fax: +49 (30) 8353-58409 

 

Address

Quality and Usability Lab

Deutsche Telekom Laboratories

Technische Universität Berlin

Ernst-Reuter-Platz 7

D-10587 Berlin, Germany 

 

 

Publications

2020

Towards a Reliable and Robust Methodology for Crowd-Based Subjective Quality Assessment of Query-Based Extractive Text Summarization
Citation key: iskender2020b
Author: Iskender, Neslihan and Polzehl, Tim and Möller, Sebastian
Book title: Proceedings of the 12th Language Resources and Evaluation Conference
Pages: 245–253
Year: 2020
Location: Marseille, France
Address: Paris, France
Month: May
Note: online
Publisher: European Language Resources Association (ELRA)
Series: LREC
How published: Full paper
Abstract: The intrinsic and extrinsic quality evaluation is an essential part of summary evaluation methodology, usually conducted in a traditional controlled laboratory environment. However, processing large text corpora using these methods proves expensive from both an organizational and a financial perspective. For the first time, and as a fast, scalable, and cost-effective alternative, we propose micro-task crowdsourcing to evaluate both the intrinsic and extrinsic quality of query-based extractive text summaries. To investigate the appropriateness of crowdsourcing for this task, we conduct intensive comparative crowdsourcing and laboratory experiments, evaluating nine extrinsic and intrinsic quality measures on 5-point MOS scales. Correlating the crowd and laboratory ratings reveals high applicability of crowdsourcing for the factors overall quality, grammaticality, non-redundancy, referential clarity, focus, structure & coherence, summary usefulness, and summary informativeness. Further, we investigate the effect of the number of repetitions of assessments on the robustness of the mean opinion score of crowd ratings, measured against the increase of the correlation coefficients between crowd and laboratory ratings. Our results suggest that the optimal number of repetitions in crowdsourcing setups, beyond which additional repetitions no longer yield an adequate increase in the overall correlation coefficients, lies between seven and nine for intrinsic and extrinsic quality factors.
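The repetition analysis described in the abstract can be made concrete with a minimal, self-contained Python sketch. This is not the paper's actual code: all ratings below are synthetic placeholders, and names such as n_summaries and max_reps are illustrative assumptions. The sketch averages the first k crowd ratings per summary into a MOS and correlates it with the laboratory MOS as k grows.

import numpy as np
from scipy.stats import spearmanr

# Synthetic stand-in data: n_summaries summaries, each rated on a 5-point
# scale by one (averaged) laboratory panel and by up to max_reps crowd workers.
rng = np.random.default_rng(0)
n_summaries, max_reps = 40, 12
true_quality = rng.uniform(1, 5, n_summaries)
lab_mos = np.clip(true_quality + rng.normal(0, 0.3, n_summaries), 1, 5)
crowd = np.clip(true_quality[:, None]
                + rng.normal(0, 1.0, (n_summaries, max_reps)), 1, 5)

# For each number of repetitions k, average the first k crowd ratings into
# a crowd MOS per summary and correlate it with the laboratory MOS.
for k in range(1, max_reps + 1):
    crowd_mos = crowd[:, :k].mean(axis=1)
    rho, _ = spearmanr(crowd_mos, lab_mos)
    print(f"k={k:2d} repetitions: Spearman rho = {rho:.3f}")

With noisy ratings like these, the correlation typically rises steeply for small k and then flattens; the paper reports that for its real crowd data this flattening sets in between seven and nine repetitions.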


2018

Barz, Michael and Büyükdemircioglu, Neslihan and Prasad Surya, Rikhu and Polzehl, Tim and Sonntag, Daniel (2018). Device-Type Influence in Crowd-based Natural Language Translation Tasks. Proceedings of the 1st Workshop on Subjectivity, Ambiguity and Disagreement (SAD) in Crowdsourcing 2018, and the 1st Workshop CrowdBias'18: Disentangling the Relation Between Crowdsourcing and Bias Management, 93–97.

