
Neslihan Iskender


Research Group

Crowdsourcing and Open Data

 

Teaching

  • Study Project Quality & Usability (since SS 2018)
  • Interdisciplinary Media Project (Interdisziplinäres Medienprojekt) (since SS 2018)
  • Usability Engineering (exercise, SS 2018)

 

Biography

Neslihan Iskender received her Bachelor's and Master's of Science degrees in Industrial Engineering and Management from the Karlsruhe Institute of Technology. During her studies, she focused on managing new technologies and innovation management. Since May 2017, she has been employed as a research assistant at the Quality and Usability Lab, where she is working towards a PhD in the field of crowdsourcing. Her research topics are:

  • Crowd assessments: usability, UX, QoE, quality
  • Real-time interaction, human computation as a service (HuaaS)
  • Hybrid workflows for micro-task crowdsourcing
  • Internal crowdsourcing

 

Current Projects

 

Past Projects

 

Contact

E-Mail: neslihan.iskender@tu-berlin.de

Phone: +49 (30) 8353-58347 

Fax: +49 (30) 8353-58409 

 

Address

Quality and Usability Lab

Deutsche Telekom Laboratories

Technische Universität Berlin

Ernst-Reuter-Platz 7

D-10587 Berlin, Germany 

 

 

Publications

A Crowdsourcing Approach to Evaluate the Quality of Query-based Extractive Text Summaries
Citation key iskender2019a
Author Iskender, Neslihan and Gabryszak, Aleksandra and Polzehl, Tim and Hennig, Leonhard and Möller, Sebastian
Booktitle 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX)
Pages 1–3
Year 2019
ISSN 2472-7814
Location Berlin, Germany
Address Piscataway, NJ, USA
Month June
Note online
Publisher IEEE
Series QoMEX
How published Short paper
Abstract High cost and time consumption are concurrent barriers to research on and application of automated summarization. To explore options for overcoming this barrier, we analyze the feasibility and appropriateness of micro-task crowdsourcing for the evaluation of different summary quality characteristics and report ongoing work on the crowdsourced evaluation of query-based extractive text summaries. To do so, we assess and evaluate a number of linguistic quality factors such as grammaticality, non-redundancy, referential clarity, focus, and structure & coherence. Our first results imply that referential clarity, focus, and structure & coherence are the main factors affecting the summary quality perceived by crowdworkers. Further, we compare these results against an initial set of expert annotations that is currently being collected, as well as an initial set of automatic ROUGE quality scores for summary evaluation. Preliminary results show that ROUGE does not correlate with the linguistic quality factors, regardless of whether they are assessed by the crowd or by experts. Further, crowd and expert ratings show the highest degree of correlation when assessing low-quality summaries; assessments increasingly diverge when attributing high-quality judgments.
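The abstract above reports correlating automatic ROUGE scores with human quality ratings. A minimal sketch of such a check in plain Python (the scores and ratings below are hypothetical, not taken from the paper, and the paper does not specify which correlation coefficient was used, so Pearson is assumed):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: ROUGE-1 F-scores and mean crowd ratings (scale 1-5)
# for five summaries. Illustrative values only, not results from the paper.
rouge_scores = [0.21, 0.35, 0.28, 0.44, 0.30]
crowd_ratings = [3.1, 2.8, 4.0, 3.2, 3.9]

r = pearson(rouge_scores, crowd_ratings)
print(f"Pearson r = {r:.3f}")
```

A value of r near zero on real data would match the paper's preliminary finding that ROUGE does not track the perceived linguistic quality factors.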


2018

Barz, Michael and Büyükdemircioglu, Neslihan and Prasad Surya, Rikhu and Polzehl, Tim and Sonntag, Daniel (2018). Device-Type Influence in Crowd-based Natural Language Translation Tasks. Proceedings of the 1st Workshop on Subjectivity, Ambiguity and Disagreement (SAD) in Crowdsourcing 2018, and the 1st Workshop CrowdBias'18: Disentangling the Relation Between Crowdsourcing and Bias Management, 93–97.

