
Dr.-Ing. Babak Naderi


Research Interests:

  • Motivation, Workload, and Performance in Crowdsourcing
  • Statistical Modeling and Applied Statistics
  • Speech Quality Assessment in Crowdsourcing
  • Gamification
  • Software Development (back-end, front-end, Android apps)

Biography:

Babak Naderi obtained his Dr.-Ing. degree (PhD) in September 2017 with the thesis Motivation of Workers on Microtask Crowdsourcing Platforms [2]. He holds a Master's degree in Geodesy and Geoinformation Science from Technische Universität Berlin, with a thesis on "Monte Carlo Localization for Pedestrian Indoor Navigation Using a Map Aided Movement Model" [3], as well as a Bachelor's degree in Software Engineering.

Since August 2012, Babak Naderi has been working as a research scientist at the Quality and Usability Lab of TU Berlin.

From 2013 to 2015, Babak took part in Software Campus [4], a BMBF-funded education program for future IT and development leadership involving Bosch, Datev, Deutsche Telekom AG, Holtzbrinck, SAP, Scheer Group, Siemens, and Software AG alongside highly ranked academic institutions. He participated by leading the CrowdMAQA [5] project.

In his dissertation, Babak studied the motivation of crowdworkers in detail. He developed the Crowdwork Motivation Scale [6] for measuring general motivation, based on the Self-Determination Theory of motivation; the scale has been validated in several studies. In addition, he studied the factors influencing motivation and the influence of different motivation types on the quality of outcomes. He also developed models for predicting workers' task selection strategies, including models for automatically predicting the expected workload associated with a task from its design, as well as task acceptance and performance.

Besides other research activities, Babak actively works on the standardization of methods for speech quality assessment in crowdsourcing environments within the P.CROWD work program of Study Group 12 of the ITU-T Standardization Sector [7].

He has reviewed for WWW, CSCW, MMSys, PQS, HCOMP, ICWE, QoMEX, the International Journal of Human-Computer Studies, Computer Networks, and Behaviour & Information Technology.

 

Selected talks:

  • "Motivation of Crowd Workers, does it matter?", Schloss Dagstuhl, Evaluation in the Crowd: Crowdsourcing and Human-Centred Experiments, November 2015.
  • "Motivation and Quality Assessment in Online Paid Crowdsourcing Micro-task Platforms", Schloss Dagstuhl, Crowdsourcing: From Theory to Practice and Long-Term Perspectives, September 2013.

 

Office Hours: By appointment

 

Address:

Quality and Usability Lab

Technische Universität Berlin
Ernst-Reuter-Platz 7
D-10587 Berlin

Tel.: +49 (30) 8353-54221
Fax: +49 (30) 8353-58409

babak.naderi[at]tu-berlin.de

Publications


Naderi, Babak and Mohtaj, Salar and Karan, Karan and Möller, Sebastian (2019). Automated Text Readability Assessment for German Language: A Quality of Experience Approach [13]. 11th International Conference on Quality of Multimedia Experience (QoMEX 2019). IEEE.


Zequeira Jiménez, Rafael and Naderi, Babak and Llagostera, A. and Berger, J. (2019). Modeling Worker Performance Based on Intra-rater Reliability in Crowdsourcing: A Case Study of Speech Quality Assessment [14]. 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 1–6.


Mittag, Gabriel and Liedtke, Louis and Iskender, Neslihan and Naderi, Babak and Hübschen, Tobias and Schmidt, Gerhard and Möller, Sebastian (2019). Einfluss der Position und Stimmhaftigkeit von verdeckten Paketverlusten auf die Sprachqualität [15]. Fortschritte der Akustik - DAGA 2019. Deutsche Gesellschaft für Akustik DEGA e.V., 950–953.

Link to publication [16]

Naderi, Babak and Mohtaj, Salar and Ensikat, Kaspar and Möller, Sebastian (2019). Subjective Assessment of Text Complexity: A Dataset for German Language [17]. CoRR

Link to original publication [18]

Grau, Paul and Naderi, Babak and Kim, Juho (2018). Personalized Motivation-supportive Messages for Increasing Participation in Crowd-civic Systems [19]. Proceedings of the ACM on Human-Computer Interaction. ACM.

Link to publication [20]

Naderi, Babak and Möller, Sebastian and Alnizami, Hanan and Corriveau, Philip J. and Tavakoli, Samira and Hossfeld, Tobias and Pinson, Margaret and McKnight, Patrick E. and Quartuccio, Jake and Nicholas, David and Saad, Michele (2018). Draft text for Technical Report "Subjective evaluation of quality of media with a crowdsourcing approach" [21]. ITU, 1–21.

Link to publication [22]

Naderi, Babak and Möller, Sebastian and Hossfeld, Tobias and Hirth, Matthias (2018). Draft text for P.CROWD Recommendation "Subjective evaluation of speech quality with a crowdsourcing approach" [23]. ITU, 1–21.

Link to publication [24]

Naderi, Babak and Möller, Sebastian and Hossfeld, Tobias and Hirth, Matthias (2018). P.808 Subjective evaluation of speech quality with a crowdsourcing approach [25]. 1–22.

Link to publication [26]

Naderi, Babak and Möller, Sebastian and Zequeira Jiménez, Rafael (2018). Evaluation of the Draft of P.CROWD Recommendation [27]. ITU, 1–8.

Link to publication [28]

Naderi, Babak (2018). Motivation of Workers on Microtask Crowdsourcing Platforms [29]. Springer.


Naderi, Babak and Möller, Sebastian and Mittag, Gabriel (2018). Speech Quality Assessment in Crowdsourcing: Influence of Environmental Noise [30]. 44. Deutsche Jahrestagung für Akustik (DAGA). Deutsche Gesellschaft für Akustik DEGA e.V., 229–302.

Link to publication [31]

Naderi, Babak and Möller, Sebastian (2017). Crowdsourcing speech quality assessment: Listening-opinion tests – Absolute Category Rating (ACR) [32]. ITU, 1–14.

Link to publication [33]

Egger-Lampl, Sebastian and Redi, Judith and Hoßfeld, Tobias and Hirth, Matthias and Möller, Sebastian and Naderi, Babak and Keimel, Christian and Saupe, Dietmar (2017). Crowdsourcing Quality of Experience Experiments [34]. Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments: Dagstuhl Seminar 15481, Dagstuhl Castle, Germany, November 22 – 27, 2015, Revised Contributions. Springer International Publishing, 154–190.

Link to publication [35]

Martin, David and Carpendale, Sheelagh and Gupta, Neha and Hoßfeld, Tobias and Naderi, Babak and Redi, Judith and Siahaan, Ernestasia and Wechsung, Ina (2017). Understanding the Crowd: Ethical and Practical Matters in the Academic Use of Crowdsourcing [36]. Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments: Dagstuhl Seminar 15481, Dagstuhl Castle, Germany, November 22 – 27, 2015, Revised Contributions. Springer International Publishing, 27–69.

Link to publication [37]

Naderi, Babak and Möller, Sebastian (2017). Speech quality assessment in crowdsourcing: Comparison with laboratory and recommendations on task design [38]. ITU-T SG 12 Meeting January 2017. ITU, 1–8.

Link to publication [39]
