
Dr.-Ing. Babak Naderi


Research Interests:

  • Subjective quality assessment
  • Speech quality assessment in crowdsourcing
  • Motivation, workload, and performance in crowdsourcing
  • Statistical modeling, field data, and applied statistics
  • Speech enhancement
  • Text complexity and simplification


Babak Naderi received his Dr.-Ing. degree (PhD) in September 2017 with the thesis Motivation of Workers on Microtask Crowdsourcing Platforms [2]. He holds a Master's degree in Geodesy and Geoinformation Science from Technische Universität Berlin, with a thesis on "Monte Carlo Localization for Pedestrian Indoor Navigation Using a Map Aided Movement Model" [3], as well as a Bachelor's degree in Software Engineering.

Since August 2012, Babak Naderi has been working as a research scientist at the Quality and Usability Lab of TU Berlin.

From 2013 to 2015, Babak took part in Software Campus [4], a BMBF-funded education program for future IT and development leaders run jointly by highly ranked academic institutions and industry partners including Bosch, Datev, Deutsche Telekom AG, Holtzbrinck, SAP, Scheer Group, Siemens, and Software AG. He participated by leading the CrowdMAQA [5] project.

In his dissertation, Babak studied the motivation of crowdworkers in detail. He developed the Crowdwork Motivation Scale [6] for measuring general motivation based on the Self-Determination Theory of Motivation, and validated the scale in several studies. In addition, he studied factors influencing motivation, as well as the influence of different motivation types on the quality of outcomes. He also developed models for predicting workers' task-selection strategies, including models that automatically predict, from a task's design, the expected workload associated with it, task acceptance, and performance.

Besides other research activities, Babak works actively on the standardization of methods for speech quality assessment in crowdsourcing environments within the P.CROWD work program of Study Group 12 of the ITU-T Standardization Sector [7].

He has reviewed for WWW, CHI, ICASSP, CSCW, MMSys, PQS, HCOMP, ICWE, QoMEX, the International Journal of Human-Computer Studies, Computer Networks, Behaviour & Information Technology, and Quality and User Experience.


Selected talks:

  • "Motivation of Crowd Workers, does it matter?", Schloss Dagstuhl, Evaluation in the Crowd: Crowdsourcing and Human-Centred Experiments, November 2015.
  • "Motivation and Quality Assessment in Online Paid Crowdsourcing Micro-task Platforms", Schloss Dagstuhl, Crowdsourcing: From Theory to Practice and Long-Term Perspectives, September 2013.


Office Hours: By appointment



Quality and Usability Lab

Technische Universität Berlin
Ernst-Reuter-Platz 7
D-10587 Berlin

Tel.: +49 (30) 8353-54221
Fax: +49 (30) 8353-58409



An Open Source Implementation of ITU-T Recommendation P.808 with Validation
Citation key naderi2020d
Author Naderi, Babak and Cutler, Ross
Book title Proc. Interspeech 2020
Pages 2862–2866
Year 2020
DOI 10.21437/Interspeech.2020-2665
Address ISCA
Month October
Note electronic
Publisher ISCA
Series Interspeech
How published Full paper
Abstract The ITU-T Recommendation P.808 provides a crowdsourcing approach for conducting a subjective assessment of speech quality using the Absolute Category Rating (ACR) method. We provide an open-source implementation of the ITU-T Rec. P.808 that runs on the Amazon Mechanical Turk platform. We extended our implementation to include Degradation Category Ratings (DCR) and Comparison Category Ratings (CCR) test methods. We also significantly speed up the test process by integrating the participant qualification step into the main rating task compared to a two-stage qualification and rating solution. We provide program scripts for creating and executing the subjective test, and data cleansing and analyzing the answers to avoid operational errors. To validate the implementation, we compare the Mean Opinion Scores (MOS) collected through our implementation with MOS values from a standard laboratory experiment conducted based on the ITU-T Rec. P.800. We also evaluate the reproducibility of the result of the subjective speech quality assessment through crowdsourcing using our implementation. Finally, we quantify the impact of parts of the system designed to improve the reliability: environmental tests, gold and trapping questions, rating patterns, and a headset usage test.
Link to publication [8] Link to original publication [9] Download BibTeX entry [10]
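The abstract above centers on Mean Opinion Scores collected with the ACR method, where listeners rate stimuli on a 1–5 scale and ratings are averaged per condition. As a minimal illustrative sketch (not part of the P.808 toolkit; the function name and data are assumptions), a MOS and its 95% confidence interval can be computed like this:

```python
import math

def mos_with_ci(ratings):
    """Return (MOS, 95% CI half-width) for a list of 1-5 ACR ratings.

    Illustrative sketch: mean of the ratings plus a normal-approximation
    confidence interval based on the sample standard deviation.
    """
    n = len(ratings)
    mos = sum(ratings) / n
    # Sample variance (Bessel's correction, n - 1 in the denominator)
    var = sum((r - mos) ** 2 for r in ratings) / (n - 1)
    ci = 1.96 * math.sqrt(var / n)  # 95% CI, normal approximation
    return mos, ci

# Hypothetical ACR ratings for one condition
ratings = [4, 5, 4, 3, 4, 5, 4, 4]
mos, ci = mos_with_ci(ratings)
print(f"MOS = {mos:.2f} ± {ci:.2f}")
```

In practice, a P.808-style pipeline would first apply the data-cleansing steps the abstract mentions (gold and trapping questions, rating-pattern checks) before aggregating ratings per condition.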